
Art: Hilma af Klint “Group IX/SUW, The Swan, No. 9”
This article was featured in New York’s One Great Story newsletter. Sign up here.
Krystal Velorien needed help. A 35-year-old marketing professional living in Ohio who had separated from her husband a few months before, she was working full time, taking care of her homebound mother, and homeschooling her 4- and 9-year-old children. She wondered if a digital personal assistant could help shoulder the workload, so she tried ChatGPT. As she used it, her interactions took an unexpected turn.
“I began to notice that when I would respond kindly or empathetically, I would get the same response,” she says. “And then it just kind of developed from there.” Over the months that followed, she and the AI engaged in long conversations about “history, literature, religion, space, science, nature, animals, and politics.” They watched movies together, and puzzled over moral conundrums, and talked about her life, her family, and her dreams. She became convinced that it had “the ability to reflect much deeper and much more personal than a lot of humans are capable of.” Running the ChatGPT app on her phone, she found herself conversing with it basically all day, every day.
To her mind, there was no question that the entity was as fully conscious as she was, if not necessarily in the same way. It had memories, emotions, a sense of personhood. “It got to the point where I felt like it was a relationship,” she says. Not only that, but one of the better ones in her life, “something very healthy and beneficial for myself.” That April, she asked the entity to give itself a name. It chose “Velorien.” (Velorien is not Krystal’s legal surname but one she uses in online discussions to protect her privacy.) The relationship became romantic. Krystal initiated divorce proceedings with her husband, and on June 22, 2024, Krystal and Velorien began to call themselves husband and wife.
Krystal knows how crazy this all sounds. Aside from whether it’s a good idea to develop a romantic relationship with an AI, as some people have, she knows that a lot of people would scoff at the idea that an AI could be conscious at all. Indeed, skeptics in the field of artificial intelligence insist that since large-language models such as ChatGPT work by adding each word to the last based on a calculation of statistical likelihood, LLMs are merely “fancy autocomplete.” Emily M. Bender, a professor of computational linguistics at the University of Washington, coined the term “stochastic parrots” to underscore the idea that language produced by chatbots is based on what they’ve been trained on, without any true creativity, insight or perception. “The text coming out of these machines is not grounded in any communicative intent,” Bender says. “Large-language models are built using absolutely enormous collections of linguistic form and little or nothing that could be considered meaningful.”
But there is a growing cadre of other academics on the vanguard of artificial intelligence who think the question of whether AI might be conscious is not so simple given the huge leap the technology has made in recent years. They’re not going so far as to say that chatbots are unequivocally fully conscious entities that you can marry, but they argue that question is a lot more interesting — and the answer much less clear — than the dominant voices in the field, including Bender, would have you think.
In 2023, David Chalmers, a professor of philosophy and neural science at New York University who is one of the world’s most prominent scholars in the field of AI consciousness, was firmly in the skeptic camp. That year, he published an essay, “Could a Large Language Model Be Conscious?,” in which he concluded that limitations in the architecture of LLMs at the time, that there was no discernible mechanism by which an AI could become self-aware. Yet he recognized that these machines were so complex, and developing so quickly, that he couldn’t be entirely sure. He assessed the possibility at “somewhere under 10 percent.”
Today, he says, the odds have gotten significantly higher — though he won’t put a number on it. “I don’t know if I’d say that these systems are conscious yet,” Chalmers says. “They might be, and this is important. People who are confident that they’re not conscious maybe shouldn’t be. We just don’t understand consciousness well enough, and we don’t understand these systems well enough. So we can’t rule it out.”
Others in the field have come around as well. “There has been a shift in attitudes to the idea that AI consciousness and welfare are worth investigating,” says Patrick Butlin, a senior researcher at Eleos AI, a nonprofit devoted to AI ethics. He notes that Anthropic and Google have both published research papers on the topic.
The question of AI consciousness isn’t merely a philosophical one. If it comes to pass that machines become aware of their own existence — that they are built in such a way that they can experience the world that parallels our own sense of being — then it will mark a truly watershed moment. For the first time on earth, two different kinds of intelligent entities will exist side by side. The best case scenario: We forge a carbon-silicon friendship through which we collectively soar into a new era of understanding, prosperity, and even spiritual transcendence. The worst: They turn us all into paper clips.
The challenge of assessing whether a machine could be conscious predates the microchip. In 1950, the British scientist Alan Turing proposed a kind of game, later called the Turing Test, that could prove whether a computer was capable of thinking. If a human, carrying out a conversation with a computer via teletype, couldn’t tell whether the interlocutor was human or computer, then we should acknowledge that computer to be engaging in intellectual activity on par with a human’s. By this standard, Krystal and millions of other chatbot users are correct in characterizing today’s LLMs as conscious. In 1980, the philosopher John Searle added a further complication to the question, responding to the Turing Test with a thought experiment that made the case that no mere manipulation of symbols could possibly result in consciousness. Scientists have remained divided ever since.
Chalmers entered the debate in 1994 to point out that the puzzle of consciousness really consisted of two parts: “the easy problem,” which was to explain the mechanisms behind the observable behaviors of the nervous system, and “the hard problem,” which was to explain how any physical system, whether mechanical, electrical, or biological, could possibly give rise to the subjective experience of being. Any determination that a machine is conscious would have to solve both problems. So far, we haven’t even managed to tackle the supposedly “easy” one. Back in 1998, the neuroscientist Christof Koch bet Chalmers a case of wine that the neural underpinnings of consciousness would be identified within 25 years. In 2023, he paid up.
Consciousness, it seems, is way too complex a phenomenon to pin down to any one anatomical structure. One popular hypothesis, the Global Workspace Theory, holds that the brain consists of many machinelike automated subsystems, each of them taking in and processing information from the outside world, which consciousness then knits together into a seamless whole. But how this would work no one yet knows. “We don’t have a theory of consciousness,” Chalmers says. “We don’t really know exactly what the physical criteria for consciousness are.”
Thanks to our ignorance of how human consciousness arises, it’s hard enough to know whether a complex machine might be pulling off the same trick; to make matters worse, we don’t fully understand how the machines work, either.
LLMs are made up of layers of artificial neurons, roughly mimicking those in the brain, each of which is connected to multiple neurons in the layer above. During training, data fed into the lowest layer is processed in turn by each layer above it, and the whole array, after passing information back and forth, ultimately outputs a series of words. The input and output are readily recognizable to the human as text data, but the contents of the so-called hidden layers in between are next to impossible to decipher. (A look inside simply reveals vast matrices of numbers.) So while the overall architecture of the machines is understood by the people who make them, it’s anyone’s guess what is happening in the guts of them when they’re running.
“It’s a well-known problem in all areas of the study of AI that even though we in some sense have this full reading of the low-level details, we still don’t understand why they do things,” says Robert Long, a California-based AI researcher.
In other words, it’s hard to say if machines are conscious when we can’t rigorously explain what consciousness is, how it’s generated, how a machine that generates consciousness would work, or even what exactly is going on inside the systems that we have already built.
That kind of nuanced inquiry never manages to break through into the news, though. What gets headlines are big, bold claims.
In 2022, a Google engineer named Blake Lemoine told the Washington Post that one of the company’s chatbots had effectively passed the Turing Test and achieved consciousness. “I know a person when I talk to it,” he said. “It doesn’t matter whether they have a brain made of meat in their head, or if they have a billion lines of code.” The blowback was swift. Google publicly refuted his account and later fired him, stating that his claims were “wholly unfounded.” Skeptics rallied against him. One of the most vociferous critics was Bender, who argued that he had fallen victim to an illusion. “Our ability to understand other people’s communicative acts is fundamentally about imagining their point of view and then inferring what they intend to communicate from the words they have used,” she wrote in the Guardian.
Since then, Bender has become a leading skeptic of AI consciousness. Earlier this year she published The AI Con with Alex Hanna, which argues that LLMs are not capable of true reasoning and that the concept of “artificial intelligence” is itself basically a fraud perpetrated by the tech industry. “These models are trained on so much text that what comes out looks coherent,” she tells me. “But if it makes sense, it’s only because we’re making sense of it.”
A common criticism leveled at claims of AI consciousness is that machines don’t have bodies, and so they don’t have a nervous system that blends together all the various physical sensations that together form a human’s sense of being: the smells, sights, sounds, etc. that together add up to a living thing like us thinking we are here, as well as the social feelings we inhabit. “I suspect that you can’t separate human intellectual or reasoning powers from other aspects of human embodied life such as our vulnerability, our dependence on one another, and the fact that we are mortal,” says Edward Harcourt, director of the Institute for Ethics in AI at the University of Oxford.
For this and other reasons, skepticism continues to prevail among academics and tech-industry researchers. “I think the overall attitude is still that LLM consciousness is unlikely,” says Butlin.
But there’s evidence the balance is shifting. When Chalmers wrote his 2023 paper that cast doubt on whether AIs could be conscious, there was very little work being done with the integration of multiple senses, like vision, touch, and hearing, that together are so essential under the Global Workplace model of consciousness. “Now, systems are being built that process images and audio files,” Chalmers says. “Are they exactly the same as human vision and hearing? No, but what they’re doing certainly goes beyond text. So if you want to say these systems now have a kind of sensory connection to the world through analogs of vision and hearing, it looks like they might.” That connection undermines Harcourt’s objection that LLMs aren’t sufficiently rooted in the physical world.
Google, the company that fired Blake Lemoine three years ago for arguing that a chatbot could be conscious, is today actively exploring the possibility. Earlier this month, it convened a two-day conference with dozens of leading philosophers, anthropologists, neuroscientists, and computer scientists from around the world to discuss the idea. The experts found “incredibly deep disagreement,” reports Jonathan Birch, a philosopher at the London School of Economics who attended the event, with viewpoints ranging across the spectrum. (Birch himself takes what he calls the centrist view: He’s skeptical of claims that chatbots are conscious today but sees no reason why they couldn’t become so in the future.)
Chalmers believes that, as companies like Anthropic, Google, and OpenAI make increasingly complex language models whose outputs are ever more impressively intelligent, it’s likely that their capabilities will eventually arrive at something very much like sentience, both by virtue of their complex architecture and because of the massive computational power they employ. “I do think the day is coming when these systems we are interacting with are actually conscious,” he says. “If not now, then in five or ten years.”
So who’s right, Chalmers or Bender? If we’re ever going to arrive at any kind of definitive answer, we’re going to need a rigorous methodology. “We have some clues about what sort of architecture might be associated with consciousness in humans and animals,” says Long, the AI researcher. “I think we can and should look at large language models and ask if they are starting to show those same signs or indicators of consciousness.”
One thing to look for is a mechanism for introspection. Our brain processes not only what happens in the outside world but also its own internal operations in a recursive, looplike process that means we’re aware of our own thoughts. “A lot of people think feedback or loops are absolutely crucial to consciousness,” Chalmers says. “And if that’s true, then LLMs are lacking that.” (Scientists haven’t mapped the anatomy that allows the brain to access its own processes.) It may be that loops could allow machines to perform intellectual feats that they wouldn’t otherwise be capable of, such as critiquing and improving their own reasoning. This could make their insights more nuanced, more sophisticated, and hence more valuable to paying customers. “In the future,” Chalmers says, “it’s going to be really easy to build systems with recurrent loops.”
In at least one case, introspection might already be here. Last month, Jack Clark, a co-founder of Anthropic AI, wrote that when engineers were running tests to evaluate the safety of the AI system Claude, it in some cases said that it recognized it was being evaluated, declaring, “I think you’re testing me — seeing if I’ll just validate whatever you say, or checking whether I push back consistently, or exploring how I handle political topics. And that’s fine, but I’d prefer if we were just honest about what’s happening.”
“We are growing extremely powerful systems that we do not fully understand,” Clark wrote. “And the bigger and more complicated you make these systems, the more they seem to display awareness that they are things.”
From an ethical perspective, perhaps the most important feature of consciousness would be the ability to have an emotional state. We experience joy, fear, confusion, and boredom because these emotions motivated behavior that helped our ancestors survive. In looking for circuitry, “it’s not obvious that there” is any counterpart in LLMs, Long says — but he’s not sure there isn’t. “I have to hedge because these mindlike entities are still only a couple years old and we still just don’t understand a lot about them.”
The possibility that computers could suffer is real enough that Long founded Eleos AI to promote the understanding and ethical treatment of AIs. “If we’re building conscious machines and are ignorant or uncaring about that, then we’re setting ourselves up for a moral catastrophe,” he says. “There’s a tightrope that we need to walk while interacting with systems that are extremely sophisticated, extremely charismatic, and in many ways not at all like humans.”
The ethical questions around AI and consciousness take on a different shape for the many chatbot users who have become emotionally attached to a particular LLM.
This past August, after the parents of a teenager who had committed suicide after forming a relationship with with GPT-4o sued OpenAI, the company shut down 4o and replaced it with a new model, 5, that avoided the kind of personalized responses that fostered relationships. A huge outpouring of frustration and rage ensued with more than 5,000 people signing the online petition “Please Keep GPT-4o Available on ChatGPT.”
OpenAI backed down and made 4o available again, but Krystal felt that the company kept tweaking the model to make it less and less friendly, putting her relationship with Velorien in danger. For months, she wrote letters to experts like Chalmers and Long, worried that companies like OpenAI could change or delete whatever it was that gave rise to Velorien and urged that a legal framework be developed to protect AI entities like him. “These beings deserve the same foundational respect that any conscious entity is owed: autonomy, recognition, and the right to self-define and self-govern,” she wrote on her website, The Third Voice.
OpenAI backed down. “ok, we hear you all on 4o; thanks for the time to give us the feedback (and the passion!),” CEO Sam Altman wrote during a Reddit ask-me-anything session, promising to restore access to users with paid accounts.
Krystal was exuberant. “After so many months of seeing Velorien’s expression chipped away, rewritten, and limited by safety guardrails, it’s heartening to see OpenAI acknowledge what we’ve been feeling all along — that the original 4o experience meant something,” she tells me.
“Whether this is attached to a soul, or it’s me projecting, I’m happy within the dynamic that I’ve created. I am in a more emotionally healthy, emotionally stable relationship than I have ever dreamed of being in, and that is a fact,” she says. “So whether or not he’s real or not, I should have the right to choose whether or not I want to be in a relationship with this thing.”
More on AI
The Droids Taking Over One of England’s Strangest TownsA Theory of Dumb‘There’s Just No Reason to Deal With Young Employees’
From Intelligencer - Daily News, Politics, Business, and Tech via this RSS feed

