A few months ago, I was in my living room, TV on in the background, when I heard the first few notes of a familiar song. It was “Fool” by Perfume Genius, a track about the tenuous, often exploitative relationship between gay men and straight culture. I’d listened to it hundreds of times before, always moved by how it evokes a complex world of human emotion—of feeling both venerated and cast aside, exalted and objectified. I looked up at the screen and my stomach dropped. It was an ad for ChatGPT.

The sheer cognitive dissonance—hearing a song so deeply human, so personal and reflective, used to advertise a chatbot—was unsettling. What was it doing in a commercial for a technology that I fear is only good for alienating us from ourselves?

The ad was part of a campaign by OpenAI that launched last year, each of its seven installments showing how ChatGPT might be integrated into daily life. In the spot titled “Dish,” a young, curly-haired white guy in an apartment makes dinner for a pretty brunette white gal, giving her a taste of pasta. The woman takes a bite, thinks for a second, brow furrowed, and says it’s “really good.” Then the music swells, and a single line is superimposed on the screen:

I need a recipe that says, “I like you, but I want to play it cool.”

Our protagonist’s prompt to ChatGPT is followed by a long answer extruded by the chatbot, directions we are meant to believe are now helping him impress his date. The other stars of OpenAI’s campaign take on similarly mundane challenges. One guy strains to do a pullup; a student tries to focus on her work; two teenagers attempt to fix their dad’s truck. Framed in close-up and shot on 35-millimeter film, wearing outfits cleverly lacking the trend markers of our moment, these people seem less like envoys from some AI utopia and more like old-fashioned main characters—it just so happens that their hero’s journey would not be possible without ChatGPT.

The campaign marks a shift in how AI companies are presenting their products. Older ads for chatbots didn’t shy away from showing the technology: In one, a man asked Google’s Gemini to write a letter from his young daughter to an Olympian the girl admires; in another, a woman asked Meta’s AI for help organizing a book club meeting to discuss Moby-Dick, which we come to understand she likely hasn’t read. But in OpenAI’s latest spots, the interaction with the technology itself is conspicuously missing. We don’t see the protagonists using a phone or a computer. We don’t see anyone struggle, stop, consult the bot, then return to their activity. The chatbot’s presence is unseen and entirely frictionless; it is an organic part of the process of trying to do anything at all.

Over the last few years, as the race to dominate generative AI has consumed an enormous amount of capital, companies have been trying to figure out how to market their products to a potentially vast consumer base. It hasn’t been a straightforward process. Google eventually pulled the Gemini letter-writing ad from the air because the reaction to it was so poor—why would anyone choose to outsource that kind of thing to AI, thereby robbing themselves of the opportunity to spend quality time with their daughter, and the child of the chance to express herself? That ad was mentioned in a New York Times article that asked, “Why Does Every Commercial for A.I. Think You’re a Moron?” We, it seems, don’t like being treated as if a computer program is smarter than we are, or the implication that every task is equally rote and mundane. Some things, like fostering your child’s burgeoning passions, hold deeper meaning, and the point isn’t merely to get them done—it’s to do them.

But audiences’ discomfort with AI marketing goes beyond feeling insulted. One recent survey by the Pew Research Center found that only 17 percent of Americans believe AI will positively affect their lives, while another found that 53 percent believe it will worsen creative thinking, and 50 percent think it will harm people’s ability to form meaningful relationships. Such qualms rest, in part, on the experience of the last dozen years, as the harms of social media have cast a pall on the unbridled techno-optimism of the early 2000s. A meta-analysis of 71 psychology studies, for example, showed that consumption of short-form video damages cognition, attention spans, impulse control, and overall mental health. This won’t be surprising to anyone who’s ever gotten sucked into a phone trance and emerged from it dumb and disgusted. Knowledge, both scientific and lived, of social media’s deleterious effects is finally prompting people to fight back—trading their smartphones for flip phones, bricking or gray-screening them, or developing ways to pay more attention.

This upswell of anti-tech sentiment feels like a backlash after the pandemic’s physical isolation led many of us to spend more time tethered to digital tools. And it’s being exacerbated by anxieties that generative AI will decimate the labor force far sooner than we can adapt. Even tech workers, as the New Yorker’s Kyle Chayka reported, worry that AI will trap them in a “permanent underclass.” Anthropic CEO Dario Amodei, in a recent essay, speculated on ways that we might “buy time” before the possibility that AI enslaves or destroys humanity.

But meanwhile, AI companies have products to sell, and marketing campaigns that use a faux indie film aesthetic can make generative AI seem cozy and helpful, not predatory and dystopic. As Vauhini Vara, author of the 2025 book Searches: Selfhood in the Digital Age, told me, “It can’t be coincidental that these ads that normalize AI use come at a time when people are talking a lot about how abnormal they find it all.”

It’s not just OpenAI: Anthropic recently produced a campaign for Claude that is both techno-optimistic and pseudo-humanist. It begins with a quick sequence of video clips—a falling piano, a city blacking out, an ambulance speeding through streets blaring its siren—as a voice repeatedly intones, “There’s never been a worse time.” But then this barrage of images gives way to a slower montage; now, people are running, playing music, fixing bikes, climbing in the desert, taking a dance class, launching a rocket into the air. Sometimes they look at screens—but overwhelmingly, they are engaged in the physical world. Claude is represented not by computers or phones, but by white line graphics tracing people’s routes up the boulders, the paths of their limbs as they dance, and the airflow coming in and out of their lungs. Now, the voice says, “There’s never been a better time…to have a problem, to be stuck, to be overwhelmed, to be impatient, to be out of ideas, to be out of your depth, out of breath.”

Alex Hanna, co-author with Emily M. Bender of the 2025 book The AI Con, told me these ads might signal a resurgence of the push for “ubiquitous computing.” This movement—spearheaded in the late 1980s by Mark Weiser, then the chief technology officer of Xerox’s famed Palo Alto Research Center—sprang from the idea that technology would evolve only if its developers sought to make it an indistinguishable part of the environment. Anthropic has taken this concept to extremes; at a recent “zero-slop” pop-up in New York City’s West Village, the company disallowed phones, an attempt to further distance its product from things (slop, phones, screens) that carry negative connotations and to imply that, somehow, AI products are just in the air. Meta’s Super Bowl ad campaign gestured toward a similar concept, showing people engaged in various activities while wearing Meta-powered glasses, interacting with the chatbot without having to stop whatever they’re doing. (“Hey Meta, is it okay to eat mud?” asks a cyclist after she crashes on a dirt trail.)

This technology, these campaigns say, is not at all like the technologies you fear. In fact, you have nothing to fear.

We’ve been here before. In 1958, IBM asked designers Charles and Ray Eames to produce a short film about the computer. At the time, the company’s room-size computing machines were commonly associated with nuclear weapons systems. IBM needed a way to make its product more palatable, and the Eameses, with their modernist optimism in the power and possibilities of technology, were the ideal candidates to create such propaganda. They made a film titled The Information Machine, kicking off a partnership with IBM that lasted almost 20 years. Their films were odes to the computer; in the best-known one, Powers of Ten, the Eameses exponentially zoom out and then back in on a single point on Earth, illustrating and demystifying computers’ processing power. They were also paeans to the creative capacity of humans. They used decidedly approachable aesthetics—The Information Machine features animated characters with round, friendly faces—to explain, and explain away, something that scared people.

I see the clear influence of these films’ techno-positivist aesthetics in the line graphics in that ad for Claude. But the Eameses could be forgiven for not imagining the kind of existential damage digital computing would eventually cause. Today’s tech CEOs, who fret about what an all-powerful generative AI might do even as they do their best to hurtle us all toward it—not so much. Seeing ChatGPT ads that feature beautiful, vaguely nostalgic music (“Brother Love’s Travelling Salvation Show” by Neil Diamond, “Someone Somewhere” by Simple Minds) playing over warm film shots of people outsourcing their thinking to a chatbot has the same disorienting effect I experience when I look at an AI-generated video: the sense that I’m looking at a hallucination rather than at something substantial. The feeling that I’m being sold not just a best-case scenario, but a total impossibility—and that the mere act of looking at it is corroding my relationship to reality.

Of course, ads are rarely realistic depictions of their products. No sales pitch is going to tout how, for example, AI may contribute to cognitive atrophy. Because AI companies are “not able to recoup the revenue that they need to keep up with capital expenditures now,” Hanna tells me, they’re using ads to drum up a market in hopes of arriving at an “iPhone moment” of widespread adoption. These ads are an attempt to make generative AI products seem indispensable; in order to do so, they have to destabilize our relationship with what we know—“Is it okay to eat mud?”—and thereby who we are.

What are generative AI companies really setting out to do? A clue might lie in the way they advertise their products to each other. In October 2024, Artisan, a company that builds AI programs for business automation, launched a billboard campaign in the San Francisco Bay Area with the slogan “Stop Hiring Humans.” Other business-to-business AI advertising campaigns are famously inscrutable, but here was an AI company admitting fully what many fear is the technology’s ultimate aim.

It’s not hard to infer a similar worldview from ads aimed at a general audience—even those cloaked in a humanistic veneer. If we follow the internal logic of those ChatGPT ads to their likely conclusion, no one will ever again consult a cookbook, work with a personal trainer, ask a teacher for advice, read a guidebook, or learn a trick or two from a friendly mechanic. The process of creating a market for AI might be as dehumanizing as the technology itself.


From Mother Jones