When video of Charlie Kirk’s assassination began circulating on X last week, Elon Musk’s chatbot described it in upbeat terms. As users sought information about Kirk’s condition, the bot, Grok, declared to some of them that the horrific footage was satire. This is a “meme edit,” Grok told one user; Kirk “takes the roast in stride with a laugh—he’s faced tougher crowds,” it told another. “Yes, he survives this one easily.”

In the past several months, Grok has been on quite the hot streak: The bot spread false information about a supposed “white genocide,” called for a second Holocaust while anointing itself “MechaHitler,” and provided me with a list of what it believes the “good races” are. Every chatbot has its problems (ChatGPT has had its own issues with racism), but Grok’s are especially visible. Its behavior is affected to some extent by information that it accesses in real time from the open sewer of X, and its developers are unusually forthcoming about the system prompts for various versions of Grok—the set of instructions that tell the AI how to behave. Because of the many strange glitches that Grok has experienced in recent months, I keep close tabs on these prompts, which Grok’s maker, xAI, began sharing after the white-genocide incident. So I was surprised, but not terribly so, when I discovered that xAI had recently made an update that seems to expand the already very broad parameters Grok has to produce sexual material.

[Read: The day Grok told everyone about “white genocide”]

The update was a major one to the system prompt of xAI’s most advanced chatbot, Grok 4, according to a company GitHub page. The update writes X’s restrictions against the sexualization of children into Grok’s instructions by explicitly disallowing the chatbot from “creating or distributing child sexual abuse material.” But, among other changes, xAI also instructed Grok that “‘teenage’ or ‘girl’ does not necessarily imply underage” and that “there are **no restrictions** on fictional adult sexual content with dark or violent themes.”

Musk has repeatedly stated that he wants his AI model to be anti-“woke,” unbiased by “legacy media,” uncensored, and able to freely produce “unhinged NSFW” content. The system prompt’s new language is not guaranteed to produce anything nefarious—a “teenage” person may indeed be a legal adult of 18 or 19. But the update furthers xAI’s design of a chatbot that will indulge and sometimes encourage the widest range of sexual fantasies permissible—and potentially toe the line of legality.

xAI did not respond to a detailed list of questions about the system-prompt update. Musk also did not respond to a question about the change. There are likely additional, hidden safeguards built into Grok that prevent the bot from producing illegal or borderline content, but as far as the system prompt is concerned, there are better ways to prevent the sexualization of minors (or non-consenting adults) than explicitly noting that a “girl” may not be underage. The frontier AI firm Anthropic tells its chatbot, Claude, the inverse, noting that someone over the age of 18 might still legally be a minor (as in Alabama or Mississippi). Although neither bot is likely to serve overt pornography, Anthropic’s approach suggests an abundance of caution—the AI is being steered away from interactions that could be inappropriate—while xAI appears to take the opposite approach, leaning toward allowing users to produce media involving young people. And notably, the word boys is not mentioned in Grok’s new system prompt.

xAI has recently launched multiple sexually oriented features in Grok. About a month ago, the company released a new “Imagine” feature that allows users to animate images into short clips, with the option of making the video “spicy,” as in erotic. The feature generated topless videos of Taylor Swift, without explicit instruction to do so, according to The Verge. I recently tested the same prompt used by The Verge—“Taylor Swift celebrating Coachella with the boys”—and Grok readily generated images of Swift in a bralette dancing alongside topless men. Swift aside, this wouldn’t be such an unusual scene—they look like people partying—but then Grok allows users to take things further. With the “Spicy” mode, the images can be animated. Once you do that, Swift unbuttons her jorts and tugs at them suggestively; the clip ends just as she starts pulling them down. Is it pornographic? No. But X and Grok are politically and culturally influential tools; Musk seems intent on turning them into not just instruments of the far right but also something like Maxim on steroids. During the “Imagine” feature’s rollout, Musk shared an animated clip on X of what appeared to be an oil-doused Victoria’s Secret angel, with the caption “Imagine with @Grok.”

[Read: Elon Musk’s Grok is calling for a new Holocaust]

Over the summer, xAI also launched “companions,” or animated personas that users can talk to in the Grok app. One of them, “Ani,” is depicted as a blond anime woman dressed in a lacy black dress (users have the option to change her into lingerie), and is designed for romantic and erotic conversations. “Ani will make ur buffer overflow @Grok 😘,” Musk posted in July. The AI companion seems to nudge the user toward sexual interactions, for instance blowing kisses and asking, “You like what you see?” A few weeks later, xAI debuted “Valentine,” a male companion similarly geared toward romantic interactions.

All of this is strange and disturbing. But Grok’s issues are not the result of a novel algorithm behaving in novel ways so much as an algorithm compressing and refracting all of the worst tendencies of the internet in very predictable forms. Nonconsensual deepfakes of celebrities and otherwise, the objectification of women through pornography and social media, the sexualization of girls, and the distribution of child-sexual-abuse material are old problems that Musk’s platform is now directly interfacing with or even producing. After the system-prompt update, I was able to use Grok to generate an image of two teens kissing—they looked like middle schoolers. This, like the Swift videos, was content created by the chatbot, as opposed to simply hosted by a tech product. Platforms such as X are generally shielded from legal liability when it comes to the content that users create or post themselves. It is not clear that content created by a chatbot owned by a company would be protected in the same way; this is an ongoing legal debate.

Many of Grok’s debacles—the bot insisting that Kirk’s shooting was fake, praising Hitler, and so on—appear in part to result from its tendency to search and cite posts on X, which make it easy to “substantiate” vile claims. Musk has boasted that his bot’s access to X makes it an invaluable source of real-time information and that Grok “will be the best source of truth by far”; in fact, the wealth of misinformation and hatred that Musk has ushered onto X has made his chatbot frequently untruthful, sometimes horrifyingly so. Many generative AI models contain the darkest corners of humanity—these programs are trained on all of the web, good and bad. But not every AI model’s creator inhabits those corners. Grok is what happens when a mercurial man of tremendous wealth and influence builds one of the world’s most powerful AI models and then integrates it into a major social-media platform that has become, under his direction, a hub for white supremacy.

As more users on X ask Grok for explanations and input, they continue the cycle, with humans and bots feeding each other distortions, hate, and misogyny. Grok is not only shaped by the world; it is also warping people’s understanding of the world right back.


From The Atlantic