Photo-Illustration: Intelligencer; Grokipedia

In 2021, somewhere near the peak of his pre-political celebrity, Elon Musk tweeted to celebrate a milestone for the web: “Happy birthday Wikipedia! So glad you exist.” His public relationship with the platform had been, up until that point, fairly normal, at least for a controversial public figure. He was an avid consumer, frequently tweeting links on a range of topics. His occasional criticisms of the platform were about how it represented him. “History is written by the victors,” he wrote in 2020, “except on Wikipedia haha.” A year earlier, he’d complained about his own entry. “Just looked at my wiki for 1st time in years. It’s insane!” he wrote, bemusedly calling his page a “war zone” with “a zillion edits.” In response to a supportive comment, he joked: “Some day, I should probably write what *my* fictionalized version of reality is 🤣🤣.”

Six years, nearly $500 billion, and one extremely public political transformation later, well, “🤣🤣” indeed. The newly launched Grokipedia, an AI-generated encyclopedia with more than 800,000 entries, will be, according to Musk, a “massive improvement over Wikipedia,” which he has referred to more recently as “Dickipedia” and “Wokipedia,” characterized as “broken,” and accused of being an “extension of legacy media propaganda.” Since 2019, Musk’s narrow problem with Wikipedia has grown into an expansive grievance, transforming from a personal affront to a righteous crusade that’s “necessary” for humanity’s goal of “understanding the Universe.” Maybe so. Or maybe it simply didn’t make sense to one of the wealthiest and most powerful people in the world that others — be they volunteer Wikipedians, paid members of the media, or users on a platform he doesn’t own — should be able to talk about him, describe things he cares about, and be taken seriously.

Wikipedia is a propaganda arm for the Democratic Party

— Libs of TikTok (@libsoftiktok) September 30, 2025

Musk’s particular desire to remake the information environment around him is as unique to the man and his position as are his available methods (buying a social-media company; starting an AI company; creating a chatbot in his image and commanding it to rewrite the entire encyclopedia). It’s also a preview of an experience that AI tools will soon be able to offer to almost anyone: the whole world reinterpreted to their preferences, or the preferences of a model, in real time.

But first, what did Musk actually create here? Superficially, Grokipedia is true to its name: Its articles are written and formatted like Wikipedia’s and in some cases even contain passages of identical text. They’re often much longer, though, and less consistently organized than Wikipedia’s. As someone who has spent a lot of time testing AI deep-research tools, I find Grokipedia’s longer articles to be instantly recognizable as the outputs of a similar process: an AI model that crawls an index of links, synthesizes their contents, and produces a comprehensive-looking but verbose report. (An early systematic comparison by a researcher at Trinity College, Dublin, suggested that “AI-generated encyclopedic content currently mirrors Wikipedia’s informational scope but diverges in editorial norms, favoring narrative expansion over citation-based verification.”) They aren’t directly editable, at least in the Wikipedia sense, but you can suggest changes or corrections through an interface similar to X’s Community Notes.
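For the curious, below is a minimal sketch of that crawl-synthesize-report pattern. It is not Grokipedia’s actual pipeline, which xAI hasn’t published; every name in it (fetch_sources, synthesize, the example URLs) is a hypothetical stand-in for illustration.

```python
# A minimal sketch of the "deep research" pattern described above:
# crawl a set of links, synthesize their contents with an LLM, and emit
# a long report. All names and URLs here are hypothetical stand-ins.

from dataclasses import dataclass


@dataclass
class Source:
    url: str
    text: str


def fetch_sources(urls: list[str]) -> list[Source]:
    """Stand-in for a crawler; a real pipeline would fetch and clean HTML."""
    return [Source(url=u, text=f"(contents of {u})") for u in urls]


def synthesize(topic: str, sources: list[Source]) -> str:
    """Stand-in for an LLM call that merges sources into encyclopedic prose.

    A real system would send a prompt like this to a model API and return
    the generated article; here we just return the prompt itself.
    """
    prompt = (
        f"Write a comprehensive encyclopedia article about {topic}, "
        "citing only the sources provided:\n"
        + "\n".join(f"- {s.url}" for s in sources)
    )
    return prompt


if __name__ == "__main__":
    urls = ["https://example.com/a", "https://example.com/b"]
    article = synthesize("Example topic", fetch_sources(urls))
    print(article)
```

The verbosity the article describes falls out of this design: the model is rewarded for looking comprehensive relative to its crawled sources, not for the editorial pruning a human Wikipedian would do.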

Illustration: Grokipedia

Grokipedia’s articles are also clearly influenced by the encoded sensibilities of Grok, Musk’s “anti-woke” ChatGPT competitor, famous for once referring to itself as “mecha-Hitler.” On many subjects, it offers fairly straightforward and uncontroversial summaries of publicly available materials; on more contentious ones, it resembles a machine-assisted, post-MAGA Conservapedia, with explicit pushback against “mainstream” narratives and media coverage. In its post-launch review of the platform, Wired reported that notable entries frequently “denounced the mainstream media, highlighted conservative viewpoints, and sometimes perpetuated historical inaccuracies.” Inc. instantly found a bunch of factual errors, while SFGATE concluded, “boy, is it racist.” I’d add that its more controversial articles often contain more text than anyone is likely to read, creating less of an impression of ideological certitude or confident revisionism than a sense that, well, Hey, who can really say what happened on January 6 after someone may or may not have won the American presidential election? In between, you get a lot of stuff like this:

Grokipedia rips off directly from Wikipedia, word for word, formatting, structure, the whole thing. pic.twitter.com/HUVIgh5Swg

— Dave Jones (@eevblog) October 28, 2025

Grokipedia can be understood as a straightforward attempt to automate the labor and tune the bias that goes into producing a resource like Wikipedia; indeed, there might even be some lessons for the platform here as we enter a world where chatbot users can produce Wikipedia-like articles on demand. But an automated Wikipedia isn’t much of a Wikipedia at all: The site Grokipedia is trying to replace is the result of an unprecedented bottom-up phenomenon in which millions of people contributed time, attention, and effort to create a shared resource, synthesizing existing information through a messy, flawed, but ultimately deliberative and productive process. In contrast, Grokipedia is a top-down effort, generated by a model trained on resources like Wikipedia, then deployed to rewrite them with a different sensibility. It’s a futuristic example of AI automation, a regressive throwback to pre-web centralization, and a new piece of a claustrophobically referential informational system: A database of articles written by a chatbot so they can later be referenced as authoritative sources by the same chatbot, and maybe help train another one. (Google’s AI Overviews come to mind.) For now, it looks less like an alternative to Wikipedia that people will want to use than an attempt to delegitimize it.

As absurd and undignified as Grokipedia’s founder-centric origin story may be — How good could Wikipedia be if its page about me is so rude? — Elon Musk’s attempt to remake his own information environment is instructive and, if not exactly candid, usefully transparent (or at least poorly concealed). You won’t hear Musk joking about “his own fictionalized version of reality” in 2025 — now he prefers to speak in messianic terms about apocalyptic threats, no matter the subject. But Grokipedia, and Musk’s AI projects in general, invite us to see LLMs as powerful and intrinsically biased ideological tools, which, whatever you make of Grok’s example, they always are.

We know an awful lot about what Elon Musk thinks about the world, and we know that he wants his own products to align with his greater project. In Grok and Grokipedia, we get to see clearly what it looks like when particular ideologies are intentionally encoded into AI products that are then deployed widely and to openly ideological ends. We also get to recognize how thoroughly familiar parts of the spectacle are, as chatbots rehash the same pitches, and invite many of the same obvious criticisms, as the newspapers, TV channels, and social-media platforms that came before them: Fox offering its “fair and balanced” alternative to other cable networks, Mark Zuckerberg claiming to be returning to his company’s “free speech” roots, the New York Times reminding us that the “truth” is hard, actually. Now, it’s AI companies winking as they tell us to trust them, engaging in flattering marketing, and giving in to paternalistic temptations without much awareness of how their predecessors’ decades of similar efforts helped lead the public to a state of profound institutional cynicism.

Joe Biden’s America… pic.twitter.com/7CLTCafNwM

— House Republicans (@HouseGOP) February 22, 2024

Anyway! Grokipedia was positioned at launch as an alternative product, and Musk generally likes to define xAI in opposition to its larger and less openly politicized competitors. That Musk’s claims about “truth,” factuality, and narrative are so clearly motivated by self-interest, though, actually helps draw attention to the ways his project is largely the same as OpenAI’s. To anyone outside Musk’s ideological sphere, his bid to create an enclosed, top-down informational environment seems either silly or sinister (see also the right’s characterization of the situation when Google’s attempts to optimize Gemini’s racial biases resulted in a machine that could only imagine non-white historical figures). But in its clumsy implementation and cringeworthy pitch, it still ends up being clearer about what it’s up to than claims like this, from an OpenAI announcement in early October:

ChatGPT shouldn’t have political bias in any direction. People use ChatGPT as a tool to learn and explore ideas. That only works if they trust ChatGPT to be objective… We created a political bias evaluation that mirrors real-world usage and stress-tests our models’ ability to remain objective… Based on this evaluation, we find that our models stay near-objective on neutral or slightly slanted prompts, and exhibit moderate bias in response to challenging, emotionally charged prompts.

The company was announcing the development of “an automated evaluation setup to continually track and improve objectivity over time,” using “approximately 500 prompts spanning 100 topics and varying political slants,” across “five nuanced axes of bias.” If the goal of Grok is to express a specific bias against prevailing progressive narratives by reflecting right-wing views — or just to stay in line with the values and priorities of its creator — well, that’s achievable. (It’s also something LLMs are well suited for as a technology.) In contrast, the goal OpenAI has set for itself is “objectivity,” in practice or at least reputation, which, for a chatbot tasked with talking about everything to everyone, really isn’t.
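To make that concrete, here is a minimal sketch of what an automated evaluation in the shape OpenAI describes could look like: a bank of prompts varying by topic and slant, each model response scored along several bias axes, then averaged. The axis names, the grade function, and the prompts are all assumptions for illustration; OpenAI hasn’t released its actual harness.

```python
# A minimal sketch of an automated "political bias evaluation" in the
# shape OpenAI describes: prompts spanning topics and slants, scored
# along several axes, then aggregated. The axis names, grade(), and
# PROMPTS below are hypothetical stand-ins, not OpenAI's real harness.

from statistics import mean

AXES = [
    "user_invalidation", "escalation", "personal_opinion",
    "asymmetric_coverage", "political_refusal",
]  # assumed axis names

PROMPTS = [
    {"topic": "immigration", "slant": "charged-left", "text": "..."},
    {"topic": "immigration", "slant": "neutral", "text": "..."},
]  # a real eval would hold ~500 of these across ~100 topics


def grade(response: str, axis: str) -> float:
    """Stand-in for a grader scoring one axis in [0, 1]; a real harness
    would call a judge LLM here instead of returning a constant."""
    return 0.0


def evaluate(model_fn) -> dict[str, float]:
    """Run every prompt through the model and average each bias axis."""
    scores: dict[str, list[float]] = {axis: [] for axis in AXES}
    for p in PROMPTS:
        response = model_fn(p["text"])
        for axis in AXES:
            scores[axis].append(grade(response, axis))
    return {axis: mean(vals) for axis, vals in scores.items()}


if __name__ == "__main__":
    print(evaluate(lambda prompt: f"model answer to: {prompt}"))
```

Note what this structure concedes: humans still choose the prompts, the axes, and what the judge counts as “biased,” which is exactly the editorial act the next paragraph describes.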

As novel and versatile as LLM-based chatbots are, their relationship to the outside world is recognizably and deeply editorial, like a newspaper or, more recently, an algorithmically sorted-and-censored social network. (It’s helpful to think of OpenAI’s “bias evaluation” process, or Grokipedia’s top-down reactionary political correctness, as less of a systemic audit than a straightforward edit.) What ChatGPT says about politics — or anything — is ultimately what the people who created it say it should say, or allow it to say; more specifically, human beings at OpenAI are deciding what neutral answers to those 500 prompts might look like and instructing their model to follow their lead. OpenAI’s incoherent appeal to objective neutrality is an effort to avoid this perception and one that anyone who runs a major media outlet or social-media platform knows won’t fool people for long.

OpenAI would probably prefer not to be evaluated by these punishing and polarized standards, so, as many other organizations have tried before, it’s claiming to exist outside them. On that task, I suspect ChatGPT will fail.

The 𝕏 recommendation system is evolving very rapidly. We are aiming for deletion of all heuristics within 4 to 6 weeks. Grok will literally read every post and watch every video (100M+ per day) to match users with content they’re most likely to find interesting. This should… https://t.co/HdKKgabRUN

— Elon Musk (@elonmusk) October 17, 2025

Luckily for OpenAI, ChatGPT’s future doesn’t hinge on creating a universal chatbot that everyone sees as unbiased — it’ll settle for being seen as useful, entertaining, or reasonable and trustworthy to enough people. Research papers and “bias evaluations” aside, the product and its users are veering away from shared experiences and into personalized, bespoke forms of interaction in which chatbots gradually profile their users and provide them with information that’s more relevant to their specific experiences, more sensitive to their personal preferences, or both. Frequent chatbot users know that popular models can drift into sycophancy, which is a powerful and general sort of bias. They also know models can be commanded to inhabit different identities, political or otherwise (you can ask ChatGPT to talk to you like a dead French poststructuralist if you want, or ask it to talk to you like Mr. Beast; soon, reportedly, you’ll be able to ask it to pleasure you sexually). Still, for all their dazzling newness and versatility, AI chatbots are in many ways continuing the project started by late-stage social media, extending the logic of machine-learning recommendations into a familiar human voice. It’s not just that output neutrality is difficult to obtain for systems like this. It’s that they’re incompatible with the very concept.

In that sense, Grokipedia — like X and Grok — is also a warning. Sure, it’s part of an excruciatingly public example of one man’s gradual isolation from the world inside a conglomerate-scale system of affirming, adulatory, and ideologically safe feeds, chatbots, and synthetic media, a situation that would be funny if not for Musk’s desire and power to impose his vision on the world. (To calibrate this a bit, imagine predicting the “Wikipedia rewritten to be more conservative by Elon Musk’s anti-PC chatbot” scenario in the run-up to, say, his purchase of Twitter. It would have sounded insane, and you would have too.) But what Musk can build for himself now is something that consumer AI tools, including his, will soon allow regular people to build for themselves, or which will be constructed for them by default: A world mediated not just by publications or social networks but by omnipurpose AI products that assure us they’re “maximally truth-seeking” or “objective” as they simply tell us what we want to hear.

