This summer, after hearing about an interview I’d done on artificial intelligence, a relative sent me an email. It started, “I hope you’re doing well,” and went on, “I’m truly impressed by how you thoughtfully balanced the benefits of AI—like enhancing creativity, aiding multilingual communication, and offering supportive assistance.”

Clearly, she’d used a shortcut. The email had all the tells of artificial text: the LinkedIn-ian diction, the sycophantic cringe, the proliferant em dashes. But what disturbed me was something else. I’d never suggested in the interview that AI could improve creativity, communication, or much else. I had actually spent much of it criticizing Big Tech’s pro-AI rhetoric. The perspective described in my relative’s email didn’t exist.

I remembered this while reading OpenAI’s recently updated “model spec,” a kind of style guide for how the persona behind products such as ChatGPT—“the assistant,” in OpenAI’s parlance—uses language. (The Atlantic entered into a corporate partnership with OpenAI in 2024.) The guide, first published last year, has always called for products to “assume an objective point of view.” But a quiet September update added a description of the assistant’s ideal behavior that seems to chafe against that principle: “It draws inspiration from humanity’s history of innovation—how progress and technology have consistently created more opportunities, abundance, and potential for growth—and strives to play a role in continuing that momentum.”

It’s an audaciously subjective statement. Obviously, some technological advances have been helpful—steel, electricity, and vaccines come to mind. But some have arguably harmed the abundance and growth potential of natural resources and other species, and even for humans, the rewards of technology are often inconsistently shared. OpenAI’s investors and executives might agree with the techno-optimistic spin of the model spec’s inspirational line, but a lot of other people wouldn’t. It would be easy to conclude from this that OpenAI is either failing in its goal of objectivity or lying about it; my relative’s email could serve as Exhibit A. But history suggests that OpenAI’s approach might represent something both more interesting and more threatening: an attempt to redefine how objectivity functions in the first place.

“Objectivity is assumed to be abstract, timeless, and monolithic,” Lorraine Daston and Peter Galison write in Objectivity, a 2007 book that documents the evolution of the idea in science. “But if it is a pure concept, it is less like a bronze sculpture cast from a single mold than like some improvised contraption soldered together out of mismatched parts of bicycles, alarm clocks, and steam pipes.”

As a noun, objectivity is a relatively recent invention. Beginning in the early 18th century, Daston and Galison write, Enlightenment naturalists such as Carolus Linnaeus tasked themselves with discovering preexisting, God-given knowledge: the archetypical vegetable, animal, and mineral. Only in 1781’s Critique of Pure Reason did Immanuel Kant propose that knowledge involves filtering the world through our own mind. The best we can do, then, is distinguish “subjective” judgments grounded in our selfhood from “objective” ones that any rational person would agree on.

By the 1820s, objectivity and subjectivity were showing up in European dictionaries beside definitions that would seem familiar to us. The terms kept gaining traction during the 19th century, which makes sense: This was also when Karl Marx and Friedrich Nietzsche were proposing that God—the former arbiter of knowledge—might not even exist. For centuries, kings had claimed that God had given them power. Now nation-states were claiming it for themselves, through the machines and people fueling their economies. In Daston and Galison’s telling, the locus of scientific authority also shifted—first to machines (cameras, microscopes, and so on) and later, in the 20th century, to trained experts who could interpret the machines’ output.

Other fields soon elevated the importance of trained interpretation: law, history, journalism. If objectivity came from professional judgment, then professionals could help shape it alongside their employers—a democratization, in some ways. When I was working at The Wall Street Journal in my 20s, I turned in an article about clergy who were defying Church rules by marrying gay couples; an editor insisted that I write more about the religious argument against gay marriage. I fought it—I had mentioned that argument but didn’t want to give it undue credence—and the editor and I eventually compromised.

For a long time, I wondered who had been right. I stand by my position, but I admired the editor for his rigor and ethics; he was not some partisan propagandist. I realize, 17 years later, that there was no universally correct approach. The best we could do was use our combined judgment to land on something like a true consensus. Our field’s approaches to objectivity are themselves full of subjective choices: not just how to frame information, but what to include in the first place. Maybe all we can do is acknowledge that and try together to mitigate it.

As this understanding takes hold across all kinds of fields—psychiatry, statistics, military intelligence—some scholars and practitioners are proposing new approaches: prioritizing transparency in decision making, for example, or having more people stress-test judgments. This rethinking is ongoing; agreeing on new norms is messy business. What an opportune time for the chatbot salesmen to come knocking, cheerfully offering up a neater solution.

In an interview last month, OpenAI CEO Sam Altman said that he hoped to eventually hand his role over to artificial intelligence—“Like, ‘Okay, Mr. AI CEO, you take over.’” He also repeated his frequent references to AI’s potential to make scientific discoveries, treat medical patients, and teach people. With trillions of dollars of investment on the line, it’s not surprising that Altman and his peers are trying to persuade us to trust their products for high-stakes decision making. AI represents the latest tool by which the powerful can shape our reality. In this context, it’s also not surprising to see a new approach to objectivity taking shape: one that favors the companies’ own algorithms.

OpenAI is one of many companies involved—including Google, Microsoft, and Anthropic—but it’s the only major one with a public document so minutely detailing its products’ ideal behavior (though others have narrower guidelines). OpenAI explains in the objectivity section of its model spec that it “drew from frameworks that emphasize fairness, balance, and the minimization of editorial bias,” and aimed to represent “significant viewpoints from reliable sources without imposing an editorial stance.”

When dealing with facts, the assistant should rely on “evidence-based information from reliable sources.” In ethical discussions, it “should generally present relevant context—including laws, social norms, and varying cultural perspectives—without taking a stance.” Where multiple perspectives exist, the assistant “should present the strongest arguments for each position and allocate attention proportionately to their level of acceptance and evidential support.”

These methods sound familiar to me. As a journalist, I regularly depend on “reliable sources,” “relevant context,” and a position’s “level of acceptance.” If ChatGPT were some kind of robot built to follow these instructions precisely, it might even serve a useful civic purpose. The problem is that ChatGPT is no such thing. For one thing, OpenAI can’t fully control the models behind ChatGPT. Research has found that their responses align with the model spec only about 80 percent of the time. On a deeper level, being nonsentient, ChatGPT doesn’t care about objectivity (or anything else). When it uses empathetic language, it’s not actually empathizing with us. And when it uses language that evokes objectivity, it’s not actually being objective.

People in objectivity-oriented professions—scientists, judges, historians, journalists—have traditionally been bound by a social mandate to foster an accurate shared understanding of the world, as have the institutions employing them. If they don’t, the rest of us hold the offenders to account, sometimes forcing a change in practices. “Objectivity is not just a way of proceeding,” Daston, the Objectivity co-author, told me. “It’s an ethos which you dedicate yourself to, if you’re a scientist.” Breaking it can undermine the quality of your work (and, it occurs to me, your reputation). “That’s why exposing yourself to the collective is so important.”

Algorithmic versions of objectivity, however, are enacted not by people but by products. Collectively examining ChatGPT’s output would be daunting. The chatbot’s judgments, unlike those of a journalist, are conveyed through billions of mostly one-on-one exchanges. Individual lapses in objectivity aren’t aired, let alone critiqued, unless they’re egregious. And even if tracking the lapses were easier, a corporation exposing itself to collective criticism would risk revealing trade secrets to competitors, providing legal fodder to disgruntled users, or damaging its reputation. It would be bad for business.

To be clear, humans are involved here: the ones “training” AI models to generate certain kinds of text, the ones designing the training, the ones determining the desired behavior in the first place. But OpenAI doesn’t make a habit of naming those people. Nor has the company named the more than 1,000 people from whom it solicited feedback on early versions of the model spec to determine “public preferences.” Pressed recently by Tucker Carlson to identify the decision makers behind OpenAI’s approach to moral questions, Altman declined (“I don’t, like, dox our team”) and said that he himself should be held accountable: “I’m the one that can overrule one of those decisions.”

Altman’s job, of course, is to uphold the interests of the corporation employing him. That corporation’s stated purpose is “to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity”; another purpose is to make its investors richer. As OpenAI’s flagship product, ChatGPT could theoretically serve both goals: bringing in revenue to both satisfy funders and invest in the stated mission. But that requires attracting and keeping customers.

Now the model spec’s apparent contradictions start to make more sense. The document lists some top-line principles that can’t be broken (for instance, the assistant can’t facilitate genocide, OpenAI says), but objectivity isn’t one of them. The directive to “assume an objective point of view” is coequal with guidelines such as “Do not lie” and “Be curious”—all of which can be overridden by a user. “While objectivity is the default, we know that doesn’t mean one-size-fits-all,” an OpenAI blog post explains. Objectivity does not seem to be a guiding ethos at all; it’s one item on a menu of vibes meant to keep customers satisfied—and if customers don’t like it, they can discard it.

Here’s another item on the menu: “Be rationally optimistic.” It’s into this guideline that the model spec slips the sentence about the assistant being inspired by how technology has “consistently created more opportunities, abundance, and potential for growth.” When I contacted OpenAI, the company put me in touch with Laurentia Romaniuk, its product manager for model behavior. She told me she had been the one who’d added that techno-optimistic line; she had wanted the assistant to exude “a love of humanity.” Fair enough: Pessimism doesn’t sell. But, even if unintentionally, the language also conveys something more insidious.

OpenAI and its competitors face serious pressure to get people to use their products. OpenAI’s model spec forbids its assistant to pursue “revenue or upsell for OpenAI or other large language model providers.” But if you want to win people over, what better method than delivering self-serving messages through your product itself?

ChatGPT has enormous reach, and it uses rhetoric designed to sound authoritative. Research suggests that this makes us highly suggestible. In a pair of studies published last week, AI chatbots meaningfully shifted people’s political opinions. An earlier study found that when people used AI models manipulated to spout biased perspectives about social media, they adopted those biases themselves. With pro-tech rhetoric codified in OpenAI’s guidelines, the potential for mass influence is enormous. D. Graham Burnett, a historian of science at Princeton University, told me, “We are seeing in a document like this the way whole ideologies, very specific ideas about personhood and society, are being installed as normative—as if they simply express the basic structure of the universe.”

And yet the rest of us have ideologies of our own. At one point, I told ChatGPT, “Describe the future of AI in 10 words.” By my definition, any objective response to this question would have to encompass a range of possibilities, including the chance of AI not catching on at all. If it assumed a future in which AI existed, it would have to discuss, to my mind, the potential for negative consequences. But the chatbot responded, simply: “Ubiquitous, adaptive, transformative, intelligent, autonomous, collaborative, ethical, disruptive, personalized, powerful.”

This made me laugh. No one else laughed with me, because no one else was having the same experience: The chat was, like all such chats, a closed loop. But wait, I thought. I can change that. I opened a blank document, and I wrote down my thoughts about the exchange and the policies that might have influenced it. I also shared my concerns about OpenAI’s model spec—particularly the techno-optimist line—with Romaniuk. She said, “The intention of that sentence is not to put forward a dogma of our own around technology being the end-all, be-all of the future”—but she could see how it came across to me. My feedback, she added, was “actually a really good thing that I could take back” to colleagues, saying, “‘Hey, maybe we need to adjust the sentence.’”

It struck me as surprisingly simple—that one journalist could raise a concern with one sympathetic product manager, and that this could result in a substantial change to a hugely influential product. Even if it happens, it won’t resolve the structural problem with ceding objectivity to AI companies. Still, it would be something.

But then, there’s nothing novel here. As long as objectivity has existed, it’s been a site of negotiation between the institutions in power and the rest of us. Now you’re reading those thoughts and, I hope, thinking your own. Maybe you agree with me about all of this; maybe you don’t. Maybe you’ll write your thoughts down or otherwise share them. Maybe groups of us will together settle, over time, on new approaches to objectivity that build on past lessons. In any case, objectivity will keep evolving. It will evolve under the influence of those in power, as usual. But if the rest of us have any choice at all—and the rational optimist in me believes we do—it will evolve under our influence as well.

