In the past week, two prominent voices in AI attempted to adjust the AI discourse. Asked whether “investors as a whole are overexcited about AI,” OpenAI’s Sam Altman answered “yes” to a group of gathered reporters, according to the Verge. Then he used the word “bubble” himself. “When bubbles happen, smart people get overexcited about a kernel of truth,” he said. “If you look at most of the bubbles in history, like the tech bubble, there was a real thing. Tech was really important. The internet was a really big deal. People got overexcited.”
It would be a mistake to read this as a major departure from Altman’s wide-ranging and often inconsistent public musings about AI: His intention here, after a rocky rollout for his company’s latest model, GPT-5, seems to be to situate OpenAI outside of a possible bubble, implying that his company has more in common with dot-com Amazon than with Pets.com or WorldCom. Still, it’s significant that Altman broached the subject, and there are signs that a generally bullish Wall Street has taken notice: In a story about so-called “disaster puts,” Bloomberg reports that “options traders are increasingly nervous about a plunge in technology stocks in the coming weeks and are grabbing insurance to protect themselves from a wipeout.”
In another notable shift, former Google CEO Eric Schmidt, who has in recent years become an influential and provocatively severe figure in the AI world, co-authored a piece for the New York Times with tech analyst Selina Xu arguing that Silicon Valley “needs to stop obsessing over superhuman AI.” Just a few months ago, in another op-ed, Schmidt argued that “even without a consensus about a precise definition, the contours of an AGI future are beginning to take shape,” suggesting that “artificial general intelligence” could “usher in a new Renaissance.” This week, he and Xu urged their peers to rethink such messaging and suggested that narratives like this actually gave them “pause”:
It is uncertain how soon artificial general intelligence can be achieved. We worry that Silicon Valley has grown so enamored with accomplishing this goal that it’s alienating the general public and, worse, bypassing crucial opportunities to use the technology that already exists. In being solely fixated on this objective, our nation risks falling behind China, which is far less concerned with creating A.I. powerful enough to surpass humans and much more focused on using the technology we have now.
As an insider effort to regulate the frequently wild messaging coming from the AI industry, this is surprising rhetoric. It is consistent with (and actually cites) critical work arguing that while recent developments in AI represent a major technology with great disruptive potential, current models might not be on the cusp of rendering the entire economy obsolete, or of wiping out or subjugating humanity once machines establish themselves as a superior species. The argument that stories about AGI are alienating, distracting, and disempowering — no objection here! — is likewise sort of wild to hear coming from Schmidt, who just this year warned that AI models are beginning “recursive self-improvement” and are already “learning how to plan, and they don’t have to listen to us anymore.”
But political scientist and AI theorist Henry Farrell makes a case for the deeper significance of such a pivot from Schmidt, who last year co-authored a book about AI with Henry Kissinger, and who Farrell has argued could be “the most influential American foreign policy thinker of the early 21st century.” In short, Schmidt was instrumental in forming the “mind meld” between Silicon Valley and national-security policymakers, which held that self-improving AI and AGI represented a fast-approaching “inflection point” in a race between the U.S. and China, the winner of which would gain a “technological advantage that would secure its long-term dominance.” But, Farrell writes, “if the AGI bet is a bad one, then much of the rationale for this consensus falls apart. And that is the conclusion that Eric Schmidt seems to be coming to.”
The other argument in Schmidt and Xu’s op-ed — that in China, where “scientists and policymakers aren’t as A.G.I.-pilled as their American counterparts,” citizens are far more approving of AI, which they understand as a useful tool rather than an abstract threat — makes clear that this is intended as a narrative departure. “Many of the purported benefits of A.G.I. — in science, education, health care and the like — can already be achieved with the careful refinement and use of powerful existing models,” they write. “The belief in an A.G.I. or superintelligence tipping point flies in the face of the history of technology, in which progress and diffusion have been incremental.”
Schmidt and Altman have distinct goals and motives for weighing in like this, of course, and in isolation each statement could be understood as a bit of cautious hedging. But taken together — and combined with Altman’s recent assertions that AGI is “not a super-useful term” — their not-so-subtle repositioning could suggest a bit of a vibe shift coming among the tech elite (or at least a heightened awareness of how they sound to everyone without a financial stake in the AI boom). AI is still a huge deal, they’re suggesting. But after years of issuing one sort of warning, they’re now voicing another: Let’s not get ahead of ourselves.