Editor’s Note: For those of you reading via email, I recommend opening this in a browser so you can use the Table of Contents. This is my longest newsletter - a 16,000-word opus - and if you like it, please subscribe to my premium newsletter. Thanks for reading!
In the last two years I’ve written no fewer than 500,000 words, with many of them dedicated to debunking both current and long-standing myths about the state of technology and the tech industry itself. While I feel no resentment — I really enjoy writing, and feel privileged to be able to write about this and make money doing so — I do feel that there is a massive double standard between those perceived as “skeptics” and “optimists.”
To be skeptical of AI is to commit yourself to near-constant demands to prove yourself, and endless nags of “but what about?” with each one — no matter how small — presented as a fact that defeats any points you may have. Conversely, being an “optimist” allows you to take things like AI 2027 — which I will fucking get to — seriously to the point that you can write an entire feature about fan fiction in the New York Times and nobody will bat an eyelid.
In any case, things are beginning to fall apart. Two of the actual reporters at the New York Times (rather than a “columnist”) reported out last week that Meta is yet again “restructuring” its AI department for the fourth time, and that it’s considering “downsizing the A.I. division overall,” which sure doesn’t seem like something you’d do if you thought AI was the future.
Meanwhile, the markets are also thoroughly spooked by an MIT study covered by Fortune that found that 95% of generative AI pilots at companies are failing, and though MIT NANDA has now replaced the link to the study with a Google Form to request access (the kind of move that screams “PR firm wants to try and set up interviews”), you can find the full PDF here. Not for me, thanks!
In any case, the report is actually grimmer than Fortune made it sound, saying that “95% of organizations are getting zero return [on generative AI].” The report says that “adoption is high, but transformation is low,” adding that “…few industries show the deep structural shifts associated with past general-purpose technologies such as new market leaders, disrupted business models, or measurable changes in customer behavior.”
Yet the most damning part was the “Five Myths About GenAI in the Enterprise,” which is probably the most wilting takedown of this movement I’ve ever seen:
- **AI Will Replace Most Jobs in the Next Few Years** → Research found limited layoffs from GenAI, and only in industries that are already affected significantly by AI. There is no consensus among executives as to hiring levels over the next 3-5 years.
- **Generative AI is Transforming Business** → Adoption is high, but transformation is rare. Only 5% of enterprises have AI tools integrated in workflows at scale, and 7 of 9 sectors show no real structural change. *Editor’s note: Thank you! I made this exact point in February.*
- **Enterprises are slow in adopting new tech** → Enterprises are extremely eager to adopt AI, and 90% have seriously explored buying an AI solution.
- **The biggest thing holding back AI is model quality, legal, data, risk** → What’s really holding it back is that most AI tools don’t learn and don’t integrate well into workflows. *Editor’s note: I really do love “the thing that’s holding AI back is that it sucks.”*
- **The best enterprises are building their own tools** → Internal builds fail twice as often.
These are brutal, dispassionate points that directly deal with the most common boosterisms. Generative AI isn’t transforming anything, AI isn’t replacing anyone, enterprises are trying to adopt generative AI but it doesn’t fucking work, and the thing holding back AI is the fact it doesn’t fucking work.

This isn’t a case where “the enterprise” is suddenly going to save these companies, because the enterprise already tried, and it isn’t working.
An incorrect read of the study has been that it’s a “learning gap” that makes these things less useful, when the study actually says that “…the fundamental gap that defines the GenAI divide [is that] users resist tools that don’t adapt, model quality fails without context, and UX suffers when systems can’t remember.” This isn’t something you learn your way out of. The products don’t do what they’re meant to do, and people are realizing it.
Nevertheless, boosters will still find a way to twist this study to mean something else. They’ll claim that AI is still early, that the opportunity is still there, that we “didn’t confirm that the internet or smartphones were productivity boosting,” or that we’re in “the early days” of AI, somehow, three years and hundreds of billions and thousands of articles in.
I’m tired of having the same arguments with these people, and I’m sure you are too. No matter how much blindingly obvious evidence there is to the contrary, they will find ways to ignore it. They continually make smug comments about people “wishing things would be bad,” or suggest you are stupid — and yes, that is their belief! — for not believing generative AI is disruptive.
Today, I’m going to give you the tools to fight back against the AI boosters in your life. I’m going to go into the generalities of the booster movement — the way they argue, the tropes they cling to, and the ways in which they use your own self-doubt against you.
They’re your buddy, your boss, a man in a gingham shirt at Epic Steakhouse who won’t leave you the fuck alone, a Redditor, a writer, a founder or a simple con artist — whoever the booster in your life is, I want you to have the words to fight them with.
Table Of Contents
So, this is my longest newsletter ever, and I built it for quick reference - and, for the first time, gave you a Table of Contents.
What Is An AI Booster?
AI Boosters Love Being Victims — Don’t Play Into It
BOOSTER QUIP: “You’re just being a hater for attention! Contrarians just do it for clicks and headlines!”
AI Boosters Live In Vagueness — Make Them Get Specific
BOOSTER QUIP: “You Just Don’t Get It”
BOOSTER QUIP: “AI Is Powerful, and Getting Exponentially More Powerful”
Boosters Like To Gaslight — Don’t Let Them!
Boosters Do Not Live In Reality, So Force Them To Do So
BOOSTER QUIP: AI will-
BOOSTER QUIP: Agents will automate large parts-
BOOSTER QUIP: We’re In The Early Days Of AI!
BOOSTER QUIP: Uhh, what I mean is that AI Is Like The Early Days Of The Internet!
BOOSTER QUIP: Well, actually, sir! People Said Smartphones Wouldn’t Be Big!
“The Early Days Of The Internet” Are Not A Sensible Comparison To Generative AI
BOOSTER QUIP: Ahh, uh, what I mean is that we’re in the early days of AI! The other stuff you said was you misreading my vague statements somehow.
BOOSTER QUIP: This Is Like The Dot Com Boom — Even If This All Collapses, The Overcapacity Will Be Practical For The Market Like The Fiber Boom Was!
BOOSTER QUIP: Umm, five really smart guys got together and wrote AI 2027, which is a very real-sounding extrapolation that-
ULTIMATE BOOSTER QUIP: The Cost Of Inference Is Coming Down! This Proves That Things Are Getting Cheaper!
NEWTON QUIP: “…Inference, which is when you actually enter a query into ChatGPT…” — FALSE! That’s Not What Inference Means!
“…if you plotted the curve of how the cost [of inference] has been falling over time…” — FALSE! The Cost Of Inference Has Gone Up Over Time!
I’m Not Done!
The Cost Of Inference Went Up Because The Models Are Now Built To Burn More Tokens
Could The Cost Of Inference Go Down?
Why Did This Happen?
ULTIMATE BOOSTER QUIP: OpenAI and Anthropic are “just like Uber,” because Uber burned $25 billion over the course of 15 or so years, and is now profitable! This proves that OpenAI, a totally different company with different economics, will be fine!
AI Is Making Itself “Too Big To Fail,” Embedding Itself Everywhere And “Becoming Essential” — None Of These Things Are The Case
But Ed! The Government!
Uber Was and Is Useful, Which Eventually Made It Essential
What Is Essential About Generative AI?
BOOSTER QUIP: Data centers are important economic growth vehicles, and are helping drive innovation and jobs throughout America! Having data centers promotes innovation, making OpenAI and AI data centers essential!
BOOSTER QUIP: Uber burned a lot of money — $25 billion or more! — to get where it is today!
ULTRA BOOSTER QUIP! AI Is Just Like Amazon Web Services — a massive investment that “took a while to go profitable” and “everybody hated Amazon for it”
BOOSTER QUIP: [AI Company] Has $Xm Annualized Revenue!
BOOSTER QUIP: [AI Company] Is In “Growth Mode” and Will “Pull The Profit Lever When It’s Time”
BOOSTER QUIP: AGI Will-
BOOSTER QUIP: I’m Hearing From People Deep Within The AI Industry That There’s Some Sort Of Ultra Powerful Models They’re Not Talking About
BOOSTER QUIP: ChatGPT Is So Popular! 700 Million People Use It Weekly! It’s One Of The Most Popular Websites On The Internet! Its popularity proves its utility! Look At All The Paying Customers!
ChatGPT (and OpenAI) Was Marketed Based On Lies
If I Was Wrong, We’d Have Real Use Cases By Now, And Better Metrics Than “Weekly Active Users”
BOOSTER QUIP: OpenAI is making tons of money! That’s proof that they’re a successful company, and you are wrong, somehow!
BOOSTER QUIP: When OpenAI Opens Stargate Abilene, It’ll Turn Profitable?
BOOSTER (or well-meaning person) QUIP: Well my buddy’s friend’s dog’s brother uses it and loves it/Well I Heard This Happened, Well It’s Useful To Me.
It Doesn’t Matter That You Have One Use Case, That Doesn’t Prove Anything
BOOSTER QUIP: Vibe Coding Is Changing The World, Allowing People Who Can’t Code To Make Software
I Am No Longer Accepting Half-Baked Arguments
What Is An AI Booster?
So, an AI booster is not, in many cases, an actual fan of artificial intelligence. People like Simon Willison or Max Woolf who actually work with LLMs on a daily basis don’t see the need to repeatedly harass everybody, or talk down to them about their unwillingness to pledge allegiance to the graveyard smash of generative AI. In fact, the closer I’ve found somebody to actually building things with LLMs, the less likely they are to emphatically argue that I’m missing out by not doing so myself.
No, the AI booster is symbolically aligned with generative AI. They are fans in the same way that somebody is a fan of a sports team, their houses emblazoned with every possible piece of tat they can find, their Sundays living and dying by the success of the team, except even fans of the Dallas Cowboys have a tighter grasp on reality.
Kevin Roose and Casey Newton are two of the most notable boosters, and — as I’ll get into later in this piece — neither of them has a consistent or comprehensive knowledge of AI. Nevertheless, they will insist that “everybody is using AI for everything” — a statement that even a booster should realize is incorrect based on the actual abilities of the models.
But that’s because it isn’t about what’s actually happening, it’s about allegiance. AI symbolizes something to the AI booster — a way that they’re better than other people, that makes them superior because they (unlike “cynics” and “skeptics”) are able to see the incredible potential in the future of AI, but also how great it is today, though they never seem to be able to explain why outside of “it replaced search for me!” and “I use it to draw connections between articles I write,” which is something I do without AI using my fucking brain.
Boosterism is a kind of religion, interested in finding symbolic “proof” that things are getting “better” in some indeterminate way, and that anyone who chooses to believe otherwise is ignorant.
I’ll give you an example. Thomas Ptacek’s “My AI Skeptic Friends Are All Nuts” was catnip for boosters — a software engineer using technical terms like “interact with Git” and “MCP,” vague charts, and, of course, an extremely vague statement that says hallucinations aren’t a problem:
I’m sure there are still environments where hallucination matters. But “hallucination” is the first thing developers bring up when someone suggests using LLMs, despite it being (more or less) a solved problem.
Is it?
Anyway, my favourite part of the blog is this:
A lot of LLM skepticism probably isn’t really about LLMs. It’s projection. People say “LLMs can’t code” when what they really mean is “LLMs can’t write Rust”. Fair enough! But people select languages in part based on how well LLMs work with them, so Rust people should get on that.
Nobody projects more than an AI booster. They thrive on the sense they’re oppressed and villainized after years of seemingly every outlet claiming they’re right regardless of whether there’s any proof. They sneer and jeer and cry constantly that people are not showing adequate amounts of awe when an AI lab says “we did something in private, we can’t share it with you, but it’s so cool,” and constantly act as if they’re victims as they spread outright misinformation, either through getting things wrong or never really caring enough to check.
Also, none of the booster arguments actually survive a thorough response, as Nik Suresh proved with his hilarious and brutal takedown of Ptacek’s piece.
There are, I believe, some people who truly do love using LLMs, yet they are not the ones defending them. Ptacek’s piece drips with condescension, to the point that it feels like he’s trying to convince himself how good LLMs are, and because boosters are eternal victims, he wrote them a piece that they could send around to skeptics and say “heh, see?” without being able to explain why it was such a brutal takedown, mostly because they can’t express why other than “well this guy gets it!”
One cannot be the big, smart genius that understands the glory and power of AI while also acting like a scared little puppy every time somebody tells them it sucks.
In fact, that’s a great place to start.
AI Boosters Love Being Victims — Don’t Play Into It
When you speak to an AI booster, you may get the instinct to shake them vigorously, or to respond to their post by telling them to do something with their something, or that they’re “stupid.” I understand the temptation, but you want to keep your head on a swivel — they thrive on victimization.
I’m sorry if you are an AI booster and this makes you feel bad. Please reflect on your work, and on how many times you’ve referred to somebody who didn’t understand AI in a manner that suggested they were ignorant, or tried to gaslight them by saying “AI is powerful” while providing no concrete examples of that power.
You cannot — and should not! — allow these people to act as if they are being victimized or “othered.”
BOOSTER QUIP: “You’re just being a hater for attention! Contrarians just do it for clicks and headlines!”
First and foremost: there are boosters at pretty much every major think tank, government agency and media outlet. It’s extremely lucrative being a booster. You’re showered with panel invites and access to executives, and you can get headlines with ease just by saying how scared you are of the computer. Being a booster is the easy path!
Being a critic requires you to constantly have to explain yourself in a way that boosters never have to.
If a booster says this to you, ask them to explain:
- What they mean by “clicks” or “attention,” and how they think you are monetizing it.
- How this differs in its success from, say, anybody who interviews and quotes Sam Altman or whatever OpenAI is up to.
- Why they believe your intentions as a critic are somehow malevolent, as opposed to those of people literally reporting what the rich and powerful want them to.
There is no answer here, because this is not a coherent point of view. Boosters are more successful, get more perks, and are in general better treated than any critic.
AI Boosters Live In Vagueness — Make Them Get Specific
Fundamentally, these people exist in the land of the vague. They will drag you toward what’s just on the horizon, but never quite define what the thing that dazzles you will be, or when it will arrive.
Really, their argument comes down to one thought: you must get on board now, because at some point this stuff will be so good that you’ll feel stupid for ever doubting that something that kind of sucks today would one day be great.
If this line sounds familiar, it’s because you’ve heard it a million times before, most notably with crypto.
They will try to make you define what would impress you, which isn’t your job, in the same way that finding a use case for their product isn’t your job. In fact, you are the customer!
BOOSTER QUIP: “You Just Don’t Get It”
Here’s a great place to start: say “that’s a really weird thing to say!” It is peculiar to suggest that somebody doesn’t get how to use a product, and that we, as the customer, must justify ourselves to our own purchases. Make them justify their attitude.
Just like any product, we buy software to serve a need. This is meant to be artificial *intelligence* — why is it so fucking stupid that I have to work out why it’s useful? The answer, of course, is that it has no intellect and is not intelligent, and that Large Language Models are being pushed up a mountain by a cadre of people who are either easily impressed or invested, emotionally or financially, in its success due to the company they keep or their intentions for the world.
If a booster suggests you “just don’t get it,” ask them to explain:
- What you are missing.
- What, specifically, is so life-changing about this product, **based on your own experience, not on anecdotes from others.**
- What use cases are truly “transformative” about AI.
Their use cases will likely be that AI has replaced search for them, that they use it for brainstorming or journaling, proof-reading an article, or looking through a big pile of their notes (or some other corpus of information) and summarizing it or pulling out insights.
BOOSTER QUIP: “AI Is Powerful, and Getting Exponentially More Powerful”
If a booster refers to AI “being powerful” and getting “more powerful,” ask them:
- What “powerful” means. In the event that they mention benchmarks, ask them how those benchmarks apply to real-world scenarios. If they bring up SWE-bench, the standard benchmark for coding, ask them if they can code, and if they cannot, ask them for another example.
- In the event that they mention “reasoning,” ask them to define it. Once they have defined it, ask them to explain in plain English what reasoning allows you to do on a use-case level, *not how it works.* They will likely bring up the gold medal performance that OpenAI’s model got on the Math Olympiad. Ask them why they haven’t released the model. Ask them what actual, practical use cases this “success” has opened up.
- What use cases have arrived as a result of models becoming more powerful. If they say vague things like “oh, in coding” and “oh, in medicine,” ask them to get specific.
- What new products have arrived as a result. If they say “coding LLMs,” they will likely add that this is “replacing coders.” Ask them if they believe software engineering is entirely writing code.
Boosters Like To Gaslight — Don’t Let Them!
The core of the AI booster’s argument is to make you feel bad.
They will suggest you have chosen to dislike AI because you’re a hater, or a cynic, or a Luddite. They will suggest that you are ignorant for not being amazed by ChatGPT.
To be clear, anyone with a compelling argument doesn’t have to make you feel bad to convince you. The iPhone didn’t need a fucking marketing campaign to explain why one device that can do a bunch of things you already find useful was good.
You don’t have to be impressed by ANYTHING by default, and any product — especially software — designed to make you feel stupid for “not getting it” is poorly designed. ChatGPT is the ultimate form of Silicon Valley Sociopathy — you must do the work to find the use cases, and thank them for being given the chance to do so.
AI is not even good, reliable software! It represents the death of the art of technology — inconsistent and unreliable by definition, inefficient by design, financially ruinous, and it ADDS to the cognitive load of the user by requiring them to be ever-vigilant.
So, here’s a really easy way to deal with this: **if a booster ever suggests you are stupid or ignorant, ask them why it’s necessary to demean you to get their point across! Even if you are unable to argue on a technical level, make them explain why the software itself can’t convince you.**
Boosters Do Not Live In Reality, So Force Them To Do So
Boosters will do everything they can to pull you off course.
If you say that none of these companies make money, they’ll say it’s the early days. If you say AI companies burn billions, they’ll say the cost of inference is coming down. If you say the industry is massively overbuilding, they’ll say that this is actually just like the dot com boom and that the infrastructure will be picked up and used in the future. If you say there are no real use cases, they’ll say that ChatGPT has 700 million weekly users.
Every time, it’s the same goddamn arguments, so I’ve sat down and written out as many of them as I can think of. Print this and feed it to your local booster today.
Your Next Line Is…
BOOSTER QUIP: AI will-
Anytime a booster says “AI will,” tell them to stop and explain what AI can do, and if they insist, ask them both when to expect the things they’re talking about, and if they say “very soon,” ask them to be more specific. Get them to agree to a date, then call them on that date.
BOOSTER QUIP: Agents will automate large parts-
There’s that “will” bullshit again. Agents don’t work! They don’t work at all. The term “agent” means, to quote Max Woolf, “a workflow where the LLM can make its own decisions, [such as in the case of] web search [where] the LLM is told “you can search the web if you need to” then can output “I should search the web” and do so.”
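To make that definition concrete, here’s a minimal sketch of that loop in Python. Everything in it is a hypothetical stand-in (call_llm and search_web are not any real library’s API); the point is the shape of the thing: the model’s output decides whether a tool gets called, and the tool’s result gets fed back into the next prompt. That’s it. That’s the “agent.”

```python
# A minimal sketch of an "agent" loop, per Max Woolf's definition.
# call_llm and search_web are hypothetical stand-ins, not a real API.

def call_llm(prompt: str) -> dict:
    # Stand-in for a real model call. A real LLM, offered a search tool,
    # returns either a request to use it or a final answer.
    if "SEARCH RESULTS" not in prompt:
        return {"action": "search_web", "query": "agent success rates"}
    return {"action": "answer", "text": "An answer drafted from the results."}

def search_web(query: str) -> str:
    # Stand-in for a real search tool.
    return f"SEARCH RESULTS for: {query}"

def run_agent(task: str) -> str:
    prompt = task
    for _ in range(5):  # cap the steps so the loop can't spin forever
        decision = call_llm(prompt)
        if decision["action"] == "search_web":
            # The model "decided" to search; feed the results back in.
            prompt += "\n" + search_web(decision["query"])
        else:
            return decision["text"]
    return "No answer within the step budget."

print(run_agent("How often do agents complete multi-step tasks?"))
```

Note that every pass through that loop is a probabilistic guess, which is why the failure rates you’ll see below compound the longer the chain of steps gets.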
Yet “agent” has now become a mythical creature that means “totally autonomous AI that can do an entire job.” If anyone tells you “agents are…,” you should ask them to point to one. If they say “coding,” demand that they explain how autonomous these things actually are, and if they say that they can “refactor entire codebases,” ask them what that means, and also laugh at them.
Here’s a comprehensive rundown, but here’s a particularly important part:
Not only does Salesforce not actually sell “agents,” its own research shows that agents only achieve around a 58% success rate on single-step tasks, meaning, to quote The Register, “tasks that can be completed in a single step without needing follow-up actions or more information.” On multi-step tasks — so, you know, most tasks — they succeed a depressing 35% of the time.
Long story short, agents are not autonomous, they do not replace jobs, they cannot “replace coders,” they are not going to do so because probabilistic models are a horrible means of taking precise actions, and almost anyone who brings up agents as a booster is either misinformed or in the business of misinformation.
BOOSTER QUIP: We’re In The Early Days Of AI!
Let’s start with a really simple question: what does this actually mean?
BOOSTER QUIP: Uhh, what I mean is that AI Is Like The Early Days Of The Internet!
In many cases, I think they’re referring to AI as being “like the early days of the internet.”
“The early days of the internet” can refer to just about anything. Are we talking about dial-up? DSL? Are we talking about the pre-platform days when people accessed it via Compuserve or AOL? Yes, yes, I remember that article from Newsweek, I already explained it here:
In any case, one guy saying that the internet won’t be big doesn’t mean a fucking thing about generative AI and you are a simpleton if you think it does. One guy being wrong in some way is not a response to my work. I will crush you like a bug.
If your argument is that the early internet required expensive Sun Microsystems servers to run, Jim Covello of Goldman Sachs addressed that by saying that the costs "pale in comparison," adding that we also didn’t need to expand our power grid to build the early Web.
BOOSTER QUIP: Well, actually, sir! People Said Smartphones Wouldn’t Be Big!
This is a straight-up lie. Sorry! Also, as Jim Covello noted, there were hundreds of presentations in the early 2000s that included roadmaps that accurately fit how smartphones rolled out, and that no such roadmap exists for generative AI.
The iPhone was also an immediate success as a thing that people paid for, with Apple selling four million units in the space of six months. Hell, in 2006 (the year before the iPhone launched), there were an estimated 17.7 million smartphone shipments worldwide (mostly from BlackBerry and other companies building on Windows Mobile, with Palm vacuuming up the crumbs), though to be generous to the generative AI boosters, I’ll disregard those.
“The Early Days Of The Internet” Are Not A Sensible Comparison To Generative AI
The original Attention Is All You Need paper — the one that kicked off the transformer-based Large Language Model era — was published in June 2017. ChatGPT launched in November 2022.
Nevertheless, if we’re saying “early days” here, we should actually define what that means. As I mentioned above, *people paid for the iPhone immediately, despite it being a device that was completely and utterly new.* While there was a small group of consumers that might have used similar devices (like the iPAQ), this was a completely new kind of computing, sold at a premium, requiring you to have a contract with a specific carrier (Cingular, now known as AT&T).
Conversely, ChatGPT’s “annualized” revenue in December 2023 was $1.6 billion (or $133 million a month), for a product that had, by that time, raised over $10 billion, and while we don’t know what OpenAI lost in 2023, reports suggest it burned over $5 billion in 2024.
Big tech has spent over $500 billion in capital expenditures in the last 18 months, and all told — between investments of cloud credits and infrastructure — will likely sink over $600 billion by year’s end.
The “early days” of the internet were defined not by a lack of investment or attention, but by its obscurity. Even in 2000 — around the time of the dot-com bubble — only 52% of US adults used the internet, and it would take another 19 years for 90% of US adults to do so. Those early days were also defined by the internet’s limited functionality. The internet would become so much more because of the things that hyper-connectivity allowed us to do, and both faster internet connections and the ability to host software in the cloud would change, well, everything. We could define what “better” would mean, and make reasonable predictions about what people could do on a “better” internet.
Yet even in those early days, it was obvious why you were using the internet, and how it might grow from there. One did not have to struggle to explain why buying a book online might be useful, or why a website might be a quicker reference than having to go to a library, or why downloading a game or a song might be a good idea. While habits might have needed adjusting, it was blatantly obvious what the value of the early internet was.
It’s also unclear when the early days of the internet ended. Only 44% of US adults had access to broadband internet by 2006. Were those the early days of the internet?
The answer is “no,” and this point is brought up by people with a poor grasp of history and a flimsy attachment to reality. The early days of the internet were very, very different to any associated tech boom since, and we need to stop making the comparison.
The internet also grew in a vastly different information ecosystem. Generative AI has had the benefit of mass media — driven by the internet! — along with social media (and social pressure) to “adopt AI” for multiple years.
BOOSTER QUIP: Ahh, uh, what I mean is that we’re in the early days of AI! The other stuff you said was you misreading my vague statements somehow.
We Are Not In The Early Days Of Generative AI, And Anyone Using This Argument Is Either Ignorant Or Intentionally Deceptive
According to Pew, as of mid-June 2025, 34% of US adults have used ChatGPT, with 79% saying they had “heard at least a little about it.”
Furthermore, ChatGPT has always had a free version. On top of that, a study from May 2023 found that over 10,900 news headlines mentioned ChatGPT between November 2022 and March 2023, and a BrandWatch report found that in the first five months after its release, ChatGPT received over 9.24 million mentions on social media.
Nearly 80% of people have heard of ChatGPT, and over a third of Americans have used it.
If we’re defining “the early days” based on consumer exposure, that ship has sailed.
If we’re defining “the early days” by the passage of time, it’s been eight years since Attention Is All You Need, and three since ChatGPT came out.
While three years might not seem like a lot of time, the whole foundation of an “early days” argument is that in the early days, things do not receive the venture funding, research, attention, infrastructural support or business interest necessary to make them “big.”
In 2024, nearly 33% of all global venture funding went to artificial intelligence, and according to The Information, AI startups have raised over $40 billion in 2025 alone, with Statista adding that AI absorbed 71% of VC funding in Q1 2025.
These numbers also fail to account for the massive infrastructure that companies like OpenAI and Anthropic don’t have to pay for. The limitations of the early internet were twofold:
- The fiber-optic cable boom that led to the fiber-optic bubble bursting when telecommunications companies massively over-invested in infrastructure, which I will get to shortly.
- The lack of scalable cloud infrastructure to allow distinct apps to be run online, a problem solved by Amazon Web Services (among others).
In generative AI’s case, Microsoft, Google, and Amazon have built out the “fiber optic cables” for Large Language Models. OpenAI and Anthropic have everything they need. They have (even if they say otherwise) plenty of compute, access to the literal greatest minds in the field, the constant attention of the media and global governments, and effectively no regulations or restrictions stopping them from training their models on the works of millions of people, or destroying our environment.
They have already had this support. OpenAI was allowed to burn half a billion dollars on a training run for GPT-4.5 and 5. If anything, the massive amounts of capital have allowed us to massively condense the time in which a bubble goes from “possible” to “bursting and washing out a bunch of people,” because the tech industry has such a powerful follower culture that only one or two unique ideas can exist at one time.
The “early days” argument hinges on obscurity and limited resources, something that generative AI does not get to whine about. Companies that make effectively no revenue can raise $500 million to do the same AI coding bullshit that everybody else does.
In simpler terms, these companies are flush with cash, have all the attention and investment they could possibly need, and are still unable to create a product with a defined, meaningful, mass-market use case.
In fact, I believe that thanks to effectively infinite resources, we’ve speed-run the entire Large Language Model era, and we’re nearing the end. These companies got what they wanted.
BOOSTER QUIP: This Is Like The Dot Com Boom — Even If This All Collapses, The Overcapacity Will Be Practical For The Market Like The Fiber Boom Was!
Bonus trick: ask them to tell you what “the fiber boom” was.
So, a little history.
The “fiber boom” began after the Telecommunications Act of 1996 deregulated large parts of America’s communications infrastructure, creating a massive boom — a $500 billion one, to be precise, primarily funded with debt:
In one sense, explaining what happened to the telecom sector is very simple: the growth in capacity has vastly outstripped the growth in demand. In the five years since the 1996 bill became law, telecommunications companies poured more than $500 billion into laying fiber optic cable, adding new switches, and building wireless networks. So much long-distance capacity was added in North America, for example, that no more than two percent is currently being used. With the fixed costs of these new networks so high and the marginal costs of sending signals over them so low, it is not a surprise that competition has forced prices down to the point where many firms have lost the ability to service their debts. No wonder we have seen so many bankruptcies and layoffs.
This piece, written in 2002, is often cited as a defense against the horrifying capex associated with generative AI, as that fiber optic cable has been useful for delivering high-speed internet. Useful, right? Except this period was also defined by a glut of over-investment, ridiculous valuations, and outright fraud.
In any case, this is not remotely the same thing and anyone making this point needs to learn the very fucking basics of technology.
[Content truncated due to length…]