Here’s what’s strange about Section 230 of the Communications Decency Act, the law that made the open internet possible: both sides of the traditional political spectrum hate it, but for opposite reasons. That alone should suggest something is wrong with their analysis.
Republicans hate it because they say it lets websites censor conservative speech. Democrats hate it because they say it lets websites host dangerous disinformation.
Read those two sentences again.
One side is furious that platforms can moderate. The other side is furious that platforms don’t have to moderate. Both sides are attacking the same 26-word provision of a 30-year-old law—and if you understand why their complaints are contradictory, you understand what Section 230 actually does.
This weekend marked the 30th anniversary of the Telecommunications Act of 1996, which contained the mostly unconstitutional Communications Decency Act, which inexplicably contained Section 230. (If you want the full history, I hosted a podcast series about it last year.) And after three decades, there’s now a concerted, bipartisan effort to kill it—by people who either don’t understand what the law does, or understand perfectly well and see its destruction as a path to controlling the flow of information online.
Years back I wrote a piece debunking many of the myths about 230. The myths have only multiplied since.
Both critiques, stripped of their partisan framing, are about the same thing: who gets to control what speech appears where. And Section 230’s answer to both sides is the same: pound sand.
That’s what the law actually does. It doesn’t mandate or prohibit “censorship.” It doesn’t require neutrality (that’s a myth that won’t die). It simply says: if you have a problem with content online, take it up with the person who created it, not the service hosting it. Platforms can moderate however they see fit—aggressively, lightly, inconsistently, politically—and they won’t face ruinous liability for those choices. They also won’t face liability for what they don’t remove.
This is what makes an open internet possible. Without that protection, no service would risk hosting user content at all. Or if they did, every moderation decision would require a lawyer’s sign-off, optimizing for liability reduction rather than healthy communities. The people who actually understand how to build good online spaces—trust and safety professionals, community managers—would be overruled by legal departments playing defense.
Almost all criticism of Section 230 is not actually about Section 230. It’s about one of two things: (1) not liking something in society that manifests online, and incorrectly believing that changing the law will somehow fix it, or (2) wanting control over what content platforms host.
So what happens if critics get their way? There’s a lobbying campaign right now claiming that reforming or repealing 230 will lead to “greater responsibility from tech companies.”
This is exactly backwards.
Without 230’s protections, smaller platforms—the ones that might actually compete with the giants—get destroyed first. They can’t afford the vexatious lawsuits. They can’t afford buildings full of lawyers. The big players survive, and their market position gets locked in even harder.
And those surviving giants won’t become more responsible. They’ll become less. Any competent legal team will tell them: the less you know, the less liability you have. Don’t proactively look for harmful content. Don’t research how your platform causes harm—those findings would be exhibit A in every lawsuit. Just stick your head in the sand and let the lawyers handle the subpoenas.
This is how liability regimes work, and America’s exceptionally litigious legal culture makes these incentives even stronger. The critics either don’t understand this or don’t care, because their actual goal was never “responsibility.” It was control. That they’ve duped some tech critics into thinking it’s about “responsibility” or “safety” doesn’t change that. Because it won’t improve responsibility or safety. But it will give politicians tremendous power over online speech.
Thirty years ago, a 26-word provision buried in a mostly unconstitutional law kicked off the open internet. It let anyone build a platform, host a community, create something new—without needing permission from lawyers or regulators first. That era is now under direct attack by people who misrepresent what Section 230 does and misrepresent what killing it would mean.
The open web turned 30 this weekend. The bipartisan campaign to kill it was never about responsibility or safety; it was always about control. Whether the open web sees age 31 comes down to 26 words that tell both sides to pound sand.
From Techdirt.