If you measure only in dollars (and not in dignity), YouTube got a pretty good deal. This week, the Google-owned platform paid $24.5 million to settle a lawsuit brought by President Donald Trump after the company suspended his channel six days after the January 6 riot at the Capitol. At the time, YouTube said it was “concerned about the ongoing potential for violence.” (Trump’s account was eventually reinstated in March 2023.) The terms of the settlement will direct $22 million to the Trust for the National Mall, a nonprofit group that is raising money to finance an addition to the White House. Most creators are lucky if they get a gold plaque from YouTube; Trump’s getting a new ballroom.

This is just the latest example of major tech companies bowing to Trump. Earlier this year, Meta and X settled similar lawsuits with Trump over suspending his accounts, paying $25 million and $10 million, respectively. These three companies alone have collectively paid Trump and his associates $59.5 million for the sin of enforcing the rules of their own platforms. There’s also Amazon, which made a reported $40 million deal with Melania Trump on a documentary project. Add to that the personal donations to Trump from various tech CEOs, including Apple’s Tim Cook, who gave $1 million to his inaugural fund.

All of this amounts to a rounding error for the tech giants—averaged out, YouTube made more than $107 million from ad revenue every single day last quarter—but these are still acts of profound obsequiousness and corporate cowardice. There are any number of reasons they may have chosen to pay up: Perhaps the tech elite have become genuinely red-pilled, fear regulation, or don’t want to lose out on government contracts. They also have good reason to worry about personal retribution (last year, Trump accused Meta CEO Mark Zuckerberg of plotting against him in the 2020 presidential election and said that he would “spend the rest of his life in prison” if he did so again). But in any case, by settling with Trump over these suspensions, the companies are effectively arguing that their content-moderation decisions following the insurrection were wrong. They are also accepting the premise that the government has the right to tell business owners what they can and cannot allow on their own platforms—a weak stance generally, and a weak stance on free speech specifically.

This is embarrassing for them, but they get something out of it, too. By settling, the companies can pivot toward dispensing with the work of moderation altogether. The decision to suspend Trump becomes, in their telling, a cautionary tale about what happens when platforms are forced to make difficult editorial decisions, and an excuse to take a lighter touch. They double down on the idea that they aren’t truly publishers, which reinforces their long-standing argument that the owners of social platforms should not be held liable for what happens on the sites they run. And they make that claim with a straight face even as they tune their algorithms to alter what content users see.

This is precisely what Meta, X, and now YouTube appear to be doing. In January, Zuckerberg announced a plan to return “to our roots around free expression” by replacing Facebook and Instagram fact-checkers with a system of community notes. (Community notes have been useful in some cases, but they’re not exactly consistent or fully adequate.) Under Musk, X has turned into a white-supremacist-friendly free-for-all of AI slop, Nazi propaganda, and autoplaying murder videos. Last week, Alphabet, YouTube’s parent company, said it would reinstate the accounts of creators banned for spreading election-denial content and misinformation about COVID. “YouTube values conservative voices on its platform and recognizes that these creators have extensive reach and play an important role in civic discourse,” the company wrote in a recent statement to Congress about the decision. The New York Times recently reported that the platform would loosen rules around content, provided the videos “are considered to be in the public interest.”

Multiple things are happening here. The first is that demonstrably false beliefs that were once considered fringe or outrageous are now ideological pillars of the current administration: The 2020 presidential election was stolen; vaccines are very dangerous; January 6 was a civil gathering of patriots. This has led many authority figures in Silicon Valley (who were quite vocal at the time about the need to combat disinformation) to feel sheepish about difficult but quite rational decisions made during the pandemic and the aftermath of the 2020 election—a time of mass death followed by a crisis in which the peaceful transfer of power was horrifically disrupted.

The second is that the Big Tech platforms have, for years, begrudgingly agonized over content-moderation decisions. Facebook, as I wrote in January, is the prime example of this posture. The history of the company is one of Zuckerberg making reactive, often totally contradictory decisions about what’s allowed. Facebook once claimed to be a neutral platform, only to get dragged in front of Congress, where it pledged to “secure elections.” For the better part of the 2010s, Twitter struggled to balance a desire for free-speech maximalism with scattershot attempts to quell harassment on the platform. Despite (and partly because of) its staggering size and reach, YouTube has been drawn into far fewer moderation controversies. But many of its largest moderation decisions—like its decision to take down thousands of bizarre child-exploitation videos in 2017—have been reactive, coming after inquiries from news organizations.

To better understand the extent of the messaging shift from these technology companies, it is worth revisiting their reactions after January 6. Alphabet CEO Sundar Pichai wrote in a note to employees just after the riots that “the lawlessness and violence occurring on Capitol Hill today is the antithesis of democracy and we strongly condemn it.” Four years later, Pichai stood on a dais to watch Trump take the oath of office.

Testifying before Congress in March 2021, Zuckerberg argued that Facebook had done its part “to secure the integrity of our election.” Then, he added, “President Trump gave a speech,” referring to the moment when the president told his supporters, “If you don’t fight like hell, you’re not going to have a country anymore,” and urged them to head to the Capitol building, where lawmakers were certifying the results. “I believe that the former president should be responsible for his words and the people who broke the law should be responsible for their actions,” Zuckerberg said. He, too, attended Trump’s second inauguration. Musk didn’t own Twitter in 2021, but in a blog post at the time, the company called the insurrection “horrific” and was unequivocal in its justification for banning Trump, noting that his posts were “likely to inspire others to replicate the violent acts that took place on January 6, 2021, and that there are multiple indicators that they are being received and understood as encouragement to do so.”

You might notice that these statements and justifications are unusually clear and direct for tech companies and their executives. They aren’t full of vague bromides about community or civic discourse. They reflect the gravity of the moment they describe—a violent mob smashing windows, assaulting police officers, and breaking into the Capitol building to attempt to overturn the results of a presidential election. Twitter’s statement—a dispatch from a company that no longer really exists—is perhaps the most revealing in that it connects actions on the platform to real-world harm. By settling their lawsuits with Trump, the companies are insinuating that these statements, and the enforcement actions that accompanied them, were part of some kind of collective hysteria. In reality, they were the opposite: a rare moment of clarity—a realization that their actions and inactions have consequences for their users and the world.

The job of content moderation at Facebook, YouTube, or even X scale is extremely difficult, bordering on impossible. It requires a level of monitoring that only finicky and error-prone automated systems can handle. It must take place on a global scale, and it demands immense resources. Even then, the systems and the people working inside them will make honest mistakes. Most important, it means having to come up with a set of rigid ideological principles and rules and to enforce them consistently, making difficult calls on nuanced edge cases involving high-stakes actors and events. It’s grinding work that can require exposing low-paid moderators to the absolute worst of humanity. Sometimes there is no clear, right answer on a given ruling. None of this is easy or fun, but it is the work of governance, of responsibility. It is what the money is for, and it comes with the territory of the heady mission statements that tech companies embrace: organizing the world’s information or connecting the world or becoming the global town square. It’s precisely the work these companies would rather not have to do.

In her best-selling memoir this year, the former Facebook employee Sarah Wynn-Williams wrote of the company’s executives that “the more power they grasp, the less responsible they become.” These words are also as good an epigraph for the Trump era as any. Rereading them in light of Big Tech’s full capitulation to the current administration makes clear that, although these about-faces are politically convenient, they reflect a broader harmony between the tech platforms and the MAGA movement. So much of Trump’s core appeal to his supporters is that he offers permission to behave in his image—to live shamelessly, to enjoy a life of impunity, and to operate without having to acknowledge that one’s actions have consequences for others. It is, in other words, an invitation to simultaneously grow more powerful and less responsible.

Big Tech’s MAGA pivot is cynical, cowardly, and self-serving. It is also a perfect match.

