We’ve spent years documenting the challenges of crafting sensible AI policy, from Biden’s misguided plan, to various state-level attempts at regulation. Now Trump’s AI Action Plan has landed, offering a striking example of how even potentially useful policy ideas can be corrupted by political theater and special interests.
The plan reflects the deep influence of the venture capital crowd that has cozied up to the administration, while simultaneously embracing culture war rhetoric that undermines its own stated goals. Like Biden’s approach, it’s deeply flawed—though in different and possibly more damaging ways. Still, the plan could have been much worse (indeed, I expected it to be much worse).
Let me break this down into the good, the bad, and the incredibly stupid.
The (Surprisingly) Good
Buried beneath the MAGA rhetoric, there are actually some decent policy ideas here. The plan correctly identifies that the current regulatory patchwork is a mess—having 50 different state approaches to AI regulation is genuinely problematic for innovation.
The emphasis on open-source and open-weight models is smart policy. These models democratize access to AI capabilities and prevent lock-in to big tech platforms.
Open-source and open-weight AI models are made freely available by developers for anyone in the world to download and modify. Models distributed this way have unique value for innovation because startups can use them flexibly without being dependent on a closed model provider. They also benefit commercial and government adoption of AI because many businesses and governments have sensitive data that they cannot send to closed model vendors. And they are essential for academic research, which often relies on access to the weights and training data of a model to perform scientifically rigorous experiments.
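To make that "download and modify" point concrete, here's a minimal sketch of what running an open-weight model locally looks like, with no closed vendor's API involved. It assumes the Hugging Face transformers library is installed, and the model name is just one well-known open-weight release used for illustration:

```python
# Minimal sketch: "open weights" means anyone can pull a model's weights
# down and run (or fine-tune) it on their own hardware, with no
# dependence on a closed model vendor's API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"  # example open-weight release

tokenizer = AutoTokenizer.from_pretrained(model_name)  # fetches tokenizer files
model = AutoModelForCausalLM.from_pretrained(model_name)  # fetches the weights

inputs = tokenizer("Open-weight models let anyone", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Once the weights are on your machine, nothing stops you from fine-tuning them on sensitive data that could never be sent to a closed provider, which is exactly why researchers and governments care about this.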
Given the cronyism we’ve seen of late in the Trump administration, it wouldn’t have been surprising to see them back off this commitment to open source and open weights, which are going to be absolutely necessary to avoid lock-in with giant centralized AI systems.
And hey, at least they’re not trying to impose some kind of mandatory licensing scheme that would do real damage.
The Bad
But then we get to how they’re implementing all this, and it’s just… not great at all.
The whole framing around “woke AI” is pure culture war nonsense dressed up as policy. The executive order demanding that the federal government only use “unbiased AI principles” is particularly rich, since what they’re actually demanding is AI that’s biased toward their specific worldview. They want AI that prioritizes “truth-seeking” and “ideological neutrality”—but only their version of truth, and only neutral toward ideologies they don’t like.
The Order directs agency heads to procure only large language models (LLMs) that adhere to “Unbiased AI Principles” defined in the Order: truth-seeking and ideological neutrality.
Truth-seeking means that LLMs shall be truthful and prioritize historical accuracy, scientific inquiry, and objectivity, and acknowledge uncertainty where reliable information is incomplete or contradictory.

Ideological neutrality means that LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas like DEI, and that developers will not intentionally encode partisan or ideological judgments into an LLM’s outputs unless those judgments are prompted by or readily accessible to the end user.
That the order specifically calls out DEI as an example of problematic bias is hilarious because it reveals they have zero understanding of how AI actually works. AI systems reflect the biases in their training data and the choices made by their developers. There’s no such thing as “neutral” AI—every system embeds certain assumptions and values.
As Elizabeth Nolan Brown writes at Reason, any effort for the government to create a “non-woke” AI is going to backfire on the entire industry:
The very act of trying to depoliticize or neutralize AI, when done by politicians, could undermine AI’s potential for neutral and nonpolitical knowledge dissemination. People are not going to trust tools that they know are being intimately shaped by particular political administrations. And they’re not going to trust tools that seem like they’ve been trained to disregard reality when it isn’t pretty, doesn’t flatter people in power, or doesn’t align with certain social goals.
Every AI system has biases. It has to. That’s how it works. There is no such thing as an unbiased AI. It’s just a question of what kind of bias you want. And calling for “unbiased AI” is simply a very silly way to say “bias it the way I think it should be biased.” The best way to deal with this is… to go back to the earlier section here, and use more open models with open weights that can be adjusted by users, rather than letting anyone—companies or government—fully control it.
So for all the talk of supporting open models, pressuring the government to use only “non-woke” models will actively limit its ability to use more open systems.
The Incredibly Stupid
Now we get to the really dumb stuff. Federal agencies will now have to waste time and resources figuring out whether AI systems are sufficiently “non-woke” for government procurement. Imagine being the poor bureaucrat who has to write the analysis exploring exactly how much historical accuracy an AI needs to display when asked about, say, the Civil War.
AI companies eyeing federal contracts now face a choice: do they create special “MAGA-compliant” versions of their models that give different answers depending on who’s asking? Do they just avoid federal contracts entirely? Either way, it’s a lose-lose that makes the government less effective and the market less efficient.
And, no, contrary to what some have said, this probably isn’t a First Amendment violation. Under the Supreme Court’s ruling in US v. American Library Association, the government can put some content-based restrictions on how federal funds are used. So while this policy is stupid and counterproductive, it’s likely constitutional. If they had gone further and tried to force AI systems to reflect their worldview, it would be a clearer First Amendment violation. But all the order says is that if they judge your AI to be too woke, the government is barred from using it.
That’s stupid, but probably not unconstitutional.
The end result? The US government will deliberately exclude potentially better AI tools from consideration based on ideological purity tests. That’s not exactly a recipe for maintaining technological leadership.
Off-Script Copyright Chaos
Then there are Trump’s perhaps (?) improvised remarks about copyright, which weren’t in the official plan but are worth addressing. He basically said AI companies shouldn’t be expected to pay for every piece of content they train on because “China’s not doing it.”
“You can’t be expected to have a successful AI program when every single article, book or anything else that you’ve read or studied, you’re supposed to pay for,” he said. “You just can’t do it because it’s not doable. … China’s not doing it.”
He’s actually not wrong about the basic principle here—training AI models should generally fall under fair use. If it’s not, we are using copyright law to challenge the right to read, and that way leads to dangerous results many people aren’t considering in their rush to demonize AI companies.
But rather than thinking through the actual implications of that principle, Trump focused solely on what China is doing. China does a lot of things the US doesn’t do, and that generally isn’t a reason to follow them down every path.
Instead, we should be looking for solutions that don’t involve destroying fair use, but that still find ways to make sure content creators are supported. The Trump plan doesn’t have any of that, and if you asked the folks who wrote it, I’m sure they’d just respond with some nonsense about how cryptocurrency will solve it.
The Bottom Line
There are some genuinely good ideas in this action plan. But they’re wrapped in so much ideological nonsense and implemented so poorly that the net effect is probably negative.
We needed clearer AI policy from the federal government. Instead, we got culture war politics disguised as technology strategy. The result is a plan that will waste government resources, confuse the market, and probably make us less competitive globally—all while claiming to do the opposite.
But hey, at least they’ll have government chatbots that won’t offend their delicate sensibilities. That’s what matters, right?