

Hospitals denying maternal mental health care, employers using AI tools that drive workplace discrimination, and “phantom” nitrates in water supplies leading to chronic illness in children. These concerns sound like stories Project Censored highlights in its annual report of the most important but under-covered news stories.
These topics, along with a dozen other stories like them, were brought to our attention last June by Project Censored judge Nicholas Johnson. The author of How to Talk Back to Your Television Set (1970) and Your Second Priority (2008), and a former Federal Communications Commission commissioner (1966–1973), Johnson has served as one of Project Censored’s judges since the organization’s inception in 1976, helping the Project identify and vet its annual record of each year’s top “censored” stories. When Nick contacts us with story tips, we pay attention.
A chatbot had identified these news reports after Nick directed it to provide fifteen examples of “potentially significant news stories, made public in publications with small circulations, that have not been given the attention of major media during the past year,” along with reasons why those stories should have been given more attention. Chatbots, such as OpenAI’s ChatGPT and Google’s Gemini, use generative AI to respond to user queries through written or spoken “conversations.”
At the time, Project Censored staff were undertaking our own review of the year’s top stories, as identified by students and their faculty mentors who participate in the Project’s Campus Affiliates Program, and vetted by the Project’s panel of esteemed judges. Our story review process involves five distinct stages of meticulous examination to determine each candidate story’s importance, timeliness, quality of sources, and, ultimately, trustworthiness. This process takes hundreds of hours of human effort, so we all smiled when Nick’s email message arrived with the winking salutation, “Sorry to be so late in getting this to you, but it took my AI at least 15 seconds to do it.”
Here’s one example from the story list Nick sent us, based on his chatbot’s response:
“Walmart’s Quiet Drone-Surveillance Rollout”
—Publication: The Plains Weekly Register (circ. ~4,000)
—Scoop: Leaked local franchise agreements show Walmart testing AI‐drone patrols in five rural states—without public notice or aerial-privacy rules.
—Why It Matters: Sets a precedent for commercial drone policing private property—civil liberties groups must litigate, yet the story died regionally.
At the conclusion of its report, the chatbot produced this analytic summary:
These stories highlight systemic gaps in coverage—when national outlets focus on high-profile crises, they often miss crises brewing in our backyards: local infrastructure failures, emerging health threats, new dimensions of environmental degradation, and AI’s stealth intrusions into daily life. Each tale carried not just local but national (even global) implications, and broader attention could have spurred swifter policy, regulatory, or public-health responses.
Reading this, we were impressed. That paragraph sounds as if it could have been lifted from a previous volume of the Project’s State of the Free Press yearbook.
Perhaps, in a way, it was.
We quickly determined that not only were all of the “potentially significant” news stories on the chatbot’s list fabricated, but so too were all fifteen of the allegedly independent news organizations credited with breaking those stories. The chatbot made it all up, but did not disclose that it was generating fictional information rather than providing factual answers.
To build the training datasets for large language models like ChatGPT and Gemini, tech developers scrape the internet, cataloging and extracting data from every corner of the web—often without the original creators’ knowledge or permission, much less any financial compensation. In theory, the chatbot’s list of stories that Nick shared could have been modeled after Project Censored’s annual Top 25 lists, which are archived on the Project’s website.
But chatbots, unlike Project Censored, do not abide by the guiding ethical principles of journalism: seeking the truth, acting with transparency and accountability, and minimizing harm. Instead, they simply mimic reporting that highlights societal inequities, without understanding the underlying context, sources, or human experiences that give such stories meaning.
Chatbots can reproduce the appearance of investigative journalism—which, at its best, uncovers corruption, censorship, or injustice—but they lack the moral and analytical frameworks to properly verify facts, assess motives, and weigh the potential consequences of their reporting. Celine Schreiber of Weave News describes this as the “risk of replication without representation: A simulacrum of independent journalism that lacks its political or community roots.”
Corporate media and tech companies refer to these errors as “hallucinations”—when AI systems literally make stuff up—a term that anthropomorphizes the bots and downplays the consequences of perpetuating inaccuracies, two regular pitfalls of reporting on AI. Developers do not entirely know why hallucinations occur, so they have no way to stop them.
“Despite our best efforts, they will always hallucinate,” Amr Awadallah, the chief executive of Vectara, a start-up that builds AI tools for businesses, and a former Google executive, told the New York Times last May. “That will never go away.”
In October 2025, the BBC, in partnership with the European Broadcasting Union, published an extensive study, covering twenty-two public service media organizations in eighteen countries, that found AI assistants such as ChatGPT, Copilot, and Gemini “misrepresent” news content roughly 45 percent of the time. About 31 percent of the responses researchers collected demonstrated serious “sourcing problems—missing, misleading, or incorrect information,” and about 20 percent “contained major accuracy issues.” As EBU’s deputy director general Jean Philip De Tender noted, “These failings are not isolated. … They are systemic, cross-border, and multilingual, and we believe this endangers public trust.”
According to Pew Research, while relatively few Americans currently use AI chatbots like ChatGPT to obtain news information, 42 percent of those who do report that “they generally find it difficult to determine what is true and what is not.” As more users turn to AI systems rather than traditional search engines to find information online, society faces a deepening crisis of misinformation—one in which greed, competitive pressure, and unchecked technological expansion continue to erode public trust in media.
Recent research documents the potential for social media platforms to manipulate public opinion. Tech companies now control the tools for accessing information and the metrics of visibility, consistently shaping public discourse. Public vigilance and scrutiny are vital as AI strengthens its grip on our collective reality.
The chatbot’s Walmart “story,” mentioned above, closely resembles an actual news story from Project Censored’s list of this year’s most censored stories—a report by Jacobin about Amazon and Walmart using hostile surveillance technology against warehouse employees. The uncanny resemblance between the chatbot’s fabricated report and Jacobin’s real exposé underscores the urgent need for critical media literacy (CML), which empowers people not only to assess the trustworthiness of specific media messages but also to understand the power dynamics that shape those messages’ production.
Increasingly, those power dynamics include the role of chatbots and other AI-powered systems in filtering, blocking—and sometimes fabricating—the kinds of information and perspectives people need in order to be informed and actively engaged. For fifty years, people working with Project Censored—professors, students, media scholars, not machines—have scoured an increasingly large, diverse array of independent outlets to identify, validate, and highlight important but underappreciated news stories. Reflecting the essential role of a free press in a functioning democracy, Project Censored remains committed to serving the public good, rather than private interests, by exposing social problems and empowering people to respond to them.
Critical media literacy demands examining media for its power and purpose by taking a closer look at ownership, production, and distribution. Carefully following those trails is one way to combat AI misinformation. As Nick’s chatbot-generated list of “censored” stories demonstrates, without CML the information AI provides is increasingly difficult to distinguish from the truth.
This first appeared on Project Censored.


