We’ve already written several times about the danger posed by adding AI to law enforcement incident/arrest reports. There are a lot of obvious problems, ranging from AI misinterpreting what it’s seeing to it padding reports with so much unsupported gibberish that it isn’t the time-saver companies like Axon (formerly Taser) claim it will be.

The cost-effectiveness of relying on AI is pretty much beside the point, at least as far as the cops are concerned. This is the wave of the future. Whatever busywork can be pawned off on tireless AI tech will be. It will be up to courts to sort this out, and if a bot can craft “training and expertise” boilerplate, far too many judges will give AI-generated police reports the benefit of the doubt.

The operative theory is that AI will generate factual narratives free of officer bias. The reality is the opposite, for reasons that should always have been apparent. Garbage in, garbage out. When law enforcement controls the inputs, any system — no matter how theoretically advanced — will generate stuff that sounds like the same old cop bullshit.

And it’s not just limited to the boys in blue (who are actually now mostly boys in black bloc/camo) at the local level. The combined forces of the Trump administration’s anti-migrant efforts are asking AI to craft their reports, which has resulted in the expected outcome. The AP caught something in Judge Sara Ellis’s thorough evisceration of Trump’s anti-immigrant forces as they tried to defend the daily constitutional violations they engaged in — many of which directly violated previous court orders from the same judge.

Contained in the 200+ page opinion [PDF] is a small footnote that points to an inanimate co-conspirator in the litany of lies served up by federal law enforcement in defense of its unconstitutional actions:

Tucked in a two-sentence footnote in a voluminous court opinion, a federal judge recently called out immigration agents using artificial intelligence to write use-of-force reports, raising concerns that it could lead to inaccuracies and further erode public confidence in how police have handled the immigration crackdown in the Chicago area and ensuing protests.

U.S. District Judge Sara Ellis wrote the footnote in a 223-page opinion issued last week, noting that the practice of using ChatGPT to write use-of-force reports undermines agents’ credibility and “may explain the inaccuracy of these reports.” She described what she saw in at least one body camera video, writing that an agent asks ChatGPT to compile a narrative for a report after giving the program a brief description and several images.

The judge noted factual discrepancies between the official narrative about those law enforcement responses and what body camera footage showed.

AI is known to generate hallucinations. It will do this even more often when it’s effectively being asked to, as the next line of the AP report makes clear.

But experts say the use of AI to write a report that depends on an officer’s specific perspective without using an officer’s actual experience is the worst possible use of the technology and raises serious concerns about accuracy and privacy.

There’s a huge difference between asking AI to tell you what it sees in a recording and asking it to summarize with parameters that claim the officer was attacked. The first might make it clear no attack took place. The second is just tech-washing a false narrative to protect the officer feeding these inputs to ChatGPT.

AI — much like any police dog — lives to please. If you tell it what you expect to see, it will do what it can to make sure you see it. Pretending it’s just a neutral party doing a bit of complicated parsing is pure denial. The outcome can be steered by the person handling the request.
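To make that difference concrete, here is a minimal sketch using the OpenAI Python SDK. Everything in it is an assumption for illustration: the model name, the placeholder footage summary, and both prompts are hypothetical, and the agents described in the footnote were apparently typing into the ChatGPT app rather than writing code. The point is only that the model’s output inherits whatever framing the prompt supplies.

```python
# Illustrative sketch only: hypothetical prompts and model name, not anything
# from the court record. Requires the openai package and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

FOOTAGE_SUMMARY = "[placeholder description of the body-camera footage]"

# Neutral request: the model is asked only to describe what the footage shows.
neutral = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "Describe factually, without speculation, what happens in this "
            f"body-camera footage summary: {FOOTAGE_SUMMARY}"
        ),
    }],
)

# Leading request: the conclusion is baked into the prompt, so the model's job
# is reduced to dressing the officer's framing up as an official narrative.
leading = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "Write a use-of-force report narrative. The subject attacked the "
            f"officer first. Footage summary: {FOOTAGE_SUMMARY}"
        ),
    }],
)

print("Neutral prompt:\n", neutral.choices[0].message.content)
print("Leading prompt:\n", leading.choices[0].message.content)
```

Nothing in the second response’s tone or formatting will flag that its central claim came from the prompt rather than the footage, which is exactly the problem the judge’s footnote is pointing at.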

While it’s true that most law enforcement officers will write reports that excuse their actions/overreactions, pretending AI can solve this problem does little more than allow officers to spend less time conjuring up excuses for their rights violations. “We can misremember this for you wholesale” shouldn’t be an unofficial selling point for this tech.

And I can guarantee this (nonexistent) standard applies to more than 90% of law enforcement agencies with access to AI-generated report-writing options:

The Department of Homeland Security did not respond to requests for comment, and it was unclear if the agency had guidelines or policies on the use of AI by agents.

“Unclear” means what we all assume it means: there are no guidelines or policies. Those might be enacted at some point in the future following litigation that doesn’t go the government’s way, but for now, it’s safe to assume the government will continue operating without restrictions until forced to do otherwise. And that means people are going to be hallucinated into jail, thanks to AI’s inherent subservience and the willingness of those in power to exploit whatever they can, whenever they can, until they’ve done so much damage to rights and the public’s trust that it can no longer be ignored.

