Draft One
Cops across the US are moving to embrace AI-written police reports — and according to The Associated Press, experts are sounding the alarm.
The AI tool, called “Draft One,” was announced in April by Axon, a police tech company that also makes Tasers and other weapons. Axon claims the program uses OpenAI’s GPT-4 large language model to reliably generate police reports from the audio captured by officers’ body cameras, and has marketed it as a productivity booster that can cut down on paperwork hours.
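Axon hasn’t published Draft One’s internals, but based on the company’s own description, a transcribe-then-draft pipeline built on OpenAI’s public API would look something like the minimal sketch below. To be clear, the model choices, prompt wording, and function names here are illustrative assumptions, not Axon’s actual implementation.

```python
# Illustrative sketch only: Axon has not disclosed how Draft One works.
# Assumes a simple two-step pipeline: speech-to-text, then an LLM draft.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_report(audio_path: str) -> str:
    # Step 1: transcribe the body camera audio (Whisper model).
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=f,
        )

    # Step 2: ask GPT-4 to turn the transcript into a report draft.
    # This prompt is hypothetical, not Axon's actual instructions.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "Draft a factual incident report from this body camera "
                    "transcript. Do not state anything the audio does not support."
                ),
            },
            {"role": "user", "content": transcript.text},
        ],
    )
    return response.choices[0].message.content
```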
“If an officer spends half their day reporting, and we can cut that in half,” Axon CEO Rick Smith told Forbes at the time, “we have an opportunity to potentially free up 25 percent of an officer’s time to be back out policing.”
But as far as paperwork goes, police reports are more sensitive than your average email, and generative AI is a technology prone to what the industry calls “hallucination”: a catch-all term for errors such as fabricated facts or otherwise incorrect information that routinely turn up in synthetic text.
Even so, American police departments in states including Colorado, Indiana, and Oklahoma are starting to test the Draft One waters, with some departments even allowing officers to use the tool for any kind of case, as opposed to only minor incident reports. And experts, unsurprisingly, are worrying about the consequences. After all, a police report has a foundational role in investigative and legal processes; is it wise — or even ethical — to outsource it?
“I am concerned that automation and the ease of the technology,” Andrew Ferguson, an American University law professor who penned the first law review of AI-generated police reports, told the AP, “would cause police officers to be sort of less careful with their writing.”
Knobs and Dials
Axon, for its part, has defended the efficacy of its drafting tool, with the company’s AI product manager Noah Spitzer-Williams telling the AP that they have “access to more knobs and dials” than the “average ChatGPT user would have.” He added that Axon has turned down GPT-4’s “creativity dial,” for example, which he says limits Draft One’s potential to “embellish or hallucinate” like ChatGPT does.
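The “knobs and dials” presumably refer to parameters that OpenAI exposes to API customers but not to users of the consumer ChatGPT interface. The best known of these is temperature, which controls sampling randomness, and it is the most plausible candidate for the “creativity dial” Spitzer-Williams describes. A minimal sketch of turning that dial down, with a hypothetical prompt for illustration:

```python
# Illustrative: the "creativity dial" is presumably the temperature
# parameter, available to API users but hidden in the ChatGPT web app.
from openai import OpenAI

client = OpenAI()
prompt = "Summarize this stop: driver pulled over for a broken taillight."

# Compare a ChatGPT-like default against a "dialed down" setting.
for temperature in (1.0, 0.0):
    out = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # lower = less sampling randomness
    )
    print(f"temperature={temperature}: {out.choices[0].message.content}")
```

Worth noting: lowering temperature makes the model’s word choices less varied, but it doesn’t make the model factual. A language model can still confidently hallucinate at temperature zero.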
Dials and knobs aside, though, this kind of automation is still rife with legal and ethical questions. Sure, humans make mistakes just like machines do, and they certainly have biases.
But effective policing centers on humanity, so when automated tools start to chip away at pieces of that, what might be lost in the process deserves a public discussion. That includes people’s lives, many of which have already been horrifically harmed by the introduction of not-yet-ready AI programs into law enforcement investigations.
“The open question,” Ferguson wrote in his review, published last month, “is how reliance on AI-generative suspicion will distort the foundation of a legal system dependent on the humble police report.”
More on AI and police: We’re Losing Our Minds at This “Computer Generated” Sketch of a Police Suspect