For almost an hour, police Sgt. Matt Gilmore and his K-9 dog, Gunner, searched for a group of suspects; during that time, a body camera recorded every word and bark.
The Oklahoma City police sergeant would normally grab his laptop and spend another 30 to 45 minutes writing a report about the search. This time, he had artificial intelligence produce the first draft.
Drawing from the sounds and radio chatter picked up by the microphone on Gilmore’s body camera, the AI tool generated a report in eight seconds.
“It was a completely accurate account, better than anything I could have written, and it flowed better,” Gilmore said. The report even included a detail he didn’t remember hearing: another officer’s mention of the color of the car the suspects fled in.
The police department in Oklahoma City is one of just a few experimenting with AI chatbots to produce first drafts of incident reports. Officers who have used the technology are enthusiastic about the time it saves, but some legal experts, prosecutors, and police watchdogs worry about how it could alter a key document that helps determine who is charged with a crime and who goes to jail or prison.
Built with the same technology as ChatGPT and sold by Axon, the company best known for creating the Taser and the leading supplier of body cameras in the United States, the tool could become yet another “game changer” for law enforcement, according to Gilmore.
“They become police officers because they want to do police work; spending half their day doing data entry is just a tedious part of the job that they hate,” said Axon founder and CEO Rick Smith. Draft One, the company’s new AI product, has drawn the “most positive reaction” of any product Axon has introduced, he said.
“There are worries now, for sure,” Smith continued. District attorneys prosecuting criminal cases, in particular, want to be sure that police officers, not just an AI chatbot, are writing their reports, he said, since officers may have to testify in court about what they saw. “They never want to call an officer to the stand and say, ‘Well, the AI wrote that, I didn’t,’” Smith said.
Police departments are no strangers to AI; they have long used algorithmic tools to read license plates, recognize suspects’ faces, detect gunshots, and predict where crimes might occur. Concerns over privacy and civil rights have prompted legislators to try to impose safeguards on many of those applications. But AI-generated police reports are so new that few, if any, regulations govern their use.
Oklahoma City community organizer aurelius francisco, who learned about the new tool from The Associated Press, finds it “deeply troubling” in several ways, among them the worry that society’s racial biases and prejudices will make their way into the AI. (francisco renders his name in lowercase as a way of rebelling against professionalism.)
francisco, a co-founder of the Foundation for Liberating Minds in Oklahoma City, said it is troubling enough that the technology comes from the same company that supplies the department with Tasers.
Automating those reports, he said, will make it “easier for the police to monitor, harass, and violently attack members of the community.” It makes life more difficult for Black and Brown people while making the officers’ jobs simpler.
Before the trial in Oklahoma City, police officials demonstrated the technology to local prosecutors, who advised some caution before using it in high-stakes criminal cases. For now, its use is limited to reports on minor incidents that don’t result in an arrest.
“No violent crimes, no felonies, no arrests,” said Capt. Jason Bussert of the Oklahoma City police, who handles information technology for the 1,170-officer force.
That’s not the case in Lafayette, Indiana, where Police Chief Scott Galloway told the AP that officers can use Draft One on any kind of case and that the tool has been “incredibly popular” since the pilot program began earlier this year.
Or in Fort Collins, Colorado, where police Sgt. Robert Younger said officers are free to use it on any type of report, though they have found it doesn’t work well on patrols of the downtown bar district because of the “overwhelming amount of noise.”
In addition to using AI to analyze and summarize the audio recording, Axon experimented with computer vision to describe what is “seen” in the video footage, before quickly realizing the technology wasn’t ready.
Some of the responses it tested were not overtly racist, but they were insensitive in other ways. “Given all the sensitivities around policing, around race and other identities of people involved, that’s an area where I think we’re going to have to do some real work before we would introduce it,” said Smith, the Axon CEO.
Those experiments led Axon to focus squarely on audio in the product it unveiled in April during its annual conference for police officials.
The technology relies on the same generative AI model that powers ChatGPT, made by San Francisco-based OpenAI. OpenAI has a close business partnership with Microsoft, Axon’s cloud computing provider.
Axon has access to more knobs and dials than a typical ChatGPT user would, even though the underlying technology is the same, said Noah Spitzer-Williams, who manages Axon’s AI products. Turning down the “creativity dial,” he said, keeps the model grounded in the facts so it doesn’t embellish or hallucinate the way ChatGPT can on its own.
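Axon hasn’t published its settings, but in OpenAI’s public API the “creativity dial” maps most directly to the temperature parameter, which controls how much randomness goes into the model’s word choices. The sketch below is a hypothetical illustration of that idea, not Axon’s actual code; the model name, system prompt, and transcript placeholder are all assumptions.

```python
# Hypothetical sketch of a low-"creativity" report drafter using OpenAI's
# public API. This is NOT Axon's code; the model choice and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

transcript = "..."  # placeholder: body-camera audio, already transcribed to text

response = client.chat.completions.create(
    model="gpt-4o",   # assumed model; Axon hasn't said which version it uses
    temperature=0.0,  # the "creativity dial" turned down: pick the most likely
                      # words at each step, which reduces embellishment
    messages=[
        {
            "role": "system",
            "content": (
                "Draft a police incident report using only facts stated in the "
                "transcript. Do not add details that are not in the transcript."
            ),
        },
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)  # first draft for the officer to review
```

Even at a temperature of 0, a model can still hallucinate; the setting only makes its output more deterministic, which is one reason the officer’s review of the draft still matters.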
Axon won’t disclose how many police departments are using the technology. It isn’t the only vendor, either; startups such as Policereports.ai and Truleo are pitching similar products. But given Axon’s deep ties to the police departments that buy its body cameras and Tasers, experts and law enforcement officials expect AI-generated reports to become more widespread in the coming months and years.
Before that happens, legal scholar Andrew Ferguson would like to see more public discussion of the benefits and potential harms. For one thing, the large language models behind AI chatbots are prone to making up false information, a problem known as hallucination, which could add convincing and hard-to-notice falsehoods to a police report.
Ferguson, a law professor at American University who is working on what is expected to be the first law review article on the emerging technology, said he worries that automation and the ease of the technology could cause police officers to become less careful with their writing.
A police report is important in determining whether an officer’s suspicion “justifies someone’s loss of liberty,” Ferguson said. It is sometimes the only testimony a judge sees, especially for petty offenses.
Human-generated police reports have their own shortcomings, Ferguson said, though it’s an open question which kind is more reliable. For some officers who have tried the tool, it is already changing how they respond to a reported crime: they narrate aloud what is happening so the camera can better capture what they want to put in writing.
As the technology catches on, Bussert predicted that officers will become “more and more verbal” in describing what’s in front of them. After Bussert loaded the video of a traffic stop into the system and pressed a button, the program produced a narrative-style report in conversational language based on the body camera’s audio, complete with dates and times, just as an officer would have typed from his notes.
The report was finished in a matter of seconds, Gilmore said, leaving him thinking there was nothing he needed to change. At the end of the report, the officer must check a box indicating it was generated with the use of AI.