BLOG POST: Generative AI – The Future of Efficient Police Reporting?*

2025 | Blog Post

*This writing is a blog post. It is not a published IPTF Journal article.

Maeve Silk

Axon Draft One is an artificial intelligence (AI) tool that generates police reports from recorded body camera audio.[1] Police departments across the United States use such tools to increase efficiency.[2] Officers who spend less time writing reports are free to spend more time directly interacting with, and protecting, members of their community.[3] Yet drafting reports independently reinforces officer accountability; if AI takes over that task, officers' reduced involvement in the process could affect their policing methods.[4]

Many fear that using AI to generate police reports could harm criminal defendants.[5] False or misleading statements made by officers during an arrest may skew the content of AI-generated reports.[6] If the AI produces inaccurate information based on these statements, it could negatively affect the defendant’s case.[7] Furthermore, police departments have not been transparent about the data used to train these AI systems.[8] The training data may fail to account for regional differences in policing.[9] If the data is biased, it could negatively affect the model’s output.[10] Many AI models are also predictive, raising concerns about whether their outputs could bypass legal requirements, such as probable cause, by drawing on biased training data.[11] Since police reports are critical for fact-finding and for justifying police activity, inaccurate AI-generated reports could unfairly affect defendants’ rights during criminal trials.[12]

While AI presents many risks when used to draft police reports, there are potential ways to safeguard its use.[13] In Utah, a proposed bill would require AI-generated reports to be labeled and reviewed by officers for accuracy.[14] These safeguards could improve transparency and account for potential misinterpretations by the AI.[15] Another safeguard would require disclosing information about the data used to train the AI model alongside any AI-generated material presented in court.[16] Transparency about the parameters and linguistic choices set by the model is critical.[17] Edit logs can also provide valuable information, as changes made by officers during their review can help evaluate a report’s accuracy.[18] Some AI models have built-in protections.[19] For example, Axon’s Draft One software is not designed for writing reports on felonies or arrests.[20] It also inserts dummy paragraphs that officers must delete, ensuring they read their reports before submission.[21] Ultimately, the public and participants in the criminal justice system must clearly understand how these technologies are being developed and used.[22] Without oversight, these models could increase error and bias.[23]


[1] How Can Police Reports Created Using AI Affect Criminal Cases?, Connecticut Criminal Lawyer (Sept. 20, 2024), https://www.connecticutcriminallawyer.com/personal-injury-attorney-blog/how-can-police-reports-created-using-ai-affect-criminal-cases-ct.

[2] Id.

[3] Megan Gates, AI Meets Incident Reports: Considerations and Cautions for Field Use, ASIS (Jan. 20, 2025), https://www.asisonline.org/security-management-magazine/articles/2025/01/incident-reports/ai-incident-reports/.

[4] Jay Stanley, AI Generated Police Reports Raise Concerns Around Transparency, Bias, ACLU (Dec. 10, 2024), https://www.aclu.org/news/privacy-technology/ai-generated-police-reports-raise-concerns-around-transparency-bias.

[5] Id.

[6] Connecticut Criminal Lawyer, supra note 1.

[7] Id.

[8] Id.

[9] Andrew Guthrie Ferguson, Generative Suspicion and the Risks of AI-Assisted Police Reports, Northwestern L. Rev. forthcoming (Jan. 24, 2025), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4897632#.

[10] Connecticut Criminal Lawyer, supra note 1.

[11] Ferguson, supra note 9.

[12] Ferguson, supra note 9; Connecticut Criminal Lawyer, supra note 1.

[13] Matthew Guariglia, Utah Bill Aims to Make Officers Disclose AI-Written Police Reports, EFF (Feb. 21, 2025), https://www.eff.org/deeplinks/2025/02/utah-bill-aims-make-officers-disclose-ai-written-police-reports.

[14] Guariglia, supra note 13.

[15] See id. (positing that there are potential transparency and error issues in unregulated generative AI police reports).

[16] Ferguson, supra note 9.

[17] Id.

[18] Id.

[19] Gates, supra note 3.

[20] Id.

[21] Id.

[22] See Guariglia, supra note 13 (explaining that transparency about the mechanisms and use of AI models that generate police reports is necessary for the public).

[23] Stanley, supra note 4; Kristian Hammond, The Complex Promise and Perils of AI in Policing, Northwestern Center for Advancing Safety of Machine Intelligence (Sept. 9, 2024), https://casmi.northwestern.edu/news/articles/2024/the-complex-promise-and-perils-of-ai-in-policing.html.