AI-Generated Evidence Is Finding Its Way Into Courtrooms - Here's What We Know

Generative artificial intelligence (AI) has seen a massive increase in usage across businesses and websites, and even for personal recreation. AI is often controversial, with schools worrying about students using it to cheat on homework and authors like George R. R. Martin suing OpenAI, the maker of ChatGPT, for copyright infringement. Now a new concern has arisen: AI-generated material appearing in the courtroom.

In Mendones v. Cushman & Wakefield, Inc., a housing dispute heard in the California courts, one party submitted a video claimed to be witness testimony. Judge Victoria Kolakowski, however, sensed something was not right with the video, and it turned out to be a deepfake created with AI. "Deepfake" is the common term for AI-created media that mimics someone's voice or appearance, depicting them doing something they never did. The submitting party argued that the burden was on the judge to prove the video was AI-generated, but the case was dismissed regardless.

This case has alarmed both the legal community and people around the country. Though AI can be beneficial in the courtroom, such as helping to clarify evidence or creating models that improve understanding, it can also be used deceptively. People worry that fake audio recordings, photos, or videos of them could be used against them, and judges worry about AI deepfakes sending innocent people to prison.

Efforts to address AI in the courtroom

With AI's surge in usage, many new laws and policies have sprung up in response, such as California's law requiring AI chatbots to confirm they aren't human. The National Center for State Courts (NCSC) released a bench card to help legal professionals determine whether evidence is AI-generated. It contains a list of nine questions these professionals should consider when examining evidence, including how the evidence was obtained, whether a chain of custody can be established, and whether any edits made to the evidence have been disclosed. The bench card also encourages forensic verification of the evidence's authenticity and obtaining a copy of its metadata. Metadata can reveal more about the evidence's origins, potentially exposing it as AI-generated or showing that it conflicts with other, genuine evidence.
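To make that metadata step concrete, here is a minimal Python sketch of the kind of first-pass check an examiner might run, assuming the Pillow imaging library and a hypothetical exhibit filename. Real forensic verification goes far beyond a simple EXIF dump, but it illustrates the idea.

```python
# A minimal sketch of a metadata check, assuming a JPEG exhibit and the
# Pillow imaging library. EXIF fields such as the camera model or the
# editing software can hint at a file's true origin.
from PIL import Image, ExifTags

def dump_exif(path: str) -> None:
    """Print the human-readable EXIF metadata stored in an image file."""
    exif = Image.open(path).getexif()
    if not exif:
        # Many AI image generators write no EXIF at all; an empty block
        # proves nothing on its own, but it is worth flagging.
        print("No EXIF metadata found.")
        return
    for tag_id, value in exif.items():
        tag = ExifTags.TAGS.get(tag_id, tag_id)  # map numeric ID to name
        print(f"{tag}: {value}")

dump_exif("exhibit_photo.jpg")  # hypothetical filename
```

Fields like the camera model or editing software, or their complete absence, are only hints: metadata can be stripped or forged, which is why the bench card pairs it with chain-of-custody questions rather than treating it as proof.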

The Trump Administration released America's AI Action Plan in mid-2025, and it includes a section on the use of AI in the legal system. The plan suggests creating a program to analyze evidence for potential deepfakes and establishing a standard for that analysis going forward. Trump has already signed the TAKE IT DOWN Act, which is intended to protect against sexually explicit deepfakes. Louisiana's Act No. 250, discussed in a recent legislative session, also addresses AI deepfakes: it states that any evidence suspected of being AI-generated must be evaluated and confirmed authentic before it is admissible in court.

Can AI be a good thing in court?

You don't have to look far to find a wealth of legal action being taken against AI companies, such as the parents who sued OpenAI, claiming ChatGPT contributed to their son's suicide. Yet for all the concern about deceptive AI use in the courtroom, there are also proposed benefits. The NCSC draws a clear distinction between acknowledged and unacknowledged AI in the courtroom. Acknowledged AI is openly disclosed, its purpose made obvious, and is used to enhance court proceedings. Unacknowledged AI is material someone attempts to slip past the court without revealing that AI produced it.

There are several uses for acknowledged AI in the courts. Videos and audio files can be enhanced with AI, a practice some lawyers already employ as needed. AI can assist with legal research and with finding information in lengthy documents, and it can even help identify people in surveillance footage.

However, these uses aren't without risk. The NCSC emphasizes the need to understand and manage the risks of relying on AI for any task in order to ensure fairness and accuracy. AI analysis of surveillance video is also far from perfect: there have already been wrongful arrests tied to its use. Clearly, a lot of work remains to be done in managing AI's place in the legal system.
