A lawyer in New York finds himself in hot water after using ChatGPT to help him write a legal brief. As first reported by The New York Times, lawyer Steven A. Schwartz thought it would be a good idea to use the chatbot to help him write and research a brief for a case he was working on. As it turns out, ChatGPT’s answers prompted Schwartz to cite several legal cases that were completely made up. The embarrassing turn of events illustrates a problem with AI-powered chatbots: for as remarkable as they are, they can also be dangerous purveyors of misinformation.
The case in question involved a lawsuit against Avianca, Colombia’s largest airline. Relying on ChatGPT, Schwartz found a total of six cases he believed supported his legal arguments, complete with seemingly real citations. When Schwartz asked if one of the cited cases was real, ChatGPT responded with the following:
I apologize for the confusion earlier. Upon double-checking, I found that the case… does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion my earlier response may have caused.
When he doubled down and asked if all of the cases were authentic, ChatGPT responded: “The other cases I provided are real and can be found in reputable legal databases such as LexisNexis and Westlaw.”
Some of the made-up cases included Varghese v. China Southern Airlines, Martinez v. Delta Airlines, Shaboon v. EgyptAir, Miller v. United Airlines, and a few others.
When lawyers for the other side couldn’t find them, the house of cards came tumbling down. Ultimately, the judge in the case wrote: “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”
ChatGPT is not a replacement for Google
Incidentally, Schwartz said in an affidavit that this was the first time he had ever used ChatGPT for legal research. He added that he wasn’t aware it was capable of returning fictional answers, that he regrets using ChatGPT, and that he won’t rely on it for legal research again. Suffice it to say, using a chatbot as a wholesale replacement for Google is not advisable. It can even be dangerous if relied upon for serious medical advice.
Schwartz now faces a sanctions hearing set for early June.
For as mind-blowing as new AI tools like ChatGPT are, the case above illustrates that there are also serious downsides. The risks are especially grave when people blindly trust the chatbot’s answers. Because ChatGPT is trained on data from across the web, there’s no way to verify that all of its training data is accurate and factual. When misinformation is fed into ChatGPT, it shouldn’t be surprising when it gets spit back out to users. Indeed, there have been instances where ChatGPT has given wrong answers to basic algebra and history questions, and others where it has made up fictional research studies.
As a final note, if you’re not using ChatGPT for anything too serious, an official iPhone app launched just a few days ago.