ChatGPT Invented Cases and a Lawyer Cited Them in Court

Anne Spollen

For thirty years, Steven Schwartz, an attorney with Levidow, Levidow & Oberman, has practiced law without incident. But his career is now in jeopardy because he submitted a legal filing built on cases he pulled from ChatGPT, presenting them to the court to supplement his argument. The problem? None of the cases existed. ChatGPT conjured these prior court decisions out of thin air.

In the case in question, Mata v. Avianca, a customer, Roberto Mata, was suing Avianca because a metal serving cart injured his knee during a flight. Avianca asked the judge to dismiss the case. Mata's lawyers objected, submitting a brief that cited similar past court decisions. This is all routine court procedure - until ChatGPT enters.

Schwartz, Mata's lawyer, filed the case in state court and then provided legal research once it was transferred to Manhattan federal court. He used OpenAI's popular chatbot to search for parallel cases, and the bot responded with a list of apparently similar decisions: Varghese v. China Southern Airlines, Shaboon v. Egyptair, Petersen v. Iran Air, Martinez v. Delta Airlines, Estate of Durden v. KLM Royal Dutch Airlines, and Miller v. United Airlines.

None of those cases exist.

Many people do not realize that AI chatbots like ChatGPT are built on language models trained to follow instructions and generate plausible-sounding responses. Nothing in that design guarantees the response, or any information in it, is factual.

“The court is presented with an unprecedented circumstance,” wrote Manhattan federal Judge P. Kevin Castel in a May 4 document, first reported by The New York Times on Saturday. “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” Castel wrote.

Schwartz has filed a sworn affidavit admitting he used ChatGPT to find the cases, and stating he had “relied on the legal opinions provided to him by a source that has revealed itself to be unreliable.”

The affidavit also stated that Schwartz “was unaware of the possibility that its [ChatGPT’s] content could be false.”

Schwartz is scheduled to appear at a June 8 hearing to face possible sanctions over the false filing.
