New York lawyer in trouble after ChatGPT cites fictitious court cases


Amid the growing hype surrounding artificial intelligence (AI) in recent months, concerns about job displacement have become a common topic of discussion.

One New York lawyer, Steven Schwartz of the firm Levidow, Levidow & Oberman, may see that fear become a reality sooner than expected.

However, the reason behind it might not be what you would typically assume.

According to The New York Times, Schwartz recently turned to OpenAI's chatbot, ChatGPT, to help him write a legal brief, with disastrous results.

Schwartz's firm has been representing Roberto Mata, who claims to have suffered injuries during a flight operated by Colombian airline Avianca to John F. Kennedy International Airport in New York City. As Avianca requested a federal judge to dismiss the case, Mata's lawyers filed a 10-page brief outlining their arguments in favour of proceeding with the lawsuit. The document referenced more than half a dozen court decisions, including cases such as "Varghese v. China Southern Airlines," "Martinez v. Delta Airlines," and "Miller v. United Airlines".

However, an unexpected twist emerged when it was discovered that none of the cited court decisions actually existed. ChatGPT, the AI chatbot, had fabricated all of them.

In an affidavit filed on Thursday, Schwartz revealed that he had used ChatGPT to "supplement" his research for the case.

He said he had been unaware that the chatbot's content could be false.

In fact, he even provided screenshots as evidence of his interaction with ChatGPT, demonstrating that he had specifically asked the programme about the authenticity of the cited cases. ChatGPT responded affirmatively, claiming that the decisions could be found in reputable legal databases like Westlaw and LexisNexis.

Schwartz expressed deep regret for relying on ChatGPT and vowed never to use it again without absolutely verifying its authenticity. However, the consequences of his actions remain uncertain. The judge presiding over the case has scheduled a hearing on June 8 to discuss potential sanctions in light of the "unprecedented circumstance" created by Schwartz's reliance on the chatbot.
