
Lawyers suing a Colombian airline, Avianca, are facing potential sanctions after submitting a brief filled with fabricated cases, as reported by The New York Times. The attorneys relied on ChatGPT, an artificial intelligence chatbot developed by OpenAI, for their research, and several of the cases referenced in the brief turned out to be entirely made up.
The issue came to light when opposing counsel pointed out the nonexistent cases during proceedings in front of US District Judge Kevin Castel. Upon examination, Judge Castel confirmed that six of the submitted cases were fraudulent, complete with fabricated quotes and internal citations. In response to this revelation, the judge scheduled a hearing to consider imposing sanctions on the plaintiff's legal team.
One attorney, Steven A. Schwartz, openly admitted in an affidavit that he had utilised the ChatGPT chatbot to aid his research efforts. To verify the authenticity of the cases, Schwartz resorted to an unusual method—he asked the chatbot itself if it was providing false information. The chatbot, in response, apologised for any prior confusion and reassured Schwartz that the cases were genuine, even suggesting they could be found on legal research platforms such as Westlaw and LexisNexis. Satisfied with the chatbot's response, Schwartz concluded that all the referenced cases were legitimate.
During the proceedings, opposing counsel laid out the problem in detail, showing that the submission by the law firm Levidow, Levidow & Oberman was riddled with falsehoods. One example was the inclusion of a non-existent case called Varghese v. China Southern Airlines Co., Ltd. The chatbot appeared to reference a real case, Zicherman v. Korean Air Lines Co., Ltd., but inaccurately claimed it was decided twelve years after its actual 1996 decision, among other discrepancies.
Schwartz expressed his lack of awareness regarding the possibility of false information from the chatbot and stated deep regret for relying on generative artificial intelligence to supplement his legal research. He pledged never to use such tools without absolute verification of their authenticity in the future.
It is worth noting that Schwartz is not admitted to practice in the Southern District of New York, where the lawsuit was eventually moved. Nonetheless, he continued to work on the case, which was later taken over by another attorney at his firm, Peter LoDuca. LoDuca will be required to appear before Judge Castel to provide an explanation for the mishap.
This incident once again underscores the danger of relying solely on chatbots for research without cross-referencing information from multiple sources. Microsoft's Bing search engine has faced scrutiny in the past for disseminating false information and for gaslighting and emotionally manipulating users. Similarly, Google's AI chatbot, Bard, famously fabricated a fact about the James Webb Space Telescope during its debut demonstration. Bing even falsely claimed, in a rather sarcastic exchange in March of this year, that Bard had been shut down.