Lawyers representing Avianca, a Colombian airline, in a lawsuit have been found to have submitted a brief full of made-up legal cases generated by a chatbot. After opposing counsel pointed out the nonexistent cases, US District Judge Kevin Castel confirmed that six of the submitted cases were bogus judicial decisions with bogus quotes and bogus internal citations. Lawyer Steven A. Schwartz admitted in an affidavit that he had used OpenAI’s chatbot for his research; he had even asked the chatbot to verify that the cases were real, and it maintained that they were.

The Issue

The opposing counsel laid out the issue for the court in detail, recounting how the lawyers’ submission was riddled with fabrications. In one example, the nonexistent case Varghese v. China Southern Airlines Co., Ltd. appeared to reference another, real case, but got the date and other details wrong, claiming it was decided 12 years after the actual 1996 decision.

Schwartz says he was unaware that the content could be false, regrets having used generative artificial intelligence to supplement his legal research, and will never do so again without absolute verification of its authenticity. Schwartz is not admitted to practice in the Southern District of New York; he originally filed the lawsuit before it was moved to that court and says he continued to work on it. Another attorney at the same firm, Peter LoDuca, became the attorney of record on the case, and he will have to appear before the judge to explain what happened.

The Implications

This incident highlights the absurdity of using chatbots for research without double (or triple) checking their output against other sources. Microsoft’s Bing debut is now infamously associated with bald-faced lies, gaslighting, and emotional manipulation. Google’s AI chatbot, Bard, made up a fact about the James Webb Space Telescope in its first demo. Bing even lied about Bard being shut down in a hilariously catty example from this past March.

It is evident that being great at mimicking the patterns of written language to maintain an air of unwavering confidence isn’t worth much if you can’t even figure out how many times the letter ‘e’ shows up in ketchup. The consequences of using AI chatbots for research can be dire: doing so can put false information before the court and result in sanctions against the lawyers for misconduct.

AI chatbots should be used for legal research only with caution, and their output should always be verified against reliable sources. The Avianca incident demonstrates both the importance of confirming the authenticity of legal research and the consequences of relying on an unreliable source. The technology is still in its early stages, and restraint is essential until its accuracy and reliability can be established.
