Michael Cohen, Donald Trump's former lawyer, recently admitted to including fictitious, AI-generated court cases in a legal document submitted to a federal judge. According to a filing unsealed on Friday, Cohen mistakenly used Google's Bard, an AI chatbot, believing it to be an advanced search engine. US District Judge Jesse Furman discovered that the cited cases did not exist and questioned Cohen's attorney, David Schwartz, about how they came to be included. The episode raises concerns about the use of AI in legal research and its potential consequences for the legal system.
Unintentional Misleading
Cohen issued a written statement asserting that he had not intended to deceive the court and was unaware that the cases produced by Google Bard were fabricated. He admitted to using the AI chatbot for legal research and sharing the results with his attorney, Schwartz, but said he did not anticipate that his lawyer would incorporate the citations into the motion "without even confirming that they existed." Cohen also emphasized his lack of awareness of the risks AI poses in the legal field, saying he had regarded Google Bard as a superior search engine and had previously used it to obtain accurate information.
This is not the first time AI-generated citations have caused complications in legal proceedings. In June, two New York lawyers were fined and sanctioned for submitting bogus court cases generated by ChatGPT, another AI chatbot, in a legal brief. Even so, some legal professionals have begun using AI to draft legal arguments. Rapper Pras Michel's legal team, for instance, is currently seeking a new trial following his guilty verdict, with AI technology having played a role in his trial defense.
Potential Implications and Sanctions
The inclusion of AI-generated citations in legal documents underscores the risks of relying on emerging technologies for research. Fictitious court cases can undermine the integrity of the legal system and mislead judges and other legal professionals. As a result, Schwartz may face sanctions for including the fraudulent citations in the motion. The episode highlights the need for legal professionals to stay informed about developments in legal technology and to exercise caution when using AI-powered tools.
The Cohen episode raises important questions about the future of AI in legal research. While AI has the potential to improve legal professionals' efficiency and accuracy, it presents significant risks when its capabilities and limitations are not properly understood. As the technology advances, practitioners must remain vigilant and educated about what AI tools can and cannot do, and firmer ethical guidelines and regulations governing AI-assisted research may be needed to preserve the integrity of the legal system. Cohen's admission serves as a reminder that responsible use of AI in the legal field begins with a comprehensive understanding of the tools themselves.