Michael Cohen, Donald Trump's former lawyer, has found himself at the center of a legal embarrassment after artificial intelligence was used to generate case citations on his behalf. The incident, which unfolded in federal court in Manhattan, is a striking example of AI's growing, and sometimes troublesome, role in professional fields, including law.
Cohen, who pleaded guilty to charges including tax evasion, campaign finance violations, and lying to Congress, remained under court supervision after completing his prison sentence. Seeking an early end to that supervision, his lawyer, David M. Schwartz, filed a motion with the court. Unbeknownst to Cohen, the motion cited legal cases that did not exist.
The fabricated citations came from Google Bard, an AI chatbot similar to ChatGPT, which Cohen had used for legal research. Mistaking it for a sophisticated search engine, he did not realize that Bard, like the chatbot built into Microsoft's Bing, can produce plausible but false content, a failure known in AI parlance as "hallucination." As a result, the fictitious cases made their way into the motion.
The false citations came to light when Judge Jesse Furman, who is overseeing the case, asked where the dubious references had come from. Cohen, no longer a practicing lawyer and without access to official legal databases, had turned to freely available online tools for his research. His use of Google Bard, then a newly launched service in the fast-growing AI market, inadvertently produced the blunder.
Schwartz, a close associate as well as Cohen's attorney, is believed to have filed the citations without verifying them. Even so, Cohen has asked the court to show Schwartz leniency, maintaining that the lapse was an honest mistake rather than an attempt to deceive.
Complicating matters, Schwartz had assumed that another of Cohen's lawyers, E. Danya Perry, had reviewed the draft submission, a claim Perry has categorically denied. Upon discovering the fraudulent citations, Perry promptly reported them to the court and to federal prosecutors.
The episode is not an isolated one. Earlier in the same Manhattan federal courthouse, lawyers were sanctioned for submitting court papers containing fictitious cases generated by ChatGPT.
The story carries added weight given Trump's ongoing legal troubles. In a separate case in New York state court, Trump has pleaded not guilty to 34 felony counts of falsifying records at his private business, charges tied to hush-money payments. He has also pleaded not guilty in three other criminal cases, denouncing the prosecutions as attempts to derail his potential 2024 presidential run.
Cohen's inadvertent reliance on AI-generated citations highlights the risks that new technologies pose in professional settings, while adding another wrinkle to the legal controversies surrounding Trump and his circle. It stands as a cautionary tale about the need for diligence and verification in a digitally driven world.