AI-Generated Errors Delay Australian Murder Trial, Lawyer Apologizes
A senior lawyer in Australia has publicly apologized to a judge after filing legal submissions in a murder case that contained fabricated quotes and citations to non-existent case judgments, all generated by artificial intelligence. The blunder in the Supreme Court of the state of Victoria is the latest in a growing list of AI-related mishaps affecting justice systems around the world.
Defense lawyer Rishi Nathwani, who holds the prestigious legal title of King’s Counsel, took “full responsibility” for the erroneous information submitted in the case of a teenager charged with murder. Court documents confirm that Nathwani apologized to Justice James Elliott on Wednesday, telling the court, “We are deeply sorry and embarrassed for what occurred.” The AI-generated errors caused a 24-hour delay in resolving a case that Justice Elliott had hoped to conclude sooner. On Thursday, Elliott ultimately ruled that Nathwani’s client, who cannot be identified because he is a minor, was not guilty of murder on grounds of mental impairment.
Justice Elliott expressed clear dissatisfaction with the situation. “At the risk of understatement, the manner in which these events have unfolded is unsatisfactory,” he told the lawyers, emphasizing that “the ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice.” The fabricated submissions included fictitious quotes purportedly from a speech to the state legislature and citations to non-existent cases attributed to the Supreme Court itself.
The errors came to light when Justice Elliott’s associates, unable to locate the cited cases, requested copies from the defense team. The lawyers subsequently admitted that the citations “do not exist” and that the submission contained “fictitious quotes.” They explained that while they had verified the initial citations, they wrongly assumed the remaining AI-generated information would also be accurate. Notably, the submissions were also sent to prosecutor Daniel Porceddu, who did not independently verify their accuracy. Justice Elliott highlighted that the Supreme Court had issued guidelines last year on the use of AI by lawyers, explicitly stating, “It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified.” The specific generative AI system used by the lawyers was not identified in the court documents.
This Australian incident is not isolated. In 2023, a federal judge in the United States imposed $5,000 fines on two lawyers and their law firm after they submitted fictitious legal research, which they attributed to ChatGPT, in an aviation injury claim. Judge P. Kevin Castel, while acknowledging their apologies and corrective actions, deemed their initial conduct to be in bad faith.

Later that same year, more invented court rulings, again generated by AI, appeared in legal papers filed by lawyers representing Michael Cohen, former personal lawyer to U.S. President Donald Trump. Cohen accepted blame, stating he was unaware that the “Google tool” he was using for legal research was capable of producing such “AI hallucinations.”

Across the Atlantic, British High Court Justice Victoria Sharp issued a stark warning in June, indicating that presenting false material as genuine could lead to charges of contempt of court or, in the most severe instances, perverting the course of justice, a crime carrying a maximum sentence of life in prison. These cases collectively underscore the critical challenges and potential pitfalls as legal systems grapple with the integration of rapidly evolving artificial intelligence technologies.