AI Blunder in Australian Court: Lawyer Files Fake Submissions

Fast Company

A prominent Australian lawyer has issued an apology to a Supreme Court judge after submitting legal documents in a murder trial that contained fabricated quotes and non-existent case judgments, all generated by artificial intelligence. This incident in Victoria’s Supreme Court marks yet another instance of AI-induced errors disrupting justice systems worldwide.

Defense lawyer Rishi Nathwani, a King’s Counsel, accepted “full responsibility” for the inaccuracies in the filings for a teenager charged with murder. “We are deeply sorry and embarrassed for what occurred,” Nathwani told Justice James Elliott on behalf of the defense team. The AI-generated errors caused a 24-hour delay in proceedings that Justice Elliott had hoped to conclude sooner. The following day, Elliott ruled that Nathwani’s client, a minor whose identity remains protected, was not guilty of murder because of mental impairment.

Justice Elliott expressed his strong disapproval, stating, “The manner in which these events have unfolded is unsatisfactory.” He underscored the importance of reliable legal submissions, adding, “The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice.” The erroneous filings included quotes falsely attributed to a speech in the state legislature and case citations purportedly from the Supreme Court that did not exist.

The errors came to light when Justice Elliott’s associates, unable to locate the cited cases, asked the defense team for copies. The lawyers subsequently admitted that the citations “do not exist” and that the submission contained “fictitious quotes.” They explained that they had verified the initial citations but wrongly assumed the others would also be accurate. The flawed submissions had also been sent to prosecutor Daniel Porceddu, who had not checked their accuracy. The specific generative AI system the lawyers used was not identified in court documents.

This Australian blunder echoes similar challenges faced by legal systems globally. In the United States in 2023, a federal judge fined two lawyers and their firm $5,000 after their submission in an aviation injury claim was found to contain fictitious legal research, which the lawyers attributed to ChatGPT. Judge P. Kevin Castel, while finding they had acted in bad faith, credited their apologies and corrective actions and opted for less severe sanctions. Later that year, more AI-invented court rulings appeared in legal papers filed by lawyers for Michael Cohen, former personal lawyer to U.S. President Donald Trump. Cohen took responsibility, admitting he was unaware that the Google tool he was using for legal research could produce what are known as “AI hallucinations.”

The judiciary has been proactive in addressing these emerging issues. Justice Elliott noted that the Supreme Court of Victoria had issued guidelines last year on lawyers’ use of AI, which emphasize that “It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified.” In Britain, High Court Justice Victoria Sharp warned in June that presenting false material as genuine could lead to charges of contempt of court or, in the most severe instances, perverting the course of justice, a crime carrying a maximum sentence of life in prison. These incidents underscore a growing concern among courts and legal professionals worldwide: ensuring that the integration of artificial intelligence enhances, rather than undermines, the accuracy and integrity on which the justice system depends.