
Lawyer apologizes for fake citations, nonexistent judgments generated by AI in murder case

A senior Australian lawyer has apologized to a judge for filing submissions in a murder case that included fake quotes and nonexistent case judgments generated by artificial intelligence.

The blunder in the Supreme Court of Victoria is the latest in a string of mishaps caused by AI in justice systems around the world.

Defense lawyer Rishi Nathwani, who holds the prestigious title of King’s Counsel, took full responsibility for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by the Associated Press on Friday.

“We are deeply sorry and embarrassed for what occurred,” Nathwani told Justice James Elliott on Wednesday, on behalf of the defense team.

The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. On Thursday, Elliott ruled that Nathwani’s client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment.

“At the risk of understatement, the manner in which these events unfolded is not satisfactory,” Elliott told lawyers Thursday.

“The court’s ability to rely on the accuracy of counsel’s representations is essential to the proper administration of justice,” Elliott added.

The fake submissions included fabricated quotes from a speech to the state legislature and citations to nonexistent Supreme Court cases.

The AI-generated errors were discovered by Elliott’s associates, who could not find the cases cited and asked the defense lawyers to provide copies, the Australian Broadcasting Corporation previously reported.

The lawyers admitted that the citations were “nonexistent” and that the submission contained “false quotes,” court documents said.

The lawyers explained that they had checked the initial citations for accuracy and wrongly assumed the others would also be correct.

The submissions were also sent to prosecutor Daniel Porceddu, who did not check their accuracy.

The judge noted that the Supreme Court had issued guidelines last year on how lawyers should use AI.

“It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified,” Elliott said.

Court documents do not identify the artificial intelligence program used by the lawyers.

In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim.

Judge P. Kevin Castel said they had acted in bad faith, but he credited their apology and the remedial steps they took in explaining why harsher sanctions were not necessary to ensure that they, or others, would not again let artificial intelligence tools prompt them to present fictitious legal history in their arguments.

Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by attorneys for Michael Cohen, a former personal lawyer for US President Donald Trump. Cohen took the blame, saying he did not realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations.

British High Court Justice Victoria Sharp warned in June that presenting false material as if it were genuine could be considered contempt of court or, “in the most serious cases,” perverting the course of justice, which carries a maximum sentence of life in prison.

Artificial intelligence has made its way into US courts in other ways as well. In April 2025, a man named Jerome Dewald appeared before a New York court and submitted a video featuring an AI-generated avatar to present arguments on his behalf.

A month later, a man who was killed in a road rage incident in Arizona “spoke” during his killer’s sentencing hearing after his family used artificial intelligence to create a video of him reading a victim impact statement.
