US federal judges admit to using AI on ‘error-ridden’ court orders


Two US federal judges have admitted that staff in their chambers turned to artificial intelligence to help draft court rulings and that the experiment went badly wrong.

In a pair of candid letters made public on Thursday by Senator Chuck Grassley, the Chairman of the Senate Judiciary Committee, Judges Henry T. Wingate of Mississippi and Julien Xavier Neals of New Jersey said that AI tools were used in the preparation of court orders that were later found to be riddled with factual mistakes and legal missteps. Both decisions have since been retracted.

Grassley, who had demanded explanations, said, “Each federal judge, and the judiciary as an institution, has an obligation to ensure the use of generative AI does not violate litigants’ rights or prevent fair treatment under the law.” 

Staff missteps expose limits of AI in the courtroom

In his letter, Judge Neals of the District of New Jersey said that a draft ruling in a securities lawsuit had been released “in error — human error” after a law school intern used OpenAI’s ChatGPT for research without authorization or disclosure. The decision was promptly withdrawn once the mistake was discovered.

To prevent a recurrence, Neals said his chambers had since created a written AI policy and enhanced its review process.

Judge Wingate, who serves in the Southern District of Mississippi, said a law clerk used the AI tool Perplexity “as a foundational drafting assistant to synthesize publicly available information on the docket.” 

That draft order, issued in a civil rights case, was later replaced after he identified errors. Wingate stated that the event “was a lapse in human oversight,” adding that he has since tightened review procedures within his chambers.

Criticism of AI usage in legal work

The episode adds to a growing list of controversies involving AI-generated legal material. Lawyers in several US jurisdictions have faced sanctions in recent years for submitting filings drafted by chatbots that included fabricated case citations and misapplied precedents. 

Earlier this month, the New York state court system issued a new policy restricting judges and staff from entering confidential, privileged, or non-public case information into public generative AI tools.

While the legal profession has been quick to explore AI’s potential to improve efficiency, the incidents have exposed the technology’s limitations, particularly its tendency to hallucinate, or generate plausible but false information. For courts, where the accuracy and integrity of rulings are paramount, such lapses risk undermining public confidence in the justice system.

Grassley, who commended Wingate and Neals for owning up to the mistakes, also urged the judiciary to put in place stronger AI guidelines. 

The Administrative Office of the US Courts has not released comprehensive guidance on AI use, though several circuit courts are reportedly exploring frameworks for limited, supervised deployment. Legal scholars, meanwhile, have reportedly proposed a disclosure rule that would require judges to publicly note any use of AI in their opinions or orders, much like citation requirements for external research.

The incidents come as federal agencies and professional bodies continue to grapple with questions about AI accountability.

