In a recent year-end report, Chief Justice John Roberts of the U.S. Supreme Court highlighted the complex implications of artificial intelligence for the legal field. He urged a careful approach, emphasizing the need for both caution and humility as this evolving technology continues to reshape the work of judges and lawyers.
Roberts presented a nuanced perspective in his comprehensive 13-page report, acknowledging AI’s potential to improve access to justice for those with limited resources, revolutionize legal research, expedite case resolutions, and reduce costs. However, he also flagged concerns regarding privacy issues and the current limitations of technology in replicating human judgment and discretion.
“While I foresee the continued presence of human judges, I am equally certain that AI will significantly impact judicial processes, especially at the trial level,” wrote Roberts.
This commentary marks Roberts’ most extensive discussion to date of AI’s influence on the legal domain. Meanwhile, lower courts are grappling with how best to integrate the new technology, which is capable of passing the bar exam yet prone to producing fabricated content, referred to as “hallucinations.”
Roberts stressed the need for caution and humility in the use of AI. He pointed to incidents in which AI-generated hallucinations led lawyers to cite fictitious cases in court documents, a practice he warned against. Without naming specifics, Roberts noted that the phenomenon had drawn significant attention during the year.
In a recent development, Michael Cohen, former attorney for Donald Trump, admitted to unintentionally including fake case references, generated by an AI program, in official court filings. Similar instances of lawyers inadvertently incorporating AI-hallucinated cases into legal briefs have been documented.
Adding to the discourse, the 5th U.S. Circuit Court of Appeals in New Orleans recently introduced a proposed rule, possibly the first among the 13 U.S. appeals courts, aimed at regulating attorneys’ use of generative AI tools such as OpenAI’s ChatGPT. The rule would require lawyers to certify either that they did not rely on AI programs to draft their briefs or that a human reviewed any AI-generated text for accuracy before filing.