
Artificial intelligence is here, and it’s wreaking havoc in courtrooms throughout the US.
The latest AI law blunder comes from the Maryland appellate court, where a family lawyer representing a mother in a custody battle was caught filing court briefs cooked up with ChatGPT.
The Daily Record, which publishes summaries of Maryland court opinions, reported that the mother’s lawyer submitted a complaint for divorce riddled with AI-hallucinated legal citations, which made it into the court record.
As in other ChatGPT legal mishaps, many of the citations referenced case law that simply did not exist. The filing also contained real legal citations that contradicted the arguments made in the brief.
In his defense, the attorney, Adam Hyman, said that he “was not involved directly in the research of the offending citations.” Instead, he blamed a law clerk who, he said, used ChatGPT to find the citations and to edit the brief before sending it on.
In a later filing, Hyman wrote that the clerk wasn’t aware of the risk of AI hallucinations, the phenomenon in which chatbots make up false information to satisfy users’ queries. When the clerk forwarded a draft of the erroneous brief, Hyman said, he didn’t vet the cases it referenced, and he added that he “does very little appellate work.”
Of course, that’s a terrible excuse; as a lawyer, it’s his job to review what clerks write for accuracy — and to work with them to understand proper workflows and standards for legal writing. In an opinion filed after the fact, Maryland appellate Judge Kathryn Grill Graeff wrote that “it is unquestionably improper for an attorney to submit a brief with fake cases generated by AI.”
“[C]ounsel admitted that he did not read the cases cited. Instead, he relied on his law clerk, a non-lawyer, who also clearly did not read the cases, which were fictitious,” the judge wrote scathingly. “In our view, this does not satisfy the requirement of competent representation. A competent attorney reads the legal authority cited in court pleadings to make sure that they stand for the proposition for which they are cited.”
Grill Graeff noted that a blunder like this wouldn’t usually call for an opinion — which sets legal precedent — but that she wanted to “address a problem that is recurring in courts around the country”: AI in the courtroom.
As part of the court’s response, Hyman was required to admit responsibility for the improper case citations, as he was the only licensed attorney on the case. Both the lawyer and the clerk were ordered to complete “legal education courses on the ethical use of AI” and to implement office-wide protocols for citation verification. Hyman was also referred to the Attorney Grievance Commission for further discipline.
Grill Graeff noted that this was the first time Maryland’s appellate courts had to address the problem, though if recent trends are any indication, it certainly won’t be the last.
More on ChatGPT: Judge Gives Humiliating Punishment to Lawyers Caught Using AI in Court