United States District Judge Brantley Starr of the Northern District of Texas has ordered lawyers appearing before him to attest that no portion of their legal submissions was composed by artificial intelligence (AI), or that any AI-drafted language was reviewed by a human.
In an interview, Judge Starr said his goal is to warn attorneys of the hazards of relying on AI-generated material without personally verifying its accuracy, and he cautioned that attorneys who fail to comply could face sanctions.
According to the judge’s order, attorneys must file a certificate on the docket confirming either that no portion of their filing was drafted using generative artificial intelligence, or that any AI-generated language was checked for accuracy by a human using reliable sources such as print reporters or traditional legal databases.
He said that while AI systems have proven effective for a variety of legal tasks, such as form divorces, discovery requests, and error detection, they are not suitable for legal briefing. In their current state, they are prone to hallucinations and bias, may fabricate information, including quotes and citations, and lack the reliability and impartiality required in the legal profession.
This requirement follows a recent incident in which another federal judge, in Manhattan, threatened to sanction attorney Steven Schwartz for citing fictitious cases generated by ChatGPT, an AI tool. In a sworn statement, Schwartz expressed remorse for relying on the tool and said he had been unaware that it could produce false information.
The sources for this piece include an article in TechCrunch.