A federal judge in the US has told lawyers that he will not permit any content generated by AI to be used in his courtroom. Under a new requirement from Judge Brantley Starr in Texas, any lawyer who presents a case in his court must confirm that no part of their filing was created by AI, or, if it was, that it was reviewed for accuracy by a human.
Recently, a lawyer used AI to help with legal research for a court filing, and the tool produced false information, including fabricated case citations. The incident may serve as a warning to other lawyers, and Judge Starr is taking steps to prevent it from happening again in his courtroom.
Like other judges in Texas's Northern District, Starr has the authority to set specific rules for his courtroom. He recently added a "Mandatory Certification Regarding Generative Artificial Intelligence" to his requirements, reports Eugene Volokh, an American legal scholar.
Lawyers appearing before the court must sign a certification attesting that no quotations, citations, paraphrased assertions, or legal analysis in their filings were drafted by generative AI, or that any such AI-drafted language was verified by a human.
“All attorneys appearing before the court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being,” reads the order.
The judge acknowledged that AI platforms can be useful in the legal field, for example in drafting divorce forms and discovery requests, spotting document errors, and anticipating questions during oral arguments. However, he deemed them unsuitable for legal briefing because of their tendency to produce hallucinations and biased output, noting that these platforms can even fabricate quotes and citations.
Judge Starr added that if a party believes a particular platform is accurate and reliable enough to use for legal briefing, it can request permission from the court and explain why.