
Former ‘fixer’ of Trump admits using Google Bard for ‘fake’ legal cases – Times of India



Michael Cohen, former lawyer and “fixer” for Donald Trump, has admitted to using Google’s AI language model Bard to fabricate citations for non-existent court cases in legal documents. In a rather lame defence, Cohen said that he thought Bard was a “super-charged search engine” and not an AI-powered chatbot. Cohen’s admission came to light during a recent court hearing regarding his ongoing legal troubles. He confessed to using Bard to generate references to non-existent legal rulings in an attempt to bolster his arguments.
According to a report by The Verge, Cohen also said that he wasn’t aware of the latest tech trends and the risks associated with them. “As a non-lawyer I have not kept up with emerging trends (and related risks) in legal technology and did not know that Google Bard was a generative text service that, like Chat-GPT, could show citations and descriptions that looked real but actually were not,” Cohen said. “Instead, I understood it to be a super-charged search engine and had repeatedly used it in other contexts to (successfully) find accurate information online.”
Cohen also blamed his lawyer, saying he never realised that the legal pro “would drop the cases into his submission wholesale without even confirming that they existed.”
The pitfalls of generative AI
Bard, while adept at generating human-like text, cannot distinguish real legal precedents from fictional ones. As a result, it produced citations for cases that never existed, further muddying the already complex legal waters surrounding Cohen.
The incident also underscores the importance of individual user accountability when using AI tools. Users must be aware of the limitations of these technologies and exercise critical thinking when evaluating the information they provide. In Cohen’s case, a lack of due diligence and reliance on unverified AI-generated information resulted in a potentially damaging legal blunder.
