The High Court of England and Wales says lawyers need to take stronger steps to prevent the misuse of artificial intelligence in their work.
In a ruling tying together two recent cases, Judge Victoria Sharp wrote that generative AI tools such as ChatGPT are not capable of conducting reliable legal research.
“Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect,” the judge wrote. “The responses may make confident assertions that are simply untrue.”
That does not mean lawyers cannot use AI in their research, but she said they have a professional duty “to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work.”
Judge Sharp pointed to the growing number of cases in which lawyers (including, in the United States, lawyers representing major AI platforms) have cited what appear to be AI-generated falsehoods.
In one of the cases in question, a lawyer representing a man seeking damages against two banks submitted a filing that cited cases which either did not exist or had no relevance to the subject matter of the application, Judge Sharp said.
In the other, a lawyer representing a man who had been evicted from his London home prepared a court filing that cited five cases that did not appear to exist.
“Lawyers who do not comply with their professional obligations in this respect risk severe sanction,” she added.
Both lawyers were either referred, or referred themselves, to professional regulators. Judge Sharp noted that when lawyers fail to meet their duties to the court, the court’s powers range from “public admonition” to, or even, “referral to the police.”