In a recent court case, a lawyer relied on ChatGPT for legal research, resulting in the submission of false information. The incident sheds light on the potential risks associated with artificial intelligence in the legal sector.
The case involved a man suing an airline over a personal injury. The plaintiff's legal team filed a brief citing several prior court decisions as precedent to support its arguments. However, the airline's lawyers discovered that some of the cited cases did not exist and immediately notified the presiding judge.
Judge Kevin Castel expressed his surprise at the situation, calling it an "unprecedented circumstance". In his order, the judge sought an explanation from the plaintiff's legal team.
Steven Schwartz, a colleague of the lead attorney, admitted to using ChatGPT to search for similar legal precedents. In a written statement, Schwartz expressed deep regret, saying he had "never previously used artificial intelligence for legal research and was not aware that its content could be false."
When Schwartz asked ChatGPT to verify one of the cases, it confirmed the case's authenticity and indicated that it could be found in legal databases such as LexisNexis and Westlaw. Subsequent investigation, however, revealed that the case did not exist, casting doubt on the other cases ChatGPT had supplied.
In light of this incident, both attorneys involved in the case, Peter LoDuca and Steven Schwartz of the law firm Levidow, Levidow & Oberman, have been summoned to an upcoming disciplinary hearing on June 8 to explain their actions.
The incident has sparked considerable debate within the legal community about the appropriate use of artificial intelligence tools in legal research and the need for clear guidelines to prevent similar incidents.
Source: NYT