In a recent court case, a lawyer relied on ChatGPT for legal research, resulting in the submission of fabricated case citations to the court. The incident highlights the risks of using artificial intelligence in the legal sector.
The case revolved around a man suing an airline for personal injury. The plaintiff's legal team filed a brief citing several earlier court decisions as precedents to support its arguments. However, the airline's lawyers discovered that some of the cited cases did not exist and immediately notified the presiding judge.
Judge Kevin Castel expressed his astonishment at the situation, describing it as an "unprecedented circumstance". By court order, the judge demanded an explanation from the plaintiff's legal team.

Steven Schwartz, a colleague of the lead attorney, admitted that he had used ChatGPT to search for similar legal precedents. In a written statement, Schwartz expressed his deep regret, saying that he "had never used artificial intelligence for legal research before and was unaware that its content could be false".
ChatGPT had confirmed the authenticity of one of the cases, indicating that it could be found in legal databases such as LexisNexis and Westlaw. However, subsequent investigation revealed that this case did not exist, casting further doubt on the other cases ChatGPT had provided.
In light of this incident, both attorneys involved in the case, Peter LoDuca and Steven Schwartz of the law firm Levidow, Levidow & Oberman, have been summoned to an upcoming disciplinary hearing on June 8 to explain their actions.
The incident has sparked considerable debate within the legal community about the appropriate use of artificial intelligence tools in legal research and the need for comprehensive guidelines to prevent similar occurrences.
Source: NYT