Summary
Since the release of ChatGPT last year, there has been a constant stream of headlines about the chatbot. Many people have turned to it for help with their jobs. However, one law firm recently discovered that it can be dangerous to fully trust AI to do your work for you.
Listen
Links
Transcript
Since the release of ChatGPT last November, the chatbot has produced a constant stream of headlines. Now people in various professions use it to help them with their jobs. However, one lawyer recently learned you should not trust ChatGPT to do your job for you.
For Personal Tech Media, this is Two Minute Tech. I’m Jim Herman.
A motion recently submitted in the US District Court in New York created a unique circumstance for the judge. In responding to a motion to dismiss the case, the attorney for the plaintiff referenced multiple cases that did not exist. Furthermore, the affidavit he submitted not only referenced these cases but also provided a series of quotes from at least one of the purported decisions.
When asked to explain why the motion included these references, the attorneys involved pointed the finger at ChatGPT. The attorney who conducted the research stated that he had consulted the AI chatbot to assist with his work. He said he had never used ChatGPT before and was unaware that it could be unreliable.
The judge, however, was not swayed by the attorney’s claims of ignorance and ordered both attorneys and their law firm to appear in court over the fraudulent citations.
Artificial intelligence can be beneficial in many jobs, especially those that involve a substantial amount of research. I’ve used AI to help with research for this podcast. However, you should not trust it to do your job for you. Whether you’re writing code, researching a podcast, or filing legal motions, make sure you thoroughly vet anything AI generates before you use it.