When Manhattan attorneys Steven Schwartz and Peter LoDuca brought their case before District Judge P. Kevin Castel, it looked like a slam dunk. Their client was seeking damages for an injury sustained during a flight on the Colombian airline Avianca in 2019. Schwartz had submitted a comprehensive filing to the court, citing multiple past cases as precedents in which plaintiffs had prevailed against other airlines for similar injuries. There were several problems with the filing, however. Some of the cases were entirely fictitious, and the airlines named in them don’t exist. It turns out that Schwartz hadn’t been able to find many good precedents for his case, so he had asked ChatGPT to do the work for him. The judge was quite displeased. (Associated Press)
Two apologetic lawyers responding to an angry judge in Manhattan federal court blamed ChatGPT Thursday for tricking them into including fictitious legal research in a court filing.
Attorneys Steven A. Schwartz and Peter LoDuca are facing possible punishment over a filing in a lawsuit against an airline that included references to past court cases that Schwartz thought were real, but were actually invented by the artificial intelligence-powered chatbot.
Schwartz explained that he used the groundbreaking program as he hunted for legal precedents supporting a client’s case against the Colombian airline Avianca for an injury incurred on a 2019 flight.
Of all the jobs that Artificial Intelligence is expected to wipe out, perhaps we can scratch “lawyer” off the list. The judge told Schwartz that he had submitted “legal gibberish.” The attorney responded by saying that he “did not comprehend that ChatGPT could fabricate cases.”
He probably should have spent more time reading Hot Air. I reported the results of an experiment I conducted with ChatGPT back in February. I asked the chatbot for a list of books on a rather esoteric subject. (Children who appear to exhibit memories of past lives.) It generated a list of five excellent examples, along with the authors’ names and a brief summary of each. Fortunately, I went and double-checked its work. The first two books on the list didn’t exist. The author of another wasn’t listed as ever having written a book about anything.
But as I said, that was a very obscure subject, so there probably wasn’t much in the library for the bot to grab onto. Making up legal cases and the names of airlines seems like a significant “swing and a miss” for ChatGPT. But it’s not really “fabricating” the information, because the act of fabrication implies intent. The bot doesn’t actually look anything up in a library; it simply predicts the words that are statistically likely to follow the ones in your question, based on patterns in the text it was trained on. It may have picked up the fictional lawsuits from some random user’s Reddit post.
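As a rough illustration of that point (a toy sketch in Python, not how OpenAI actually builds its models), here is a tiny “language model” that only learns which word tends to follow which in a handful of invented, case-style citations. Ask it to continue a prompt and it can spit out a citation that looks right but never appears anywhere in its training text:

```python
import random
from collections import defaultdict

# Toy training text: a few case-style citations, all invented for this example.
corpus = (
    "Smith v. Acme Airlines 2012 "
    "Jones v. Acme Airlines 2015 "
    "Smith v. Global Airways 2018 "
    "Lee v. Global Airways 2020"
).split()

# Count which word follows which -- the only "knowledge" this model has.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=4):
    """Extend the prompt by repeatedly picking a word seen after the current one."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# May print, say, "Jones v. Global Airways 2020" -- plausible-looking,
# statistically consistent with the training text, but never actually in it.
print(generate("Jones"))
```

Scale that same idea up by a few billion parameters and you get prose that reads like a real precedent, whether or not one exists.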
Still, ChatGPT isn’t failing at everything it’s asked to do. It recently helped design a robot to harvest food. (The Debrief)
Researchers at TU Delft and the Swiss technical university EPFL have successfully collaborated with OpenAI’s ChatGPT to design a tomato-harvesting robot, marking a new era of human-AI collaboration in the field of robotics.
Cosimo Della Santina, Francesco Stella, and Josie Hughes began their project by asking ChatGPT about the greatest future challenges for humanity. The irony of asking an AI about saving the human race aside, the purpose was to give the system full control over what it thought it could do to aid in the prolonging of the species. After discussing with ChatGPT, it was decided to focus on food supply and create, of all things, a tomato-harvesting robot.
The bot provided quite a bit of valuable input in the development process. It identified the best motor to drive the robotic arm and the ideal material for the robot’s gripping “hand” to avoid crushing or bruising the tomatoes. (There’s a picture of the robot at the link if you’re interested.)
So perhaps there will be some useful work for the AI to do going forward. And it might not wind up being as scary as I usually consider it. Just don’t ask it to get you out of a speeding ticket. You’ll probably wind up in jail instead.