ChatGPT is smart enough to figure out who the oldest and youngest governors of South Dakota are and write standard resumes of their careers. It knows how to do legal research and understands what kinds of cases would be relevant in a brief. It knows how to write something that reads more or less like a court decision, and to include within that decision citations to cases that on their face seem to support the brief's argument. But instead of carrying out these functions with greater or lesser skill, as one would expect, the program makes stuff up: stuff that satisfies the instructions that ChatGPT has been given, or would, anyway, if it were not fictitious.
Presumably the people who developed ChatGPT didn't program it to lie. So why does it do so? You might imagine that, in the case of the legal brief, ChatGPT couldn't find real cases that supported the lawyers' position, and therefore resorted to creating fake cases out of desperation. That would be bizarre enough. But in the case of the South Dakota governors, there was no difficulty in figuring out who the oldest and youngest governors were. ChatGPT could easily have plugged in a mini-biography of Richard Kneip. But instead, it invented an entirely fictitious person: Crawford H. "Chet" Taylor.