Some law professor colleagues and I are writing about whether Large Language Model creators (e.g., OpenAI, the creator of ChatGPT-4) could be sued for libel. And some recent stories allege that ChatGPT-4 does yield false and defamatory statements; Ted Rall wrote an article so alleging yesterday in the Wall Street Journal, and another site published something last Sunday about this as well (though there the apparently false statement was about a dead person, so it's not technically libel). When I tried to ask the same questions those authors reported having asked, ChatGPT-4 gave different answers, but that is apparently normal for the software.
This morning, though, I tried this myself, and I saw not just what appear to be false accusations, but also what appear to be spurious quotes, attributed to media sources such as Reuters and the Washington Post. I appreciate that Large Language Models essentially assemble words based on statistical patterns in their training data, and perhaps this one simply strung such words together with punctuation (quotation marks). But I would have thought that its creators would have programmed something to check the output, to confirm that anything reported in quotation marks is actually a legitimate quote.
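To be clear, I have no knowledge of how OpenAI actually processes its output; but the kind of check I have in mind could, in principle, be quite simple. Here is a minimal, purely illustrative sketch, assuming one had access to the source texts a quotation is attributed to (the function names and the example strings below are my own hypotheticals, not anything from OpenAI or from the incident described above):

```python
import re

def extract_quotes(text: str) -> list[str]:
    """Pull out substrings the model presented inside double quotation marks
    (ignoring very short quoted fragments)."""
    return re.findall(r'"([^"]{10,})"', text)

def quote_appears_in_sources(quote: str, sources: list[str]) -> bool:
    """Check whether the quoted passage actually occurs, verbatim
    (after whitespace/case normalization), in any of the source texts."""
    normalize = lambda s: " ".join(s.lower().split())
    needle = normalize(quote)
    return any(needle in normalize(src) for src in sources)

# Hypothetical example: verify a model's answer against the article it claims to quote.
model_answer = 'Reuters reported that "the defendant pleaded guilty to fraud in 2018."'
source_texts = ["... the defendant denied all charges, and no plea was entered ..."]

for q in extract_quotes(model_answer):
    if not quote_appears_in_sources(q, source_texts):
        print(f"Unverified quotation: {q!r}")
```

Something along these lines would flag quotation-marked text that cannot be matched to any source, rather than passing it along to the user as if it were a real quote.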