Can ChatGPT be sued for libel when it's making stuff up?

Some law professor colleagues and I are writing about whether Large Language Model creators (e.g., OpenAI, the creator of ChatGPT-4) could be sued for libel. Some recent stories allege that ChatGPT-4 does yield false and defamatory statements; Ted Rall wrote an article so alleging yesterday at the Wall Street Journal, and another site published something last Sunday about this as well (though there the apparently false statement was about a dead person, so it's not technically libel). When I tried asking the same questions those authors reported having asked, ChatGPT-4 gave different answers, but that is apparently normal for the model.


This morning, though, I tried this myself, and I saw not just what appear to be false accusations, but what appear to be spurious quotes, attributed to media sources such as Reuters and the Washington Post. I appreciate that Large Language Models just combine words from sources in the training data, and perhaps this one simply assembled such words together with punctuation (quotation marks). But I would have thought that its creators would have programmed something to check its output, to confirm that anything reported in quotation marks is actually a legitimate quote. In the absence of such checks, it appears that such AI tools might produce material that is especially likely to deceive readers (as, say, a fake quote attributed to Reuters might), and especially likely to damage the reputations of the subjects of the quotes.
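To make the suggestion concrete, here is a minimal sketch in Python of the kind of quote check described above. It is purely illustrative: the function names, the similarity threshold, and the assumption that the system has access to the full text of the cited sources are all hypothetical, not a description of anything OpenAI actually does.

```python
import re
from difflib import SequenceMatcher


def extract_quotes(text: str) -> list[str]:
    """Pull out spans the model presents inside double quotation marks."""
    return [m.strip() for m in re.findall(r'"([^"]{15,})"', text)]


def best_match_ratio(quote: str, source_text: str) -> float:
    """Slide a window of the quote's length across the source text and
    return the highest similarity score found (1.0 = verbatim match)."""
    window = len(quote)
    if window == 0 or len(source_text) < window:
        return 0.0
    best = 0.0
    step = max(1, window // 4)
    for start in range(0, len(source_text) - window + 1, step):
        ratio = SequenceMatcher(
            None, quote.lower(),
            source_text[start:start + window].lower()
        ).ratio()
        best = max(best, ratio)
    return best


def flag_unverified_quotes(model_output: str,
                           cited_sources: dict[str, str],
                           threshold: float = 0.85) -> list[str]:
    """Return quotes that cannot be located in any cited source.
    `cited_sources` maps a source name (e.g., "Reuters") to its full text."""
    unverified = []
    for quote in extract_quotes(model_output):
        if not any(best_match_ratio(quote, text) >= threshold
                   for text in cited_sources.values()):
            unverified.append(quote)
    return unverified
```

The point of the sketch is only that the check is mechanically straightforward once the source text is in hand: quotes that cannot be matched against the cited outlet could be suppressed or flagged before the answer is shown to the user.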

[Wow. That’s scary af. It’s not like simple lies don’t take two seconds to fly around the world and stick, then here comes AI with all the legal jargon attached. Good luck fighting that! ~ Beege]
