Two Journalists Sue ChatGPT Over Plagiarism

For those of us who regularly interrogate ChatGPT and the other popular artificial intelligence chatbots, it's easy to be amazed by the breadth of topics the systems are able to draw upon. While I was recently working on an article for another outlet about Out-of-Place Artifacts (Ooparts), ChatGPT rapidly fed me obscure information that I could only verify by dredging up newspaper clippings from the 1860s. But where do these AI systems get all of this data? The secret lies in the massive data libraries the bots are "trained" on. All of that information, however, was originally generated by human beings, none of whom are given credit when the bots regurgitate it. Two veteran journalists from Massachusetts recently began tinkering with ChatGPT and discovered that their decades of work, including multiple books that they had published, were being mined in this fashion without any credit being given to them. They enlisted the help of a relative to bring a lawsuit against OpenAI, claiming plagiarism and copyright infringement. Such suits are common when another writer steals your work without attribution, but can someone sue a computer algorithm? (Associated Press)

When two octogenarian buddies named Nick discovered that ChatGPT might be stealing and repurposing a lifetime of their work, they tapped a son-in-law to sue the companies behind the artificial intelligence chatbot.

Veteran journalists Nicholas Gage, 84, and Nicholas Basbanes, 81, who live near each other in the same Massachusetts town, each devoted decades to reporting, writing and book authorship.

Gage poured his tragic family story and search for the truth about his mother’s death into a bestselling memoir that led John Malkovich to play him in the 1985 film “Eleni.” Basbanes transitioned his skills as a daily newspaper reporter into writing widely-read books about literary culture.

This lawsuit is only one of many that have already been filed by authors and journalists. The list of plaintiffs is lengthy, including fiction authors ranging from John Grisham to George R. R. Martin and media outlets such as the New York Times and the Chicago Tribune. All of their work has been fed into the chatbots' data libraries and is regularly cobbled together to fulfill user requests without any mention of the original sources being drawn upon.

There are a number of problems with these claims of a lack of attribution, however. First of all, they're not entirely true. It requires some extra clicking, but some (not all) of the answers that ChatGPT delivers are accompanied by a tag that says "searched five sources" or however many it drew upon. If you click on that tag, it will display the names and dates of the sources. There is another button labeled "Get citations" that can produce similar results. So in reality, at least some results are attributed.

It's also worth noting that most plagiarism cases are brought on behalf of one (human) author or publishing entity against another writer who purloined their work, presumably for some sort of profit, without compensating them. ChatGPT is an algorithm, not a person. It has no assets of its own to seize. OpenAI offers the public version of its products for free. (I do pay a small fee for access to the latest beta versions when they are released, but it's minimal.) The company makes its money selling the underlying technology to other companies, which incorporate it into their own products. How are the courts supposed to identify who the actual "thief" is in these cases and force them to pay up?

If these lawsuits somehow wind up being successful, that could eventually bring about the end of this generation of chatbots. Without their data libraries, they would be useless, empty shells of software. We can debate whether that would be a net benefit to mankind at a later date, but it would represent a massive blow to artificial intelligence in general. I would suggest that a better target for such lawsuits would be the users of ChatGPT who publish "new" content based on their AI search results without attributing the original sources. After all, prior to the advent of AI, anyone could go to a library to research a subject and then write about it. That's how research was always done in the past. But if you directly quote someone else's original work without attribution, you can land in hot water. In this scenario, it's easier to think of ChatGPT as the library, not the author. We will need to wait and see how the ongoing lawsuits play out, but this strikes me as an unproductive line of attack for now.
