Artificial journalism: as bad as you would expect

Earlier today I wrote about Microsoft dumping $10 billion into OpenAI, maker of ChatGPT, the politically biased and outright dishonest AI chatbot that Microsoft hopes will make it some serious dough.

Imagine my (non)surprise when I ran across a story in Futurism revealing that CNET has been quietly publishing stories written entirely by AI, and that these “news” items are riddled with errors and blatant plagiarism.

I am not certain whether to be scandalized or simply resigned to this new form of “journalism.” My heart leans toward the former and my head toward the latter. After all, every Marvel movie could have been written by an artificial-intelligence algorithm. Even the company itself sort of admitted as much when the season finale of She-Hulk had the character storm up to the “writers’ room” to complain about how bad the writing was, and the author turned out to be an AI.

CNET, though, went beyond simply being formulaic. They actually turned authorship of published pieces over to a chatbot, and it turns out that computers can be very sloppy and unethical. Who could have guessed?

When confronted about their use of AI, CNET at first insisted that every piece was edited and fact-checked by a human. If so, their human editors are also sloppy and unethical.

For those of you unfamiliar with it, CNET is one of the largest tech sites on the internet, both as a standalone publication and as the owner of a number of sites covering other industries, including finance.

With content creation being one of the most competitive industries out there, it is unsurprising that companies are looking for ways to cut expenses. Microsoft obviously wants to cash in on that fact with its OpenAI investment, and CNET (we don’t know which AI product it used) clearly hoped that integrating AI into its workflow would streamline production and reduce costs.

Futurism reports:

Now, a fresh development may make efforts to spin the program back up even more controversial for the embattled newsroom. In addition to those factual errors, a new Futurism investigation found extensive evidence that the CNET AI’s work has demonstrated deep structural and phrasing similarities to articles previously published elsewhere, without giving credit. In other words, it looks like the bot directly plagiarized the work of Red Ventures competitors, as well as human writers at Bankrate and even CNET itself.

Jeff Schatten, a professor at Washington and Lee University who has been examining the rise of AI-enabled misconduct, reviewed numerous examples of the bot’s apparent cribbing that we provided. He found that they “clearly” rose to the level of plagiarism.

We asked Schatten what would happen if a student turned in an essay with a comparable number of similarities to existing documents with no attribution.

“They would be sent to the student-run ethics council and given the repeated nature of the behavior would almost certainly be expelled from the university,” he replied.

You have to admit that there are multiple levels of irony here. Schools and universities are worried that students will use chatbots to write their papers (essentially plagiarizing the chatbot), when it turns out that the chatbots themselves are serial plagiarists. Plagiarize the plagiarists. Nice.

I’m not sure how a chatbot could be anything but a plagiarist. After all, these “AIs” don’t actually think for themselves, so they pretty much have to regurgitate whatever garbage is fed to them. They sometimes make spurious connections (leading to the errors) but can’t come up with anything that adds to what they are told.

So the algorithms are designed to push words around, nothing more.
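
For the technically inclined, here is that idea in toy form: a minimal sketch of a word-shuffling generator (a simple Markov chain, vastly cruder than the neural networks behind ChatGPT, and purely my own illustration rather than anyone’s actual product; the function names are invented for the example). Every word it “writes” comes verbatim from the text it was fed.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Record, for each word, the words that follow it in the source text."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, max_words=15):
    """'Write' by repeatedly picking a word that followed the previous one.
    Every word emitted appears verbatim somewhere in the source text."""
    output = [start]
    for _ in range(max_words - 1):
        followers = chain.get(output[-1])
        if not followers:
            break  # dead end: the model has nothing it hasn't already seen
        output.append(random.choice(followers))
    return " ".join(output)

# Feed it some text and it can only ever hand that text back, rearranged.
source = ("the bot does not think about the words it was fed "
          "so the words it was fed are all it can give back")
print(generate(build_chain(source), "the"))
```

Run it a few times and you get plausible-sounding strings that are nothing but rearranged fragments of the input: spurious joins produce the errors, and everything else is someone else’s prose.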

In an ideal world a chatbot could conceivably make connections that human beings, with more limited access to information and working at a slower pace, might never make. That is the ideal the developers are working toward. But no machine can ever exercise common sense, which makes those connections suspect.

They can spit out nonsense, because they have no sense. So far only a human being can make the leap to the conclusion “that can’t be right.”

AI, so far at least, is hardly ready for prime time. The push is on to get it out there because there is money to be made, or at least saved, as in the case of CNET. And if being right is less important than spewing bits onto a screen, it just makes sense to have a machine spit the bits.

And if you think about it, most “news” sources don’t care much about getting it right. There is a voracious appetite for content, and the most important quality the MSM looks for in what it dispenses to the public is that it fits the Narrative. ChatGPT delivers that, so expect it to become an important “research” tool in media.

Could Lawrence O’Donnell secretly be a robot? At the very least, he simply spits out what he has been told to say, so what’s the difference?
