Readers beware: AI has learned to create fake news stories

As far as experts know, the technology has been deployed only by researchers, and it hasn’t been used maliciously. What’s more, it has limitations that keep the stories it produces from being entirely convincing.

But many of the researchers who developed the technology, and people who have studied it, fear that as such tools get more advanced, they could spread misinformation or advance a political agenda. That’s why some are sounding the alarm about the risks of computer-generated articles—and releasing tools that let people ferret out potentially fake stories.

“The danger is when there is already a lot of similar propaganda written by humans from which these neural language models can learn to generate similar articles,” says Yejin Choi, an associate professor at the University of Washington, a researcher at the Allen Institute for Artificial Intelligence, and part of a team that developed a fake-news tool. “The quality of such neural fake news can look quite convincing to humans.”
