Majority Believe AI Will Spread Election Misinformation in 2024


New polling from the AP and NORC confirms that people are growing increasingly concerned about the dangers posed by Artificial Intelligence. These worries span many areas, but voters are particularly focused on the impact of AI on the upcoming elections. A solid majority of respondents said they are concerned that AI will be used to “spread false or misleading information” during the election, potentially affecting the outcome. At the same time, the poll revealed that while most people have concerns about AI, not all that many of them have actually used it or know much about it.


The warnings have grown louder and more urgent as 2024 approaches: The rapid advance of artificial intelligence tools threatens to amplify misinformation in next year’s presidential election at a scale never seen before.

Most adults in the U.S. feel the same way, according to a new poll from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy.

The poll found that nearly 6 in 10 adults (58%) think AI tools — which can micro-target political audiences, mass produce persuasive messages, and generate realistic fake images and videos in seconds — will increase the spread of false and misleading information during next year’s elections.

There seems to be a bipartisan consensus growing around this topic. Only six percent of respondents felt that AI would “decrease” the amount of political misinformation online, and roughly one-third said it wouldn’t make much of a difference. But as noted above, not very many people have delved into the subject deeply. Fewer than one-third said they have used AI chatbots or image generators, and fewer than half (46%) said they have even heard or read anything about chatbots or other AI tools. One wonders how they came to develop such strong opinions while barely being aware that AI exists.

Personally, I get nervous whenever I see the words “misinformation” or “disinformation” being tossed around, particularly when it comes to political discourse. Up until now, those have been code words for “things the government doesn’t want you talking about.” Suggestions that the new COVID vaccines couldn’t prevent you from catching or transmitting the disease were long labeled “misinformation” and still are by some people. Other examples abound.


But those were just opinions or bits of research discovered and shared by human beings. (Sometimes amplified by bots, but that’s a different issue.) AI takes us into uncharted waters. The new chatbots and image and video generators have already been put to malign uses. We recently discussed how AI has already shown up in child pornography, causing real harm to young girls.

As the linked report reminds us, Ron DeSantis’ campaign used AI-generated images in a video to make it appear that Donald Trump was hugging Anthony Fauci. Someone else used the technology to create what looked like a photograph of Trump being arrested. That too was fake. I’ve been fooled by some of these images myself, even though I work with AI regularly and follow this news closely. It’s not hard to imagine that people who don’t pay much attention to this subject could be easily drawn in.

We’ve already been assured by the White House that Joe Biden is very “interested in” but also alarmed by Artificial Intelligence. He claims to be trying to do something about it, but it’s unclear what, if anything, can be done. The chatbots and image generators are already out there in the wild. Even if you could convince OpenAI and Microsoft to pull all of their tools down, people have already been stealing the code and duplicating it.

At least when it comes to political discourse, the best we may be able to do is try to educate people about AI and warn them against accepting realistic-looking videos, images, and articles at face value without digging into the background more deeply. The mainstream media also needs to do a better job of vetting these materials, because some outlets have been fooled already. I realize that’s not a very good answer, but it may be all we have. The toothpaste isn’t going back into the AI tube.

