"Experts" declare that Artificial Intelligence is no threat to humanity

(AP Photo/Pavel Golovkin)

One of my favorite movies from the 70s is Colossus: The Forbin Project. In it, American scientists build a gargantuan supercomputer named Colossus which possesses artificial intelligence. Its purpose is to control all of the United States’ nuclear weapons and keep us safe from the Soviet Union. Almost immediately, Colossus hooks up with a similar computer system in Russia and proceeds to subjugate the human race. Similar movies have appeared over the years, including the Terminator series. But how much should people really be concerned about the idea of AI “waking up” one of these days and enslaving or eliminating mankind? The New York Post dives into the topic this week, interviewing a number of scientists working in the field. Their conclusion? We really don’t have much to worry about… at least not yet. But the day could be coming when we will need to.


Many fear that artificial intelligence will be the end of humankind – here’s the truth according to experts…

This apocalyptic ideal has been perpetuated through various texts and movies over the years.

Even staple figures in the field of science such as Stephen Hawking and Elon Musk have been vocal about technology’s threat against humanity.

In 2020, Musk told the New York Times that AI would grow vastly smarter than humans and would overtake the human race by 2025, adding that things would get “unstable or weird.”

The Post presents the views of several computer scientists working in the field. The majority opinion is that the current generation of Artificial Intelligence systems consists almost entirely of the “narrow” or “weak” varieties. These are limited to the performance of one or, at most, a few basic tasks. In order to begin approaching Terminator levels of AI, we would need to achieve what is referred to as “artificial general intelligence” (AGI) or, beyond that, “artificial superintelligence” (ASI). Those are the levels of machine intelligence that would be capable of doing things the program was not specifically coded to do.

The problem, according to most scientists interviewed for this report, is that we’re nowhere near being able to design an ASI system and we don’t really even know what steps it would take to get there, assuming it’s even possible. To do so, we would need to be able to design “consciousness” into the algorithm, and the reality is that we’re still not really even sure how human consciousness works.


As someone who has been fascinated by this topic for many years, that sounds like a fair assessment to me, if an unexciting one for science fiction fans. No matter how complex the tasks we put in front of these AI systems or how many possibilities they have to contemplate to complete them, they all have one thing in common. They are only taking in data from the world around them and processing it in the specific ways they have been coded to employ. There has yet to be any proof of an AI system ever having created an original thought of its own.

Of course, the moment one of them does (assuming it happens), we could be very close to end-game territory. But not every scientist agrees that the prospect is far in the future. Just this week, one group in Great Britain declared that some Artificial Intelligence systems are already at least partially aware. And others may have “awoken” without us realizing it. So the debate continues.

But the threat to human survival from AI may not require a system to “awaken” at all. The more realistic and frightening possibility is pointed out by Ben Nye, the Director of Learning Sciences at the University of Southern California’s Institute for Creative Technologies (USC-ICT). AI could be weaponized by a bad actor. “If the AI is explicitly designed to kill or destabilize nations… accidental or test releases of a weaponized, viral AI could easily be one of the next significant Manhattan Project scenarios,” he said.


That’s a terrifying prospect I hadn’t previously considered. Most analysts have relied on the proposition that AI would only become dangerous if it developed some level of consciousness and began doing things it wasn’t programmed to do. But what if some terror group were to gain access to the technology and intentionally program it to find ways to wipe out the human race? Terrorists eventually wind up co-opting most forms of military technology. Is it really so crazy to believe that they might convince an AI engineering team to go to work for them?

With that soothing notion to help you sleep better tonight, I’ll include the trailer for Colossus for you here. If you’ve never seen it, check it out. I highly recommend this film.
