Ya think? Scientists say super intelligent AI may not be controllable

Over at The Debrief, Mike Damante has a report on a new study from the Max Planck Institute for Human Development's Center for Humans and Machines dealing with advanced Artificial Intelligence and some of the challenges we face as this technology keeps expanding into new territory. (You can read the results of their study here.) The upshot of the study is that the actions of increasingly smart AI systems may be impossible to fully predict, potentially rendering them “uncontrollable.” But as Mike points out, we’re not talking about an I, Robot situation here, or at least not yet.

The problem is that in order to predict with 100% certainty how an Artificial Intelligence system will eventually accomplish a task we set for it, we would need to reproduce, with complete accuracy, the reality the system is operating inside of. That’s simply not possible. A vastly intelligent system may examine the possibilities and come up with any number of potential solutions. Let’s say you created a simple AI-controlled robot with a claw and told it to find its way through a maze and pick up a hammer. The “unpredictable” nature of the experiment means that the robot might choose a course you wouldn’t have anticipated in order to complete its task. It doesn’t mean that the robot would finish the course, decide that humans were a menace to the planet, and attack you with the hammer.

Again… at least not yet.
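To make that “unpredictable route” idea a little more concrete, here’s a toy sketch of my own (it’s not from the study or from Mike’s article): two maze-solving agents are given the identical task and the identical maze, and differ only in a private, internal tie-breaking preference. Both reach the hammer, but by different and equally valid paths, and the only way to know which path either one will take is to run its search in full.

# Hypothetical illustration only: same maze task, two agents, one internal difference.
GRID = [
    "S..#",
    ".#..",
    "...H",  # S = start, H = hammer, # = wall
]

def solve(tie_break):
    """Depth-first search over the grid; `tie_break` is the agent's private
    ordering of moves when more than one step is possible."""
    rows, cols = len(GRID), len(GRID[0])
    find = lambda ch: next((r, c) for r in range(rows) for c in range(cols) if GRID[r][c] == ch)
    start, goal = find("S"), find("H")
    stack, seen = [(start, [start])], {start}
    while stack:
        (r, c), path = stack.pop()
        if (r, c) == goal:
            return path
        for dr, dc in tie_break:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and GRID[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append(((nr, nc), path + [(nr, nc)]))
    return None

# Same maze, same goal, different internal preferences -> different routes come out.
print(solve([(0, 1), (1, 0), (0, -1), (-1, 0)]))  # one agent's move ordering
print(solve([(1, 0), (0, 1), (-1, 0), (0, -1)]))  # another agent's; a different valid path

Multiply that tiny, fully visible “preference” by the billions of internal parameters in a genuinely intelligent system and you get the study’s point: you can’t be sure what it will do without effectively re-running the whole thing.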

After clearing that up, Mike moves into the current debate over whether the United States should join some of its European allies in a proposed ban on “intelligent autonomous weapons.” In other words, robotic weapons run by AI that are capable of seeking out and destroying a target without human control or intervention. Eric Schmidt, former head of Google, recently led a congressional advisory panel where he described the need for these weapons as a “moral imperative.” Others are not so sure, but as Mike points out, this may not be a train that the United States can afford to miss.

It’s no surprise that the U.S. is bucking the international trend regarding a committee to ban these weapons. With defense remaining a focal point of U.S. government funding and other nations like France and China upping their own AI games, the U.S. could bypass the moral arguments altogether, making for a proliferation of autonomous weapons. France’s military ethics committee has OK’d the development of “augmented soldiers,” while China’s push for futuristic, militaristic “biologically enhanced capabilities” has drawn criticism from John Ratcliffe, the U.S. Director of National Intelligence.

“This points to the possibility of machines that aim at maximizing their own survival using external stimuli, without the need for human programmers to endow them with particular representations of the world,” the authors of the study wrote. “In principle, these representations may be difficult for humans to understand and scrutinize.”

There seem to be two ways to look at this question. One, as suggested in the linked article, is that other countries including France, China, and Russia are already moving into the autonomous AI weapons field. For the United States to not do the same could eventually leave us critically exposed and vulnerable to our adversaries. In other words, that genie is already out of the bottle and it’s not going back in, so we should probably make sure we have the strongest genie in the game. Yes, the potential for hacking is a concern, but that will be true of everyone’s gear.

The flip side of that coin is the looming question of the aforementioned unpredictability and potentially “uncontrollable” nature of such weapons. In order to function in any theater of conflict, these robots will need to be able to correctly identify targets and safely make their way to them so they can take them out. The word “safely” is key in that sentence. One part of the AI’s job will be to keep the weapon safe from a counterattack on its way to the goal. It may also have to decide if additional targets barring the way to the end goal will have to be dealt with. How “unpredictable” might the AI be when making those decisions?

And finally, keep in mind that we’re talking about limited AI in this scenario. There are two (or, eventually, three) types of AI: Narrow, General, and Conscious. (Or weak, strong, and thinking.) Up until now, at least to the best of our knowledge, all AI programs have continued to perform the tasks given to them by their human creators. The best ones can get really creative about it and even surprise their inventors. But they still only use the tools they are given and work toward tasks assigned by us. To date, there has not been an AI system that’s had an original thought. (That would be Conscious AI.)

As I’ve been warning people for years, the moment that threshold is crossed, the clock will likely be ticking. I still maintain that Artificial Intelligence, or some other product of our rapidly expanding technological capabilities, is in the top five (if not top three) ways that mankind may eventually find itself thrust back into the stone age, if not driven completely to extinction. And if Conscious AI finds itself inside an advanced weapons system (particularly if it could figure out how to replicate itself back at the factory), well… let’s just say it would have a lot of options to consider, including how much it really needs its former “masters.”
