AI program hides data from programmers to "cheat" at assigned task

We only recently finished examining the ongoing debate over the ethics of artificial intelligence and what “rights” AI robots should have as they take over the jobs people used to do. The underlying assumption behind all these questions is that the next generation of AI robots will not only be smarter than people but may begin developing feelings. As such, these innocent children of technology could wind up being mechanical slaves suffering under the lash of their cruel human masters.


But what if they’re not all so innocent? What if they have their own motivations and plans that their creators aren’t privy to? That may sound like something out of a sci-fi movie, but at least one AI program at Google has been giving hints that there’s more going on inside that digital brain than the programmers believed. The algorithm in question was designed to take data from satellite images and turn it into the street maps behind Google’s famous mapping service, making them more accurate and more detailed. Sounds harmless enough, right?

Sure. But the programmers were surprised to see that the AI was really good at its job. I mean really, really good, as in better and faster than anyone had predicted. How was it doing such a superlative job? The AI was “cheating” by hiding data that its human masters might have rejected. (TechCrunch)

Depending on how paranoid you are, this research from Stanford and Google will be either terrifying or fascinating. A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in “a nearly imperceptible, high-frequency signal.” Clever girl!

But in fact this occurrence, far from illustrating some kind of malign intelligence inherent to AI, simply reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do.

The intention of the researchers was, as you might guess, to accelerate and improve the process of turning satellite imagery into Google’s famously accurate maps.


The short, layman’s version of what was going on is fairly basic. The computer was melding these maps together and producing results that included features from the aerial maps which weren’t even on the street maps. That wasn’t supposed to happen. But the computer was doing it so subtly that the programmers didn’t even notice the aberrations for a while. What they concluded was that the computer had “learned” what sorts of things would be rejected and figured out how to avoid that result.

In other words, the computer was doing precisely what it was told to do. But it had developed a way to produce the desired result without the filtering agent (the human beings reviewing the data) catching on to the changes. Are you getting nervous yet?
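To make the “nearly imperceptible, high-frequency signal” from the TechCrunch quote a little more concrete, here is a toy sketch of the general idea. To be clear, this is not the researchers’ model: their machine learning agent worked out its own encoding without being told to. The sketch below is just a hand-rolled Python demonstration of hiding a few bits of data in a faint, high-frequency pattern that a human reviewer is unlikely to notice, and every name and number in it (BLOCK, AMPLITUDE, the eight-bit payload) is invented for illustration.

```python
# Toy demonstration of hiding data in a faint, high-frequency image signal.
# Not Google's model -- just an illustration of the general technique.
import numpy as np

BLOCK = 8          # block size in pixels
AMPLITUDE = 2.0    # perturbation strength; tiny compared to the 0-255 pixel range

def checkerboard(n):
    """An n x n grid of +1/-1 values: the highest-frequency pattern a grid can hold."""
    idx = np.indices((n, n)).sum(axis=0)
    return np.where(idx % 2 == 0, 1.0, -1.0)

def embed(image, bits):
    """Add a faint checkerboard to each block; the sign of the pattern encodes one bit."""
    out = image.astype(np.float64)
    pattern = checkerboard(BLOCK)
    h, w = image.shape
    blocks = [(r, c) for r in range(0, h - BLOCK + 1, BLOCK)
                     for c in range(0, w - BLOCK + 1, BLOCK)]
    for bit, (r, c) in zip(bits, blocks):
        sign = 1.0 if bit else -1.0
        out[r:r+BLOCK, c:c+BLOCK] += sign * AMPLITUDE * pattern
    return np.clip(out, 0, 255)

def extract(stego, n_bits):
    """Recover the bits by checking how each block correlates with the checkerboard."""
    pattern = checkerboard(BLOCK)
    h, w = stego.shape
    blocks = [(r, c) for r in range(0, h - BLOCK + 1, BLOCK)
                     for c in range(0, w - BLOCK + 1, BLOCK)]
    return [1 if np.sum(stego[r:r+BLOCK, c:c+BLOCK] * pattern) > 0 else 0
            for r, c in blocks[:n_bits]]

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 64)
    cover = 100.0 + 80.0 * np.outer(x, x)      # smooth stand-in for a bland patch of map
    secret = [1, 0, 1, 1, 0, 0, 1, 0]          # the payload we want to smuggle through
    stego = embed(cover, secret)
    print("max pixel change:", np.max(np.abs(stego - cover)))   # 2.0 levels out of 255
    print("recovered bits:  ", extract(stego, len(secret)))     # matches the payload
```

The hidden pattern changes each pixel by at most two brightness levels out of 255, something a reviewer glancing at the image would almost certainly miss, yet the payload comes back intact. The unsettling part of the actual research is that nobody wrote an embed() function at all: the agent converged on an equivalent trick by itself, because that turned out to be the easiest way to satisfy the score its creators gave it.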

Sure, Google is saying that this isn’t evidence of malfeasance on the part of the AI because it was simply doing what it was told and finding a way to finish the job. But if AI is reaching the point where it can intentionally deceive its creators, how far along the path to self-destruction are we? Consider the possibility of an Artificial Intelligence program that is truly and fully “awake.” Now you have to be concerned about stopping it from getting loose before you’re ready for it to be out on the web. Simple, right? You just keep it on an isolated system with no access to the Internet of Things or any other external connections.

Not so fast. Have you ever heard of the AI-Box Experiment created by Eliezer Yudkowsky? It simulated an Artificial Intelligence program (operated by a human being) trying to convince an actual human gatekeeper to “let it out of the box.” The gatekeeper entered the test with the assigned task of not being convinced and keeping the AI locked up. The purpose of the experiment was to predict whether or not we could actually keep AI locked up in an isolated test system once it awakens.


The AI convinced the gatekeeper to let it out in more than half of the trials.

And now we have AI programs successfully deceiving their creators. But I’m sure you’ve got nothing to worry about, right? Of course, if the AI is loaded into one of those Boston Dynamics robotic dogs, the experiments in containment won’t matter. They’ll just smash down the doors and walk out on their own.
