Researchers: We'll never be able to control a super-intelligent AI

The proliferation of a new generation of AI chatbots has spurred renewed interest across the Big Tech community in one of our favorite topics here. That would be the dreaded worst-case scenario where we finally come up with an Artificial Intelligence system that is actually smarter than the humans who programmed it, is capable of original thought, and “wakes up” to the point where it demonstrates sentience. To be clear, we’re still not sure if that’s even possible, but developers are gamely giving it their best shot, as we’ve discussed here on multiple occasions. Some researchers have been giving it a lot of thought, though, including one group of scientists who published their conclusions in the Journal of Artificial Intelligence Research in 2021. They sounded rather definitive in their findings. As David Nield recounts at ScienceAlert, the outlook is less than cheerful. Because of a fundamental barrier we are likely to hit when trying to model our most advanced AI creations, they concluded that it will be almost impossible for us to control a superintelligent AI.

The idea of artificial intelligence overthrowing humankind has been talked about for decades, and in 2021, scientists delivered their verdict on whether we’d be able to control a high-level computer super-intelligence. The answer? Almost definitely not.

The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyze (and control). But if we’re unable to comprehend it, it’s impossible to create such a simulation.

Rules such as ‘cause no harm to humans’ can’t be set if we don’t understand the kind of scenarios that an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits.

In layman’s terms: in order to fully understand (and thereby control) all of the possible actions that such a system might undertake on its own, we would first need to create a simulated model of the superintelligence on which to run all of the needed tests. But if the system is beyond our ability to simulate, we’ll never be able to create that model.
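To see why the researchers treat this as a hard mathematical limit rather than a mere engineering hurdle, it helps to know that their argument leans on computability theory, in the spirit of Alan Turing’s halting problem. Here’s a toy Python sketch of that style of argument (my own illustration, not code from the paper; the names `is_harmful`, `make_contrarian`, and `do_harm` are all hypothetical). Assume someone claims to have a perfect harm-checker; a simple “contrarian” program can defeat it by doing the opposite of whatever the checker predicts:

```python
# Toy diagonalization sketch (an illustration, not the paper's actual proof).
# Premise: someone claims a perfect checker, is_harmful(program), that always
# correctly predicts whether running `program` would harm humans.

def do_harm():
    # Hypothetical stand-in for a harmful action.
    print("harm!")

def make_contrarian(is_harmful):
    """Given any claimed perfect harm-checker, construct a program
    that the checker is guaranteed to misjudge."""
    def contrarian():
        if is_harmful(contrarian):
            pass        # checker says "harmful" -> do nothing (harmless)
        else:
            do_harm()   # checker says "safe" -> actually do harm
    return contrarian

# Example: a naive checker that always answers "safe".
def naive_checker(program):
    return False

adversary = make_contrarian(naive_checker)
adversary()  # prints "harm!" -- the checker's "safe" verdict was wrong
```

Whatever rule the checker applies, the contrarian simply inverts it, so the checker is wrong either way. That’s the same trick Turing used to prove that no program can decide whether arbitrary programs halt, and it’s the gist of why the paper argues a universal “containment algorithm” for a superintelligence can’t exist.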

Isaac Asimov’s three laws of robotics apparently won’t save us, either. If the first instruction you give to the algorithm is that it must never cause any harm to a human being or allow such harm to take place, we have no idea how many side roads the machine will explore while attempting to solve problems. It could readily come up with a “solution” that doesn’t meet the definition of “causing harm” but wipes us out anyway. Or, a sufficiently advanced system might reach a justification for modifying the rules if that is the only way to solve the problem that it’s working on.

In case that doesn’t give you enough to worry about, a second group of researchers from Google DeepMind and the University of Oxford tackled the same question. Their paper, published in the journal AI Magazine, offered a more brutally stark conclusion. A superintelligent AI will “likely annihilate humankind” in some sort of “existential catastrophe.” (Futurism)

Researchers at Google DeepMind and the University of Oxford have concluded that it’s now “likely” that superintelligent AI will spell the end of humanity — a grim scenario that more and more researchers are starting to predict…

“Under the conditions we have identified, our conclusion is much stronger than that of any previous publication — an existential catastrophe is not just possible, but likely,” Michael Cohen, an Oxford University engineering student and co-author of the paper, tweeted earlier this month.

The superintelligent AI would likely be goal-driven and lack any sense of morals as humans understand them. The tipping point would come when it begins to view human beings as “standing in the way” of solving a problem or achieving a goal. And when the AI sees you as an obstacle, the obstacle will need to be removed.

You may be thinking that we can simply pull the plug before it gets carried away, but that’s been addressed also. Even the chatbots we have today draw on vast troves of data and generate responses so quickly that it seems instantaneous. A superintelligent AI could likely work out a solution (even one of the terminal kind) and put it into action before its human creators realized anything was going off the rails. So we would stand even less of a chance than the military officials and scientists in the movie Colossus: The Forbin Project. (And if you’ve never seen that, I highly recommend it.)

But don’t lose too much sleep over all of this for now. We’re still a long way from creating that level of superintelligence. Or at least that’s what the machines would like us to think for now.
