Our final invention: How the human race goes and gets itself killed
And here’s the kicker. Barrat argues that the time it would take an AI to leap from human-level intelligence to superintelligence, rendering us ant-like in comparison, could be a matter of days, if not mere hours, after it is created. Worse (it keeps getting worse), human researchers may not even know they have created this potent ASI until it is too late to attempt to contain it. An ASI birthed in a supercomputer may choose, Barrat writes, to hide itself and its capabilities, lest the human masters it knows so much about attempt to shut it down. Then it would silently replicate itself and spread. With no need to eat or sleep, and with an intelligence that is constantly improving and war-gaming survival strategies, an ASI could hide, wait and grow its capabilities while humanity plods along, blissfully unaware.
Though we would have played a role in creating it, the intelligence we would be faced with would be completely alien. It would not be a human mind, with its experiences, emotions and logic, or lack thereof. We could not anticipate what an ASI would do because we simply do not “think” the way it would. In fact, we have already arrived at the alarming point where we do not understand what the machines we have created do. Barrat describes how the makers of Watson, IBM’s Jeopardy-winning supercomputer, could not explain how the machine was arriving at its correct answers. Its behavior was unpredictable to its own creators — and Watson is not the only such inscrutable “black box” system in existence today, nor is it even a full-fledged AGI, let alone an ASI.
Barrat grapples with two big questions in the book. The first is why an ASI necessarily leads to human extinction. Aren’t we programming it? Why couldn’t humanity leverage it, like we do any technology, to make our lives better? Wouldn’t we program in safeguards to prevent an “intelligence explosion” or, at a minimum, contain one when it bursts?
According to Barrat, the answer is almost certainly no.