Despite AI’s promise, certain thinkers are deeply concerned about a time when machines might become fully sentient, rational agents—beings with emotions, consciousness, and self-awareness. “The development of full artificial intelligence could spell the end of the human race,” Stephen Hawking told the BBC in 2014. “Once humans develop artificial intelligence, it would take off on its own, and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
This explosion of artificial intelligence—often referred to as the singularity—is one of many futures technologists have envisioned for robots, and not all of them are so apocalyptic. But the possibility of any threat to humans, even a small one, is real enough that some are advocating precautionary measures. More than 8,000 people, including Hawking, Noam Chomsky, and Elon Musk, have signed an open letter warning against potential “pitfalls” of AI development. Ryan Calo, a University of Washington law professor, argues for the creation of a Federal Robotics Commission to monitor and regulate developments so that we don’t innovate irresponsibly.
While concerns mostly center on economics, government, and ethics, there’s also “a spiritual dimension to what we’re making,” Kelly argues. “If you create other things that think for themselves, a serious theological disruption will occur.”