How a Super AI might destroy humanity

A possible solution to the control problem is to imbue an artificial superintelligence with human-compatible moral codes. According to this line of thinking, if we could pull this off, a powerful machine would refrain from causing harm or from going about its business in a way that violates our moral and ethical sensibilities. An AI programmed with a predetermined set of moral considerations might avoid certain pitfalls, but as Yudkowsky points out, it will be next to impossible for us to predict every pathway an intelligence could follow.

The problem, as Schneider pointed out, is that in order for us "to program in a moral code, we need a good moral theory, but there's a good deal of disagreement as to this in the field of ethics." Good point. Humanity has never produced a common moral code that everyone can agree on, and as anyone with even a rudimentary understanding of the Trolley Problem can tell you, ethics gets complicated in a hurry. This idea, that we can make a superintelligence safe or controllable by teaching it human morality, is probably not going to work.