Here's how we'll know if an AI is conscious

Of course, it’s possible that an artificial consciousness might possess qualia vastly different from our own. In that case, questions about specific qualia, such as color qualia, might not click with the AI. But more abstract questions about qualia themselves should still filter out philosophical zombies, beings that behave like us while experiencing nothing at all. For this reason, the best question of all would likely be that of the hard problem itself: Why does consciousness even exist? Why do you experience qualia while processing input from the world around you? If this question makes any sense to the AI, then we’ve likely found artificial consciousness. But if the AI clearly doesn’t understand concepts such as “consciousness” and “qualia,” then evidence for an inner mental life is lacking.

Building a consciousness detector is no small undertaking. Alongside such a Turing test, tomorrow’s researchers will likely apply today’s abstract theories of consciousness in an effort to infer the existence of consciousness from a computer’s wiring diagram. One such theory considers the amount of information integrated by a brain or other system, and it is already being applied to infer consciousness in brain-injured patients and even in schools of fish. Indeed, even before the push to detect artificial consciousness attracts substantial funding, the need to detect consciousness in brain-injured patients has already erased the C-word from science’s taboo list.
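The theory alluded to here is, presumably, integrated information theory (IIT), which quantifies consciousness as the degree to which a whole system carries information beyond what its parts carry on their own. The toy Python sketch below is purely illustrative and is not from the article: it uses mutual information between two halves of a tiny binary system as a crude stand-in for integration, whereas IIT’s actual quantity (Φ) is defined over a system’s full causal structure and is far harder to compute.

```python
import numpy as np

# Purely illustrative: a crude "integration" measure for a tiny two-part
# binary system, using mutual information between the parts. This only
# captures the intuition that an integrated whole carries information
# its parts do not; it is not IIT's Phi.

def entropy(p):
    """Shannon entropy (in bits) of a probability distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def integration(joint):
    """Mutual information between parts A and B of a joint distribution.

    joint[a, b] is the probability that part A is in state a and
    part B is in state b. Zero means the parts behave independently;
    larger values mean the whole is more than the sum of its parts.
    """
    p_a = joint.sum(axis=1)  # marginal distribution of part A
    p_b = joint.sum(axis=0)  # marginal distribution of part B
    return entropy(p_a) + entropy(p_b) - entropy(joint.ravel())

# Two perfectly correlated binary parts vs. two independent ones.
correlated = np.array([[0.5, 0.0],
                       [0.0, 0.5]])
independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])

print(integration(correlated))   # 1.0 bit: the whole exceeds its parts
print(integration(independent))  # 0.0 bits: no integration
```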

My own lab, led by Martin Monti at the University of California, Los Angeles, strives to improve the lives of brain-injured patients by developing better means of inferring consciousness from electrical or metabolic brain activity. Just as ethical tragedies arise when we pull the plug on patients who are aware yet unresponsive, so too will tragedies arise if we pull the plug on an artificial consciousness.
