How humans will lose control of artificial intelligence

Some of those problems are already in front of us, and we might miss them if we’re looking for a Skynet-style takeover by hyper-intelligent machines. “While you’re focused on this, a bunch of small things go unnoticed,” says Dr. Finale Doshi-Velez, an assistant professor of computer science at Harvard, whose core research includes machine learning. Instead of trying to prepare for a superintelligence, Doshi-Velez is looking at what’s already happening with our comparatively rudimentary A.I.

She’s focusing on “large-area effects,” the overlooked flaws in our systems that can do massive damage, often unnoticed until after the fact. “If you were building a bridge and you screw up and it collapses, that’s a tragedy. But it affects a relatively small number of people,” she says. “What’s different about A.I. is that some mistake or unintended consequence can affect hundreds or thousands or millions easily.”

Take the recent rise of so-called “fake news.” What caught many by surprise should have been completely predictable: When the web became a place to make money, algorithms were built to maximize money-making. The ease of news production and consumption, heightened by the proliferation of the smartphone, forced writers and editors to fight for audience clicks by delivering articles optimized to trick search engine algorithms into placing them high in search results. The ease of sharing stories and the erasure of gatekeepers allowed audiences to self-segregate, which then penalized nuanced conversation. Truth and complexity lost out to shareability and to making readers feel comfortable (Facebook’s driving ethos).
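
To make that incentive concrete, here is a rough, hypothetical sketch of an engagement-driven feed ranker; the names and weights are invented for illustration and are not any real platform’s code. Because the score counts only predicted clicks and shares, nuance never enters the objective, so sensational stories rise by construction.

```python
# Toy, hypothetical sketch of an engagement-driven feed ranker.
# Field names and weights are invented; this is not any platform's real code.
from dataclasses import dataclass
from typing import List


@dataclass
class Article:
    headline: str
    predicted_clicks: float   # estimated click-through rate
    predicted_shares: float   # estimated share rate
    nuance: float             # 0 = pure outrage bait, 1 = careful reporting


def engagement_score(a: Article) -> float:
    """Revenue proxy: only clicks and shares count, so nuance is never rewarded."""
    return 0.6 * a.predicted_clicks + 0.4 * a.predicted_shares


def rank_feed(feed: List[Article]) -> List[Article]:
    """Sort by engagement alone; repeated across millions of feeds, this is
    the kind of quiet, large-scale tilt the article describes."""
    return sorted(feed, key=engagement_score, reverse=True)


if __name__ == "__main__":
    feed = [
        Article("You won't BELIEVE what they found", 0.9, 0.8, nuance=0.1),
        Article("A careful look at the budget numbers", 0.2, 0.1, nuance=0.9),
    ]
    for a in rank_feed(feed):
        print(f"{engagement_score(a):.2f}  {a.headline}")
```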
