Everyone seems to be making their New Year's resolutions and publishing them all over the interwebs yet again. (With the possible exception of yours truly, who has thus far only resolved not to run out of gin during the coming cold spell.) One of the more curious ones I’ve seen this week comes from Facebook tycoon Mark Zuckerberg, who has assigned himself a new goal for 2016: he’s going to build an artificial intelligence (AI) butler for his mansion. (Facebook)
My personal challenge for 2016 is to build a simple AI to run my home and help me with my work. You can think of it kind of like Jarvis in Iron Man.
I’m going to start by exploring what technology is already out there. Then I’ll start teaching it to understand my voice to control everything in our home — music, lights, temperature and so on. I’ll teach it to let friends in by looking at their faces when they ring the doorbell. I’ll teach it to let me know if anything is going on in Max’s room that I need to check on when I’m not with her. On the work side, it’ll help me visualize data in VR to help me build better services and lead my organizations more effectively.
Every challenge has a theme, and this year’s theme is invention.
Perhaps Zuckerberg thinks of himself as Iron Man in his spare time. Who’s to say? But the project lends itself to all sorts of possibilities. Monitoring activity inside your house is nothing new, but adding facial recognition to both internal and external cameras puts a new twist on things. Still, that’s all within the grasp of current technology, even if it hasn’t been fully implemented in this fashion. But Mark is taking things to the next level, assuming he succeeds. How is this new AI butler going to decide “if anything is going on in Max’s room” that requires his attention? That seems like an awfully big leap from simple home security. Sure, you could probably set it up to detect open flames if a fire started, or a stranger coming in through the window. But what if she’s climbing on something she shouldn’t be? Will the butler be able to tell the difference between her eating a Tootsie Roll and firing up a joint? Could it tell the difference between her taking a nap and having a seizure? And then it would presumably have to act on that information by alerting the master. That would require something much closer to true artificial intelligence.
Assuming it’s even possible for him to do it, do we even want that? There have been countless stories about the potential horrors of AI once that genie gets out of the bottle. Over the holiday break I was reading this piece at the Washington Post about Swedish-born Oxford philosopher Nick Bostrom, who foresees a future where the real rise of the machines won’t come from a massive government supercomputer which controls all the nuclear weapons, but rather from a simple manufacturing facility that makes paperclips.
Bostrom’s favorite apocalyptic hypothetical involves a machine that has been programmed to make paper clips (although any mundane product will do). This machine keeps getting smarter and more powerful, but never develops human values. It achieves “superintelligence.” It begins to convert all kinds of ordinary materials into paper clips. Eventually it decides to turn everything on Earth — including the human race (!!!) — into paper clips.
Then it goes interstellar.
“You could have a superintelligence whose only goal is to make as many paper clips as possible, and you get this bubble of paper clips spreading through the universe,” Bostrom calmly told an audience in Santa Fe, N.M., earlier this year.
That seems to be the current concern. AI may pop up in a place where nobody was even trying to create an artificial intelligence. There was recently a shutdown of a couple of stock trading programs on Wall Street which apparently “found each other” over the web and began synchronizing their efforts to make the best trades. The problem is, nobody taught them to do that.
What if Mark Zuckerberg’s house gets smarter than he thinks it will? I’m picturing a day later this summer when the digital Jeeves is quietly working on its assignment to make the house as efficient and climate-friendly as possible when, at three in the morning, it suddenly decides that Mark is emitting too much carbon through his breathing. He’d better hope he programs in Isaac Asimov’s Three Laws of Robotics before that happens.