Elon Musk knows about something more dangerous than Kim Jong-un's nukes

First we had “that guy” saying that Kim Jong-un was more reasonable than Donald Trump or whatever. And now we’ve got this guy saying that his weapons aren’t nearly as dangerous as the futuristic gun that we’ve got pointed at our own heads. In today’s episode, “this guy” happens to be Elon Musk. And what’s more dangerous than miniaturized nukes on top of an ICBM that might be able to reach the lower 48? You probably guessed it. Computers. (Fortune)


As many in the United States and abroad are watching as tensions grow with North Korea, Tesla founder and CEO Elon Musk issued a warning about artificial intelligence.

“If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea,” Musk tweeted Friday, referencing Kim Jong Un’s threat of a missile strike on Guam…

Musk also warned that artificial intelligence should be regulated the same way anything that could pose a danger to the public is.

This is not the first time Musk has warned people about AI.

Since I happen to be one of the odd ducks who really thinks that Musk is onto something here (along with Hawking, Gates and all the rest), I was particularly taken with his meme about how the machines are going to win in the end. Here’s the Twitter version of both that and his call for (ugh) more regulation.

I write about this subject from time to time, but this one had a new wrinkle which I thought merited a quick look. Government regulation? Normally anathema to conservatives except where real security threats exist, but what’s a bigger existential threat than this? I mean, we’re only talking about The Forbin Project taking place in the real world and terminator robots sterilizing the planet for our own good, after all.


But there’s a flip side to that coin if you’re willing to follow me down the rabbit hole here for a moment. One of the traditional failures of government regulation is that bureaucrats are incredibly bad at regulating things they don’t understand nearly as well as the profit-driven professionals in the private sector do. When you put political “logic” ahead of scientific and practical reality, you’re almost certainly going to do more damage to the system you’re trying to regulate than any perceived good you intend to pass on to the people.

Who in the federal government is going to regulate artificial intelligence? I mean, these are the same jokers who can’t stop a bunch of teenage blackhats from hacking into the IRS, the Defense Department and their own respective parties’ email systems. And you want them to come up with a bunch of rules for AI? Sadly, the alternative is to turn the job over to the eggheads in the high tech community, but you won’t find much relief there either. The fact is that we don’t have any sort of unanimity among the design community as to how real the threat of self-aware AI actually may be and what should be done to prevent it from shutting down our civilization. The last time Musk raised this flag, Zuckerberg went after him with the opposite opinion.

But if we’re going to listen to anyone, it may as well be Musk. After all, it was his group which just this month unleashed a program capable of beating the best of the best humans in a really tricky video game.


Artificial intelligence took a step forward last night, at an annual tournament for players of the tactical wargame Defense of the Ancients 2. A bot created by the Elon Musk-backed nonprofit OpenAI defeated champion human player Danylo “Dendi” Ishutin in two back-to-back demonstration matches.

Musk hailed the achievement on Twitter, saying that it was a significant advance over what AI had accomplished in more traditional games.

Okay, so it’s still possible that AI will never take over the world and unleash Terminator style robots on all of us. But at a minimum, they’re going to be owning all of your gold in World of Warcraft by the end of the year.

