When are the killer robots coming? They're already here

Normally, when you see a headline here having to do with killer robots, we’re talking about those dog-like monstrosities roaming the basement at Boston Dynamics or artificial intelligence programs run amok. While those stories are only half tongue-in-cheek, others in the tech community have even more pressing concerns on their minds. Those concerns deal with the current state of military technology and the fact that we literally do have autonomous machines out there, programmed with the ability to kill without checking in with a human operator first.

What would happen if some of them got out of control? Well, they’re a lot more dangerous than a Roomba mapping the interior of your apartment while you’re at work. That’s the premise of Louis A. Del Monte, writing at the New York Post this week, who warns that we have to fight the invasion of the killer robots now.

“Killer robots” are taking over. Also known as autonomous weapons, these devices, once activated, can destroy targets without human intervention.

The technology has been with us for years. In 1980, the US Navy started using the Phalanx Close-In Weapon System, an autonomous defense device that can spot and attack anti-ship missiles, helicopters and similar threats. In 2014, Russia announced that killer robots would guard five of its ballistic missile installations. That same year, Israel deployed the Harpy, an autonomous weapon that can stay airborne for nine hours to identify and pick off enemy targets from enormous distances. In 2017, China introduced its own Harpy-type weapon.

But with the US planning to launch drones based on the X-47B in 2023, the invasion of killer robots is reaching a new level. These stealthy, jet-powered autonomous aircraft can refuel in midair and penetrate deep inside well-defended territory to gather intelligence and strike enemy targets, making them a more aggressively lethal tool than anything we’ve seen before.

I can see how people might be put off by the latest version of the X-47B. While you can technically call them “drones” if you wish, they bear no resemblance to the toys you bought your kid for Christmas. First of all, these things are big: the size of small fighter jets. They have folding wings, can be launched from an aircraft carrier, and are even able to refuel autonomously in mid-flight. Oh… and they’re designed to be bristling with weapons. Here’s a short video from Northrop Grumman of one of them launching and conducting an in-flight refueling.

They have tremendous range and firepower, and if one of them gets shot down, you don’t lose a pilot. They can be controlled remotely in real time, but Del Monte’s biggest concern seems to be that they carry enough AI to be programmed to identify targets and take them out on their own if they aren’t in contact with their human masters. They can deliver cannon fire and bombs to targets deemed too dangerous to send manned aircraft after.

So if they’re that “smart” and can really do all that, what happens if something goes wrong? If they start identifying friendly forces or random civilian facilities as targets, we’ll have a serious problem on our hands. There’s an ethics question here as to whether it’s really worth the risk. But at the same time, we know the Russians and the Chinese are working on the same technology, so can we really afford to be left behind?

And then there’s the obvious issue of what happens when the AI wakes up and decides that people are really the biggest problem to be addressed. These beasts make some of the stuff in the Terminator movies look bush-league by comparison. Do we want the AI having access to this level of firepower right out of the gate? Keep in mind that this isn’t just offbeat humor. These are questions that kept Stephen Hawking awake at night. The time to address these issues is now, not after the first X-47Bs start taking out our own military installations.
