We really have to stop cranking out science fiction movies. We’re giving the wrong people the wrong ideas.

The Russians have been busy working on a project named Fedor (Final Experimental Demonstration Object Research), which involves developing robots that can perform dangerous tasks we don’t want to burden humans with. (Doing jobs Russians don’t want?) And as long as they’re just plugging up holes in the primary cooling lines of nuclear reactors or rescuing bottles of premium vodka from burning buildings, I don’t suppose anyone would really care. But now, according to Tech Times, Fedor has a new trick up his sleeve. He can pop a cap in your hindquarters with the best of them.

In photos and a short video clip shared on social media, Russia’s Deputy Prime Minister Dmitry Rogozin showed off the humanoid robot Fedor’s new skill: shooting guns with both of its arms.

Fedor (Final Experimental Demonstration Object Research) is a robot designed for space missions. It is set for launch to the International Space Station by 2021 to do tasks that are considered too dangerous for astronauts to do in space…

Fedor was also trained to do a range of other tasks such as screwing in a light bulb, operating a drill and driving a car, but its new ability to shoot guns raised concerns about killer robots.

Well, as long as this thing is the equivalent of a precision welder on a Detroit auto assembly line, there still isn’t much to worry about. Or at least there wouldn’t be were it not for the fact that… you guessed it… this is an artificial intelligence machine. In fact, the developers go on to explain that the only reason they started teaching Fedor to work out on the practice range was that “combat robotics is key to making intelligent machines.”

Well… I know I certainly feel better now.

The Tech Times article by Allan Adamson linked above is, of course, primarily concerned with the safeguards being put in place to prevent some sort of real-world Terminator or I, Robot situation. He discusses the work by Google and Oxford, which are teaming up to develop some sort of “kill switch” for artificial intelligence. There is also an “expectation” that proper coding could prevent robots from ignoring human commands. (Which sounds suspiciously like saying they can’t violate the Three Laws of Robotics.) But such things immediately leave me wondering. If the AI is so vastly superior to human intelligence, wouldn’t it anticipate the existence of a kill switch? And how hard would it really be to rewrite its own programming once it began thinking on its own?

Oh, well. It’s not as if the horse hasn’t already gotten out of the barn. Keep in mind that a lot of the advances in rapid decision-making robotics have been taking place in the field of drone development, where machines may lose contact with their human controllers and/or have to make split-second decisions in navigation. Now… remember this guy?

Yep. It’s a drone with a gun. And I’m sure it’s a lot faster and more nimble in navigating than that Russian bipedal gizmo. Now just add in some artificial intelligence and you’ve got what is probably your future. Some days I’m increasingly glad to be old and unlikely to live to see our glorious, high-tech future.