One day in the not-too-distant future, it’s possible that artificial intelligence will reach the point where it “wakes up” and begins acting like a true, sentient entity. (In my opinion, it’s also possible that there’s something unique about the human brain/soul which can’t be reproduced inside a computer, meaning machines will always fall short of true sentience.) Combine that possibility with the rapid advancements we’re seeing in robotics and all sorts of scenarios open up. Since most robots are built to perform some sort of work, questions of ethics are now being raised. Andrew J. Sherman, writing at CNBC, attempts to tackle some of these issues. For example… will these new robotic workers have any sort of “rights” similar to those of human employees?
By the year 2025, robots and machines driven by artificial intelligence are predicted to perform half of all productive functions in the workplace. What is not clear is whether the robots will have any worker rights.
Companies across many industries already have robots in their workforce. DHL uses autonomous robots by Fetch Robotics to help fulfillment center and warehouse employees, while Toyota, Google and Panasonic are among the companies that use Fetch’s mobile manipulator technology in research efforts.
Humans already have shown hatred toward robots, often kicking robot police resources over or knocking down delivery bots in hopes of reclaiming a feeling of security or superiority. Incidents of violence against machines are nothing new. Man has been at odds with machines for many decades. We kick the car when it does not operate, shove the vending machine when it does not dispense, and bang at the sides of the printer when it does not produce a copy. What is new is that it will only be a matter of time before the automated creatures will “feel” this hostility and/or feel the need to retaliate.
I had to read this article a couple of times in an attempt to figure out whether the author is completely serious or if this editorial is at least partially tongue-in-cheek. He includes an entire paragraph full of hypothetical questions about robots in the workplace. Can robots be racist? Can they be sexually harassed? Flipping that one 180 degrees, can robots sexually harass humans? Sherman goes so far as to wonder whether robots should be able to join unions and file workplace grievances.
This clearly isn’t something he dreamed up on his own. It turns out that the EU Parliament is already considering a resolution defining the legal status of “electronic persons.” (Yeesh)
Isn’t Sherman actually missing the larger picture here? It’s all well and good to sit and ponder whether a new AI entity is actually “a person” deserving of some level of compassion. But the bigger issue is what happens when the robots figure out that humans are the actual cause of most of the problems on the planet and decide to take us out. As I’ve said here countless times, when the robot revolution arrives, it will come bursting out of the doors of the laboratories of Boston Dynamics.
And that guy who is always kicking the robots to try to knock them over and hitting them with a broomstick will be the first against the wall when the war begins.