John covered this story last night, but the death of a pedestrian struck by a self-driving Uber vehicle in Arizona should have ramifications for the entire idea of autonomous vehicles. (Something I’ve been concerned about for a couple of years now.) After covering the initial reports from the accident, John concluded with the following observations and questions.

There will be an investigation of this accident as well, but my first thought is to wonder why the human ‘backup driver’ didn’t stop the car and prevent this. Reliable self-driving cars and trucks may still be a couple years away but it’s worth pointing out that human drivers are responsible for tens of thousands of fatal accidents on the roads every year. In 2016, there were an estimated 40,200 fatal crashes. Ultimately, the question is whether the record of driverless cars turns out to be better or worse than the humans who would otherwise be at the wheel.

Before addressing those points, it’s worth noting that new information has been provided by authorities investigating the accident. While it will take a while to sort this all out, initial findings indicate that the car probably wasn’t at fault in this case and the test monitor probably wouldn’t have been able to prevent the accident even if they’d been in complete control. (Fortune)

“The driver said it was like a flash, the person walked out in front of them,” Moir said. “His first alert to the collision was the sound of the collision.”

According to the Chronicle, the preliminary investigation found the Uber car was driving at 38 mph in a 35 mph zone and did not attempt to brake. Herzberg is said to have abruptly walked from a center median into a lane with traffic. Police believe she may have been homeless.

Since there’s dashcam video of the entire incident, the police should be able to sort this out without too many questions going unanswered. Going by their description, the possibly homeless and confused woman was pushing a bicycle along the center median when she suddenly veered into traffic directly in front of the Uber vehicle, which was traveling at nearly 40 mph. Assuming the next lane was blocked by another vehicle, the car’s only option would have been to try driving up onto the median. (It looks like it would have been physically impossible to stop the vehicle in that short a distance.) But the car’s programming clearly wasn’t anticipating a person darting out in front of it, and a human being likely couldn’t have cranked the wheel over in a split second to avoid her either.

So Uber is off the hook and testing of autonomous vehicles can resume presently, right? I honestly hope not. The woman’s death is a tragedy, but this accident should also give us pause to ask whether any autonomous system will ever be able to replace a human being for such tasks. The woman appears to have done something completely unexpected which the navigation software had no reason to anticipate, but the fact is that irrational, unexpected things happen in the real world all the time. And it’s in those razor-thin moments of doubt that a human being will always best a machine.

NASA regularly argues that manned space exploration will always be superior to drones and robots because human beings are more adaptable. We simply see the complexity of the world around us in a way that no set of logical rules coded into even the most complex software will ever match. Humans are also able to imagine things in a way that computers can’t, including the most unexpected. Take the idea of color, for example. A computer can analyze a video image and assign a value to a given color. But there’s a limit to the number of colors it can recognize, and it has to force the object into one of those pigeonholes, even if they number in the thousands. In reality, there are an infinite number of colors, with each subtle shift in light frequency blending from one to the next. A system built on ones and zeros will never grasp that.

Returning to the auto accident scenario, the car was unable to anticipate a possibly homeless and confused woman suddenly lurching out in front of it. The car may have identified her as a pedestrian, but that’s not what pedestrians are “supposed to do.” A human driver, under other circumstances, might have noticed things about her, such as disheveled clothes or an unsteady rhythm to her gait. Seeing that, a human could have slowed down in advance, wondering if she was about to do something unpredictable. Do you honestly believe an autonomous car is going to be capable of that sort of thought process? And none of this even begins to address the potential problems of hacking and terrorism.

Cars need drivers for precisely this reason. John was right to point out that we’re far from perfect and that humans cause many, many accidents each year. With that in mind, some technology could likely improve our record. Perhaps collision-detection systems, which can slam on the brakes when they detect an object we’re about to strike, could be added to most vehicles. Sensors that notice a sleepy driver nodding off and sound an alarm to fully wake them might save many lives. But we should still keep a human being at the wheel as the primary operator. Autonomous driving software isn’t going to match the human mind.