Our very attempts to stave off disaster make unpredictable outcomes more likely

When Germanwings flight 9525 flew directly into the side of a mountain in the French Alps, investigators discovered that one cause was the safety system itself, put in place in aircraft after the 9/11 attacks. The Germanwings captain, leaving the cockpit for the bathroom, was locked out by the co-pilot, Andreas Lubitz, who then set the autopilot to descend into the mountain, killing all 144 passengers and six crew on board. Like the Boeing 737 Max tragedy, and perhaps even the Notre Dame fire, the accident seems predictable in hindsight. It also shows the sad wisdom of Perrow’s decades-old warning. On flight 9525, the cockpit door was reinforced with steel rods to prevent a terrorist break-in, which also made it impossible for the captain to break in. When Lubitz failed to respond to the distraught captain’s pleas to open the door, the captain tried to use his door code to re-enter. Unfortunately, the code could be overridden from the cockpit (presumably as a further defense against forced entry), which is precisely what happened. Lubitz, suicidal as we now know, remained alone in the cockpit for the rest of the tragic flight. It’s tempting to call this a case of human will (and it was), but the system put in place to thwart pernicious human will is what enabled it.

The increasing complexity of modern human-machine systems means that, depressingly, when unforeseen failures do occur, they tend to be large-scale and catastrophic. The collapse of the real estate market in 2008 could not have happened without derivatives, instruments designed not to amplify financial risk but to help traders control it. Boeing would never have mounted the 737 Max’s engines where it did but for anti-stall software that promised to make the design “safe.”
