On March 19, our Bryan Chaffin published an article about how a “Fatal Crash Leads Uber to Halt Autonomous Car Tests.” The gist of the story is that an Uber test car, in autonomous mode, struck and killed a pedestrian. (A human supervisor was in the car.)
That’s what people will remember. What most people might gloss over is that the pedestrian was not in a crosswalk and that the accident occurred at night. We might also surmise that someone crossing the street at night, not at an intersection, would take even a human driver by surprise.
But what I want to focus on here is the Washington Post’s reporting that “supporters [of autonomous car technology] say fatalities are an inevitable part of the learning process.” I question that view.
The Deep, Emotional Issues
There has been much written about the technology of autonomous vehicles (AVs). We write about it here because of Apple’s involvement, even though the full scope of that effort, Project Titan, seems to have shifted, realigned, and fallen into obscurity lately. Obsessed as we all are with the prospect that technology can someday provide a fully autonomous car, many difficult issues remain.
First, notable accidents in which cars in autonomous mode, whether in the test phase or the production phase, kill a human being could well create public resentment and pushback against the technology. That could slow sales to the point where the technology becomes unviable.
Second, the car makers would have us believe that, for the sake of technical advancement, there are going to be some fatalities. A contrasting point of view, one which I favor, is that this is one of those famous “Low Probability/High Consequence” events that have been studied of late. These are events like major oil spills, bridge and dike collapses, nuclear reactor runaway chain reactions, or banking collapses, for which even a very small, even minuscule, probability of failure is unacceptable.
To be sure, humans die in indirect, human-caused ways. Mechanical failures. Bridge collapses. Drunk or texting drivers. But the specter of a semi-intelligent, robotic entity killing a human carries with it such emotional baggage that the stakes change.
I remember an article from Scientific American, published long ago and which I can no longer find, that described events of such catastrophic magnitude that the engineering must push the probability of failure to, essentially, zero. In other words, it’s not an option to trade human lives against engineering compromises. The results of Chernobyl and Fukushima show what happens when the engineering design doesn’t achieve that near-zero probability of failure.
Finally, it’s been estimated by the developers of autonomous cars that this technology will save lives in the long run. I have no doubt that it will. But because these companies are in the business of selling a product, I have the feeling that as tragic test and production accidents mount, the corporate estimates of the eventual lives saved will start to be doctored and improved. Just watch.
It’ll almost be like, if I can paraphrase, “the higher our estimates of lives saved in the long run, the more palatable will be the loss of life in the early stages of the technology.”
I suggest that isn’t going to work with people who believe that contrails in the sky from the condensed jet engine exhaust of airliners are a government conspiracy to cool the planet.
Getting it Right
In my opinion, the public threshold for accidental death tolerance is a lot lower than the developers of autonomous cars would like to believe. For example, we thought going to Mars in the 20th century was a doable proposition. The more we learned, the smarter we got about the challenges and the required technology. Today, we’re thinking about robots going first, building habitats, finding water, and providing assistance to humans when they arrive. New techniques for shielding against solar flares in transit and on the Martian surface are in the works.
Today, we laugh at the brute force techniques proposed in the last century, and we ponder how perilous it would have been for astronauts had we blindly pushed ahead back then.
Similarly, autonomous cars are in a very early phase. The technology seems so very cool, and it’s amazing what cars can do in 2018. But early adopters are going to press their cars into service in ways that are illegal, ill-advised, or just plain unanticipated. More humans could die, both inside and outside these AVs.
If there’s enough public outcry, resentment, and even fake science thrown about, the makers of these systems will either have to go back and rethink the engineering to achieve essentially zero failure events, or they’ll have to change their market projections for how the technology is going to be delivered to the public. As a result, the technology may not evolve as smoothly as hoped. Or be as inexpensive as hoped.
I don’t think a bit of sensor and AI tweaking, finger-crossing, and self-deception about how the technology will eventually save lives is going to wash with an emotional public that’s on edge and hypersensitive to how technology is persistently failing them in very big and catastrophic ways.