
On March 19, our Bryan Chaffin published “Fatal Crash Leads Uber to Halt Autonomous Car Tests.” The gist of the story is that an Uber test car, in autonomous mode, struck and killed a pedestrian. (A human supervisor was in the car.)

That’s what people will remember. What most people might gloss over is that the pedestrian was not in a crosswalk and that the accident occurred at night. We might also surmise that someone crossing the street at night, away from an intersection, would take even a human driver by surprise.

But what I want to focus on here is the reporting in the Washington Post that “supporters [of autonomous car technology] say fatalities are an inevitable part of the learning process.” I question that view.

The Deep, Emotional Issues

There has been much written about the technology of autonomous vehicles (AVs). We write about it here because of Apple’s involvement, even though the full scope of that effort, Project Titan, seems to have shifted, realigned, and fallen into obscurity lately. Obsessed as we all are with the prospect that technology can someday provide a fully autonomous car, many difficult issues remain.

First, notable accidents in which cars in autonomous mode, whether in testing or in production, kill a human being could well create public resentment and pushback against the technology. That could slow sales to the point where the technology becomes unviable.

Second, the car makers would have us believe that, for the sake of technical advancement, there are going to be some fatalities. A contrasting point of view, one which I favor, is that this is one of those famous “Low Probability/High Consequence” events that have been studied of late. These are events like major oil spills, bridge and dike collapses, runaway nuclear chain reactions, or banking collapses, for which even a very small, even minuscule, probability of failure is unacceptable.

To be sure, humans die in indirect, human-caused ways: mechanical failures, bridge collapses, drunk or texting drivers. But the specter of a semi-intelligent, robotic entity killing a human carries with it such emotional baggage that the stakes change.

I remember an article from Scientific American a long time ago, which I can no longer find, which described events of such catastrophic magnitude that the engineering must push the probability of failure to, essentially, zero. In other words, it’s not an option to trade human lives against engineering compromises. The results of Chernobyl and Fukushima show what happens when the engineering design doesn’t achieve the near zero probability of failure.

Finally, it’s been estimated by the developers of autonomous cars that this technology will save lives in the long run. I have no doubt that it will. But because these companies are in the business of selling a product, I have the feeling that as tragic test and production accidents mount, the corporate estimates of the eventual lives saved will start to be doctored and improved. Just watch.

It’ll almost be like, if I can paraphrase, “the higher our estimates of lives saved in the long run, the more palatable will be the loss of life in the early stages of the technology.”

I suggest that isn’t going to work with people who believe that contrails in the sky from the condensed jet engine exhaust of airliners are a government conspiracy to cool the planet.

Getting it Right

In my opinion, the public threshold for accidental death tolerance is a lot lower than the developers of autonomous cars would like to believe. For example, we thought going to Mars in the 20th century was a doable proposition. The more we learned, the smarter we got about the challenges and the required technology. Today, we’re thinking about robots going first, building habitats, finding water, and providing assistance to humans when they arrive. New techniques for shielding against solar flares in transit and on the Martian surface are in the works.

Today, we laugh at the brute force techniques proposed in the last century, and we ponder how perilous it would have been for astronauts had we blindly pushed ahead back then.

Similarly, autonomous cars are in a very early phase. The technology seems so very cool, and it’s amazing what cars can do in 2018. But early adopters are going to press their cars into service in ways that are illegal, ill-advised, or just plain unanticipated. More humans could die, both inside and outside these AVs.

If there’s enough public outcry, resentment, and even fake-science thrown about, the makers of these systems will either have to go back and rethink the engineering to achieve basically zero failure events or they’ll have to change their market projections for how the technology is going to be delivered to the public. As a result, the technology may not evolve as smoothly as hoped. Or be as inexpensive as hoped.

I don’t think a bit of sensor and AI tweaking, finger-crossing, and self-deception about how the technology will eventually save lives is going to wash with an emotional public that’s on edge and hypersensitive to how technology is persistently failing them in very big and catastrophic ways.


Hello John: There are several related but distinct issues that you’ve raised in your article, of which three have particular relevance to, and will influence the public acceptance of, driverless cars, or for that matter, any emerging technology. Tl;dr: Uber did the right thing. First, multiple studies have shown that people are poor at risk assessment. There are multiple factors that contribute to our species’ poverty at risk assessment that are nicely summarised and referenced here, but among those factors is that we are poor at baseline assessments (what are the background rates of even commonplace events), our capacity…


Saw the video this morning. I faced that exact type of situation several times while living in Kissimmee, FL. After the first near miss I adjusted my driving by slowing down and expecting every shadowy figure to be a pedestrian or cyclist crossing into the lane ahead. How did that driverless system fail to “see” that pedestrian? Did it have active radar/infrared or some night vision capability? If not, then all such vehicles should be banned from public roadways. If yes, then whatever system they are using has not been developed and tested to satisfy readiness for…


The poor woman has the notable distinction of being the first third-party fatality resulting from beta testing. I predict lawyers will soon be vigorously involved.

Lee Dronick

There is video of the Uber car hitting the pedestrian. Two cameras, one looking forward and one watching the driver.


Saw the vids. Driver was inattentive, car didn’t detect the woman, the woman was walking. Not good for advocates of self-driving cars.


Make that the back-up human driver was inattentive.


With so many comments I was reluctant to post, but here it is: Cruise Control. How many people use cruise control in town? Why can’t the initial autonomous cars be used only on freeways and interstates, not in town? Surely there’s a way to turn the capability on and off. And keep steering wheels and controls for manual override – we still have control sticks and pedals on planes with autopilot. When does convenience turn into a denial of accountability and responsibility? As we’re often reminded, driving is a privilege, not a right. How many people envision themselves watching movies…

Lee Dronick

How many people use cruise control in town?

I have never used it on service streets, but occasionally when traffic is light on the crosstown freeways. Even on an interstate out in the boondocks, when you have cruise control enabled you often come up on one big rig doing 60 and one passing it doing 60.00000001. So then you need to kick it out of cruise control and just wait it out until you can enable it again 10 miles later.

Roger Wilson

Contrary to headlines and fear-mongers, we are, generally speaking, safer than at any time in history, and as far as vehicular expectations, safer per billion miles than at any time in automotive history despite the available distractions of modern life. Cars are better than at any time, in terms of driver and passenger safety and built-in automotive safety features. Mary should stop worrying. Of course, it had to be Uber, the most careless and inept corporation on the planet, that had the fatality with a human aboard who was supposed to be paying attention to prevent such occurrences. The autonomy…


I’ve expressed my opposition to auto-car testing on public streets in other threads. I do expect and hope that these vehicles will become the norm. It just seems foolish that this new technology is being tested on public roadways. I have spent years as a professional commercial driver. There is just a multitude of things on the ground that drivers encounter that must be worked out in these driverless programs. Killing people on public roads isn’t the correct way to do this. Did they test the first rockets with humans onboard or launch them in areas that put the…

John Kheit

I’ll make a bet right now: this in no way slows down development and release of autonomous cars. The notion that you will have zero accidents during development is beyond myopic, and all you see here is an over-bs press trying to hype bs up for clicks. Clickwhoring at its finest. The ultimate question will be how many accidents per mile do these vehicles achieve? If it’s lower than humans, and it will be if not already (Tesla’s cars are claimed to have half the rate per million miles of human drivers), then it’s coming. This article only highlights the bs press and meaningless…
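The “accidents per mile” question above is, at bottom, a normalization exercise: divide incidents by miles driven and scale both fleets to a common denominator. A minimal Python sketch of that arithmetic follows; every number in it is a hypothetical placeholder, not a measured figure for Uber, Tesla, or human drivers.

```python
# Sketch of the per-mile comparison described above.
# All inputs are hypothetical placeholders, NOT real-world statistics.

def fatalities_per_100m_miles(fatalities: float, miles: float) -> float:
    """Normalize a raw fatality count to a per-100-million-miles rate."""
    return fatalities / miles * 100_000_000

# Hypothetical inputs: a large human-driven fleet vs. a small AV test fleet.
human_rate = fatalities_per_100m_miles(fatalities=37_000, miles=3.2e12)  # ~1.16
av_rate = fatalities_per_100m_miles(fatalities=1, miles=10_000_000)      # 10.0

print(f"human drivers: {human_rate:.2f} fatalities per 100M miles")
print(f"AV test fleet: {av_rate:.2f} fatalities per 100M miles")
```

Note the asymmetry this exposes: with so few total AV miles driven, a single fatality dominates the AV rate, so early per-mile comparisons carry enormous uncertainty either way.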


I suspect what is actually happening is that on-road testing is far cheaper than building a realistic closed test course, and so Uber, Waymo, and everyone else are going down that route BECAUSE GOVERNMENT IS LETTING THEM. What should have happened is that the government, with input from Uber et al., formulated standards for thorough closed-course testing and set milestones that prototype vehicles should reach before they are allowed to incrementally move into real-world testing. As things are, it’s like the Wild West out there. Whatever happened to getting ethical, informed consent when you conduct a potentially harmful…


I hope you’re right.


FYI: a preliminary statement by the Tempe police department seems to indicate that the accident was unavoidable and that the Uber vehicle was not “at fault”. It is important to understand whether or not the accident was due to a vehicle error. If the technology is not yet reliable enough for testing on public roads, changes must be made. However, if the vehicle made no error but its response was simply limited by the laws of physics, that is important to know as well. Sadly, there will absolutely be situations in which even a perfectly driven autonomous vehicle will be…

Old UNIX Guy

Let me begin by stating that I once hit a pedestrian with my car … a number of years ago a young woman jogger decided to try to cross 5 lanes of traffic nowhere near a crosswalk. I was in bright sunshine, she was coming from the shady side of the street. I never saw her until the split second before impact. Fortunately, not only was she not killed, her only injury was a badly cut elbow from where it hit (and busted) my windshield. And because I was not at fault, I was not even ticketed – for…


Watch the video recorded by the car itself. The woman was walking her bike across the road. Certainly doesn’t look like she suddenly darted into the path of the car like a toddler chasing after a ball. It does look like the car’s sensors did not detect her, as you claim a proper radar system would, and naturally, it did not take any evasive action. And the back-up human driver was not watching the road until it was too late. I’d say the videos are very damaging to the cause of self-driving cars. It highlights two weaknesses: the sensors and the…


I have a car with pushbutton start and “intelligent keys”. The number of lines of code required to control that process is slight compared to that required for autonomous driving. Yet at one point the car refused to start for 24 hours. Often it thinks the key is in the car when it is clearly outside the car and unlocks the door after I specifically locked it. The amount of times the car randomly beeps at me for no apparent reason (related to this system) is frustratingly high. They can’t even make this fool-proof. I have a friend who has…


Out of curiosity, what kind of car? We have the same thing with our 2014 Toyota Prius and it’s been flawless. In fact, it’s warned us several times when we accidentally left the key in the car as we got out.


2011 Nissan Altima. It’s annoying but livable, as it is a one-time thing. Very nice car aside from that. Toyota and Subaru cross-share some technology. You should hear the parking lot at our local Target. Lots of identical beeps.


Good points, PSMacintosh. I bought what to me seems a semi-autonomous car a year ago. Amongst other things, it has the ability to recognise and follow speed limits. Rubbish! It is, indeed, fitted with the technology to do this, but it simply does not work, because the quality of the signing where I live in Spain is itself rubbish. I live in an “urbanisation” where the limit is said to be 40 km/h (there used to be one sign with this limit indicated, but it has disappeared). The car therefore has no limit when I drive within the urbanisation (in…




A. What do we (or the authorities/cities allowing real world testing) really know about the algorithms that are built into these driverless systems? I suspect that these governmental “approval” decisions are being made by “weak thinking” politicians and based on poor, insufficient information about what is really in the software in the first place. What do we know about what testing conditions have been thrown at these cars before they have hit our streets? Did a deer jump out of the woods in front of the car at night in testing? Did the car swerve? Which way? Did a child…


All good questions. Questions that have been pondered by those working on self-driving vehicles, and others. Many of them have no “right” answer, making it difficult to decide what the algorithm should do. I would suggest you read the Wikipedia article on the history of autonomous cars. It is something that has been investigated since the 1920s. DARPA held contests for autonomous vehicles in 2004, 2005, and 2007. There have been millions of miles driven by self-driving cars, both on the track and on the street, under various conditions and with many different unexpected challenges. Uber…


I will tell you what will happen if ever self-driving cars are adopted en masse: The roadways will be regimented and regulated to the point that eventually human-driven cars will be banned whether by fiat or de facto. What will happen is that governments, in the name of ‘safety’, will impose regulations designed to make roads more predictable and uniform and thus make them simpler computational problems for driverless cars. First we will get restricted lanes for driver-less cars only. Then whole areas (i.e. destinations) will be banned for human driven cars. And in the end, human-driven cars will be…


From the get-go of hearing about computer-driven (driverless) cars, I was initially struck by a sense that there’s no way this is going to be allowed (be safe enough) any time soon. And then there they were: driverless cars were being produced and tested. Wow! The HOPE of it… the WANT of it… made me think (for a moment) that it is possible for them to be safe and soon forthcoming. But, let’s face it. I’ve been ignoring my entire lifetime of experience with computers. They’re fallible as hell! I’ve got a brilliant iMac computer sitting on my desk right now…


There’s a huge difference between a desktop system that has to do a hundred things adequately, often at the same time, and a dedicated system designed to do one thing nearly perfectly. Automated systems are all around us, and for the most part we don’t even notice them. Commercial aircraft fly on a computerized autopilot most of the time. Same with large ships; there isn’t a sailor standing at the wheel 24/7. Automated manufacturing systems grab material, select tools, and follow a machining path, all day. Environmental controls run for months, years at a time without attention. For that matter…

Lee Dronick

They didn’t abandon jet aircraft engines because the deHavilland Comet crashed. They learned from it and moved on.

Wasn’t it airframe problems, not engines, that caused the crashes?


True. They were pushing a lot of new technologies with the Comet: jets, pressurization, and other new techniques. But for the general public it was the First Jet Airliner That Kept Crashing.

Fun fact: I always thought they dropped the Comet after the crashes. It turns out that they learned from the mistakes, refined the design multiple times, and the Comets were a fairly successful line. The Comet 4 was even repurposed as the Nimrod patrol aircraft, which was used until 2011.


You make it sound that a car driving down the road at a fast pace on a constantly changing landscape rife with unpredictable, sometimes irrational, humans moving about is not a complex computational setting. Just the fact that desktop computers pre-dated the first self-driving cars by about 45 years of hardware and software development belies your implication.


Who pays? That’s what the public will want to know. If it had been a human driver killing the pedestrian, there would probably be a trial. The driver might or might not go to jail. Also, someone might have to pay considerable damages to the family. This last point is important because the developers of autonomous vehicles have deep pockets. Who will pay and how much? If the deep pockets have insulated themselves financially, there will be deep resentment. If they have not, it could be financially ruinous. On a technical note: was it really “night time” for the autonomous…

Lee Dronick

Pedestrians will be required to wear IFF transponders and be pinged by driverless cars 😀

Who pays? Great question. This whole industry will hinge on insurance rates. No one is going to buy an autonomous car if the insurance rates are way too high. That’s what’s going to kill this.


Sadly we live in a society that is so risk-averse that much technological advancement has moved overseas. Imagine if the airplane had been required to have a zero chance of failure. Aviation would have been abandoned in 1908. Or the automobile; development would have stopped in 1896. Steam train? It would have fallen off the rails in 1830. The list goes on and on. As you said, we could have sent people to Mars in the last century. Yes, it would have been risky. Yes, the astronauts could have been killed or at least had long-lasting effects from the trip…


Thank you for your kind words. You make a very good point: the astronauts did go in understanding the risks and voluntarily took them. And yes, Mary and her kids assume the car they get into is safe, but that’s the problem. The illusion of safety. None of us is safe. Not when we’re behind the wheel, not when we’re sitting in a bus, not riding in an airplane, not sitting at home. We have this illusion that technology can make us absolutely, totally, unequivocally safe. Safe from road hazards, safe from machine faults, safe from accidents of nature, safe…


Autonomous cars will definitely kill people, and newer and better models will keep coming.

The upside is that the car manufacturers, with much deeper pockets, should be able to be sued for financial relief.