Autonomous Cars That Kill People Will Kill The Technology. For Good

Editorial


On March 19, our Bryan Chaffin published an article, “Fatal Crash Leads Uber to Halt Autonomous Car Tests.” The gist of the story is that an Uber test car, in autonomous mode, struck and killed a pedestrian. (A human supervisor was in the car.)

That’s what people will remember. What most people might gloss over is that the pedestrian was not in a crosswalk and that the accident occurred at night. We might also surmise that someone crossing the street at night, away from an intersection, would take even a human driver by surprise.

But what I want to focus on here is the Washington Post’s report that “supporters [of autonomous car technology] say fatalities are an inevitable part of the learning process.” I question that view.

The Deep, Emotional Issues

There has been much written about the technology of autonomous vehicles (AVs). We write about it here because of Apple’s involvement, even though the full scope of that effort, Project Titan, seems to have shifted, realigned, and fallen into obscurity lately. However obsessed we all are with the prospect that technology can someday provide a fully autonomous car, many difficult issues remain.

Notable accidents in which cars in autonomous mode, whether in the test phase or the production phase, kill a human being could well create public resentment and pushback against the technology. That could slow sales to the point where the technology becomes unviable.

Second, the car makers would have us believe that, for the sake of technical advancement, there are going to be some fatalities. A contrasting point of view, one which I favor, is that this is one of those famous “Low Probability/High Consequence” events that have been studied of late. These are events like major oil spills, bridge and dike collapses, nuclear reactor runaway chain reactions, or banking collapses, for which even a very small, even minuscule, probability of failure is unacceptable.
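
To see why “minuscule” isn’t good enough at fleet scale, here is a back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption (rough US annual vehicle miles and an approximate human fatality rate per mile), not data from the Uber case:

```python
# Illustrative arithmetic only: a tiny per-mile failure probability
# still scales into many deaths when multiplied by fleet-scale mileage.

ANNUAL_MILES = 3.2e12   # assumed: rough US annual vehicle miles
HUMAN_RATE = 1.2e-8     # assumed: ~1.2 deaths per 100 million miles

def expected_deaths(per_mile_rate: float, miles: float) -> float:
    """Expected fatalities from a per-mile failure rate over many miles."""
    return per_mile_rate * miles

# Even a fleet 100x safer than human drivers implies hundreds of deaths.
for factor in (1, 10, 100, 1000):
    deaths = expected_deaths(HUMAN_RATE / factor, ANNUAL_MILES)
    print(f"{factor:>4}x safer than humans: ~{deaths:,.0f} deaths/year")
```

The structural problem is that “near zero” has to mean orders of magnitude below the human baseline before the headlines stop.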

To be sure, humans die in indirect, human-caused ways: mechanical failures, bridge collapses, drunk or texting drivers. But the specter of a semi-intelligent, robotic entity killing a human carries such emotional baggage that the stakes change.

I remember an article in Scientific American from long ago, one I can no longer find, that described events of such catastrophic magnitude that the engineering must push the probability of failure to, essentially, zero. In other words, it’s not an option to trade human lives against engineering compromises. The results at Chernobyl and Fukushima show what happens when the engineering design doesn’t achieve that near-zero probability of failure.

Finally, the developers of autonomous cars estimate that this technology will save lives in the long run. I have no doubt that it will. But because these companies are in the business of selling a product, I have the feeling that as tragic test and production accidents mount, the corporate estimates of the eventual lives saved will start to be doctored and improved. Just watch.

It’ll almost be like, if I can paraphrase, “the higher our estimates of lives saved in the long run, the more palatable will be the loss of life in the early stages of the technology.”

I suggest that isn’t going to work with people who believe that contrails in the sky from the condensed jet engine exhaust of airliners are a government conspiracy to cool the planet.

Getting it Right

In my opinion, the public threshold for accidental death tolerance is a lot lower than the developers of autonomous cars would like to believe. For example, we thought going to Mars in the 20th century was a doable proposition. The more we learned, the smarter we got about the challenges and the required technology. Today, we’re thinking about robots going first, building habitats, finding water, and providing assistance to humans when they arrive. New techniques for shielding against solar flares in transit and on the Martian surface are in the works.

Today, we laugh at the brute force techniques proposed in the last century, and we ponder how perilous it would have been for astronauts had we blindly pushed ahead back then.

Similarly, autonomous cars are in a very early phase. The technology seems so very cool, and it’s amazing what cars can do in 2018. But the early adopters are going to press their cars into service in ways that are either illegal, ill-advised, or just plain unanticipated. More humans could die, both inside and outside these AVs.

If there’s enough public outcry, resentment, and even fake-science thrown about, the makers of these systems will either have to go back and rethink the engineering to achieve basically zero failure events or they’ll have to change their market projections for how the technology is going to be delivered to the public. As a result, the technology may not evolve as smoothly as hoped. Or be as inexpensive as hoped.

I don’t think a bit of sensor and AI tweaking, finger-crossing, and self-deception about how the technology will eventually save lives is going to wash with an emotional public that’s on edge and hypersensitive to how technology is persistently failing them in very big and catastrophic ways.

36 Comments

  1. JustCause

    Autonomous cars will definitely kill people, and newer and better models will keep coming.

    The upside is that the car manufacturers, who have much deeper pockets, should be able to be sued for financial relief.

  2. geoduck

    Sadly we live in a society that is so risk-averse that much technological advancement has moved overseas. Imagine if the airplane had been required to have a zero chance of failure. Aviation would have been abandoned in 1908. Or the automobile: development would have stopped in 1896. Steam train? It would have fallen off the rails in 1830. The list goes on and on. As you said, we could have sent people to Mars in the last century. Yes, it would have been risky. Yes, the astronauts could have been killed, or at least suffered long-lasting effects from the trip. But it could have been done. Today, however, it is unlikely that we would have the courage, either individually or as a country, to send people as far as the moon, let alone Mars. Heck, I doubt that something akin to the Lewis and Clark Expedition could get support. Oh no, now we’d better send robots to prepare a shelter and turn down the bed before anyone goes. Buzz Aldrin slept curled up on the ascent engine cover in the LEM. After Gene Cernan’s near disaster staging the LEM on Apollo 10, he immediately wanted to go back. That’s why those guys are my heroes. They had courage.

    When the Challenger exploded in 1986 I was struck by the tragedy. I was, however, even more disturbed by the response of both adults and many of the kids in the classes I taught. I heard over and over that if there was any chance of someone getting killed, maybe we just shouldn’t send people into space. I wondered what happened to the Home of the Brave. What happened to the country that continued the Apollo Program despite the 204 fire? I kept wondering what happened to the country that spawned Lindbergh and Armstrong, Earhart and Glenn. There now does not seem to be any tolerance for risk. No balancing of the cost vs. possible reward. No one is willing to take a shot and see if they can do it. And if someone does try something and fails, they are not encouraged to give it another shot. They are crucified in the press, especially if there was any hint of public support for the venture. A crippling loss of nerve has gripped the West, and it will soon be eclipsed by others who ARE willing to take the risk.

    The next people to walk on the moon will not speak English. The same is almost certainly true for the first people to walk on Mars.

    • John Martellaro

      geoduck: I know you are a regular reader here, and you make important contributions. I have never failed to respect your opinions.

      In this case, I must quibble.

      You cite the most adventurous efforts of humankind: first flight (at their own risk), Apollo, Challenger, and Mars. These noble efforts are undertaken by test pilots or other trained people who are aware of the risks. They assume those risks for the benefit of mankind’s most important endeavors.

      But when young Mary gets into her autonomous car with her two kids, she expects to be safe. She’s not asked to shoulder risk for all of mankind’s dreams. She expects to be safer than sitting on 5 million pounds of LOX and kerosene. Our standards are higher in her case.

      I know that car companies make cost versus safety trade-offs all the time. Remember the infamous Ford Pinto gas tank affair? Or the GM ignition key affair? But at least we think we’re in control. Turning our very lives and those of our family over to a smart machine that can have software bugs is a scary proposition. If the developers don’t earn sufficient trust, it’ll all go downhill.

      That’s my opinion.

      • geoduck

        Thank you for your kind words.

        You make a very good point: the astronauts did go in understanding the risks and voluntarily took them. And yes, Mary and her kids assume the car they get into is safe, but that’s the problem: the illusion of safety. None of us is safe. Not when we’re behind the wheel, not when we’re sitting in a bus, not riding in an airplane, not sitting at home. We have this illusion that technology can make us absolutely, totally, unequivocally safe. Safe from road hazards, safe from machine faults, safe from accidents of nature, safe from a nut-job with a gun. But it isn’t true. Everything is a risk. A calculated risk. You have a measurable risk of having a heart attack just by getting up out of bed. Unfortunately, in the West we have lost our understanding of this. People want things to be totally safe when such a thing cannot exist. We have lost our nerve. Other countries haven’t, and they will benefit from what the West is afraid of. You may very well be right, and self-driving cars may go away here. But the technology will continue in China, in India, and elsewhere, because they understand cost-benefit analysis.

  3. macjeffff

    Who pays? That’s what the public will want to know.
    If it had been a human driver killing the pedestrian, there would probably be a trial. The driver might or might not go to jail. Also, someone might have to pay considerable damages to the family. This last point is important because the developers of autonomous vehicles have deep pockets. Who will pay and how much? If the deep pockets have insulated themselves financially, there will be deep resentment. If they have not, it could be financially ruinous.
    On a technical note: was it really “night time” for the autonomous vehicle? Is it harder for the technology to “see” the pedestrian at night? Is the vehicle only alerted to formal crosswalks? Unlikely.

    • JBSlough

      Who pays? Great question. This whole industry will hinge on insurance rates. No one is going to buy an autonomous car if the insurance rates are way too high. That’s what’s going to kill this.

  4. PSMacintosh

    From the get-go of hearing about computer-driven (driverless) cars, I was struck by a sense that there was no way this was going to be allowed (be safe enough) any time soon.

    And then there they were: driverless cars were being produced and tested.
    Wow! The HOPE of it… the WANT of it… made me think (for a moment) that it was possible for them to be safe and soon forthcoming.

    But, let’s face it. I’ve been ignoring my entire lifetime of experience with computers. They’re fallible as hell!

    I’ve got a brilliant iMac computer sitting on my desk right now (and NOT driving a moving car) that has software, and sometimes even hardware, issues on a fairly regular basis. (Mostly it’s software issues… or unknown-cause issues.)
    As much as I think highly of the reliability of my desktop computer (because it’s up most of the time), I’m constantly bumping into the problems that it and its software have.
    I would NEVER trust it with my life… to be driving me around in a car for a year. I would know, for certain, that it’s going to have a hiccup of some sort. Do I want to be in a car with it driving 60 miles an hour in the vicinity of concrete walls and metal poles?

    Until Apple (or anyone else) can give me a desktop computer and matching software that works without a hitch 99% of the time for a solid year, I’m not going to believe/trust that a computer can safely drive a car… unless I’m completely drunk on the “Kool-Aid” of being a true believer.

    I was letting myself be a bit drunk, up until this incident. Now I will be more realistic.
    As much as I want it, driverless cars shouldn’t be on the roadways for another 30 years (of intensive, controlled-environment studies).

    • geoduck

      There’s a huge difference between a desktop system that has to do a hundred things adequately, often at the same time, and a dedicated system designed to do one thing nearly perfectly. Automatic, automated systems are all around us, and for the most part we don’t even notice them. Commercial aircraft fly on a computerized autopilot most of the time. Same with large ships; there isn’t a sailor standing at the wheel 24/7. Automated manufacturing systems grab material, select tools, and follow a machining path, all day. Environmental controls run for months, even years, at a time without attention. For that matter, medical monitors and sensors not only watch over us in the hospital, but can even administer drugs as needed.

      I’m willing to accept that for the moment self-driving cars might be a bad idea in urban areas. I have no problem with them on freeways, on highways, even on rural roads. Let’s start by allowing them in these environments. Let them improve, and then, when they get better, let them into more urban areas. But the idea of taking them off the road after one accident is absurd. This is the future of automotive technology, if we have the courage to embrace it.

      They didn’t abandon jet aircraft engines because the de Havilland Comet crashed. They learned from it and moved on.

      • Lee Dronick

        They didn’t abandon jet aircraft engines because the de Havilland Comet crashed. They learned from it and moved on.

        Wasn’t it airframe problems, not engines, that caused the crashes?

      • geoduck

        True. They were pushing a lot of new technologies with the Comet: jets, pressurization, and other new techniques. But for the general public it was the First Jet Airliner That Kept Crashing.

        Fun Fact: I always thought they dropped the Comet after the crashes. It turns out that they learned from the mistakes, refined the design multiple times, and it became a fairly successful line. The Comet 4 was even repurposed as the Nimrod patrol aircraft, which served until 2011.

      • aardman

        You make it sound as though a car driving down the road at a fast pace, through a constantly changing landscape rife with unpredictable, sometimes irrational humans, is not a complex computational setting. Just the fact that desktop computers predated the first self-driving cars by about 45 years of hardware and software development belies your implication.

  5. PSMacintosh

    A. What do we (or the authorities/cities allowing real-world testing) really know about the algorithms that are built into these driverless systems?
    I suspect that these governmental “approval” decisions are being made by “weak-thinking” politicians, based on poor, insufficient information about what is really in the software in the first place.

    What do we know about what testing conditions have been thrown at these cars before they have hit our streets?
    Did a deer jump out of the woods in front of the car at night in testing? Did the car swerve? Which way?
    Did a child run out into the street between parked cars? (Not that a human would have been able to avoid the child, but what does the computer tell the car to do?) What choices is the computer programmed to make in the case of two or more “bad” choices? If unable to stop in time to avoid the child, does the car run over the child? Does it crash itself (and occupant) into parked cars to the right? Does it swerve left into a head-on collision with a busload of school children? Does the computer prioritize the safety of the car occupants… or others?
    We (the general public) have no idea what is built into these programs.
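
    (Purely as an illustration of what one answer could look like, and emphatically not what any vendor actually ships: a planner can rank “bad” choices by minimum expected harm. Every option, probability, and weight below is invented.)

    ```python
    # Illustrative only: ranking "two or more bad choices" by expected harm.
    # The options, probabilities, and harm scores are invented for the example.

    candidate_maneuvers = {
        "brake_straight": {"p_collision": 0.9, "harm_if_hit": 3.0},
        "swerve_right":   {"p_collision": 0.4, "harm_if_hit": 5.0},  # parked cars
        "swerve_left":    {"p_collision": 0.2, "harm_if_hit": 9.0},  # oncoming lane
    }

    def expected_harm(option: dict) -> float:
        """Probability of a collision times the harm if it happens."""
        return option["p_collision"] * option["harm_if_hit"]

    best = min(candidate_maneuvers, key=lambda k: expected_harm(candidate_maneuvers[k]))
    print(best)  # -> "swerve_left" with these weights, which is exactly the point
    ```

    Whoever sets those weights is answering the moral question, and the public never sees them.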

    What will the computer do if a terrorist driver, say ISIS, has commandeered a tank (or a concrete truck or an armored truck) and is purposefully attacking cars and people?

    Can the car occupants override the computer system? When and how?

    What does the car do when a plane is making a forced (crash) landing on the freeway coming over the top of the car onto the road ahead?

    What does the car do when a person purposefully commits suicide by jumping off an overpass in front of the car?

    What about roundabouts? (OK, that one has probably been tackled. But how about the roundabout in Hemel Hempstead, England?)
    What about wrong way drivers?
    What about the decision of whether or not to go into the for-pay Toll Lane?

    I’m not being silly with these questions. I’d like to know how extensive the thought process and actual testing have been. I’m wondering if the “approval” committee has even thought about asking these questions.

    If these types of extensive testing conditions have been programmed into the software and thrown at these cars on the test tracks, then the word hasn’t gotten out to me. Frankly, I don’t believe the testing has gone anywhere near far enough on the test tracks to warrant these cars being out on the real roads.

    B. Then, today, I came across some really poorly designed roadway lanes in Pleasanton (for the first time) that were confusing and nonsensical to me. (Bad human designers had recently been involved in changing the existing lanes, for some reason that was unapparent to me as I drove by.) Furthermore, the signage was poorly placed and gave inadequate and untimely information. And this was in broad daylight with lots of visibility.
    Three lanes of traffic were being narrowed down (far ahead of any signage) into one central lane (if you wanted to proceed straight). It required NOT proceeding down the empty, open left or right lanes, but awkwardly turning into (diving into) the central lane. And it required consent and participation (allowance) from the other cars.
    I was wondering how a computer system would handle those “weird” lanes.

    And what about roads where the lane paint is unclear (worn out, not visible due to rain, or not marked at all, as in some South American cities), such that the “lanes” themselves are unclear or uncertain, or cars are driving in them in a haphazard fashion?

    I think a computer system might eventually be able to do even better than a human, when it can get lots of accurate information from all of the other traffic (cars and signals and external sensors) around it. But are any of the current systems communicating with other cars yet? And what are the trust and safety issues with trusting information coming from sensors external to the car’s own sensors?

    I know that I, as a human, am constantly trying to “read” the traffic ahead of me for information–looking through cars, watching for taillights ahead–and basing my driving decisions on this concrete or guesstimate information (whether this is right or not).

    The more I think about it, the more I suspect that not enough important questions have been asked and answered, and that we are being sacrificed as cheap guinea pigs without enough valid, prior test-track testing.

    Good luck to the next victim! (I hope it’s not a “loved one” of mine.)

    • geoduck

      All good questions, and questions that have been pondered by those working on self-driving vehicles, and by others. Many of them have no “right” answer, making it difficult to decide what the algorithm should do. I would suggest you read this Wikipedia article on the history of autonomous cars:
      https://en.wikipedia.org/wiki/History_of_autonomous_cars
      It is something that has been investigated since the 1920s. DARPA held contests for autonomous vehicles in 2004, 2005, and 2007. There have been millions of miles driven by self-driving cars, both on the track and on the street, under various conditions and with many different unexpected challenges. Uber is just the new guy in a well-established realm. While they cannot test every possible eventuality (what if a child’s ball bounces into the car’s path from an overpass above?), the logic has gotten better and better, and continues to improve.

      As for when the road becomes poorly marked, rain-obscured, or otherwise confusing, I completely understand this. Just a few months ago I came upon a construction site with a lane shift and new pavement with no striping. All I could see was a forest of orange cones and traffic coming at me in my lane. I took the only route I could see, and ended up in the construction site. They were obviously annoyed with me, as they had to stop and direct traffic so I could reenter the roadway. However, I take a lesson from aircraft autopilot systems. An aircraft can fly for hours on autopilot, but if it senses that something is wrong (it has lost critical data, or there is a critical failure of some kind), it alerts the pilot and hands control back. I believe Tesla’s (poorly named) Autopilot system does this. If conditions become too difficult it starts demanding that the driver take over. If the driver does not take the wheel, the car slows to a stop and puts the flashers on.
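
      (A minimal sketch of that handover pattern, with invented names and thresholds; it is not Tesla’s or any aircraft vendor’s actual logic:)

      ```python
      from enum import Enum, auto

      class Mode(Enum):
          AUTONOMOUS = auto()          # system confident, system drives
          HANDOVER_REQUESTED = auto()  # system degraded, alerting the human
          SAFE_STOP = auto()           # no response: slow down, flashers on

      class Supervisor:
          """Toy supervisory loop for the autopilot-style handover described above."""
          CONFIDENCE_FLOOR = 0.8     # assumed threshold, illustration only
          HANDOVER_TIMEOUT_S = 10.0  # assumed grace period before a safe stop

          def __init__(self) -> None:
              self.mode = Mode.AUTONOMOUS
              self.waiting_s = 0.0

          def step(self, confidence: float, driver_has_wheel: bool, dt: float) -> Mode:
              if self.mode is Mode.AUTONOMOUS and confidence < self.CONFIDENCE_FLOOR:
                  self.mode = Mode.HANDOVER_REQUESTED  # alert the driver
              elif self.mode is Mode.HANDOVER_REQUESTED:
                  if driver_has_wheel:
                      self.mode = Mode.AUTONOMOUS  # human is back in the loop
                  else:
                      self.waiting_s += dt
                      if self.waiting_s >= self.HANDOVER_TIMEOUT_S:
                          self.mode = Mode.SAFE_STOP
              return self.mode
      ```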

      Are self-driving cars perfect? No. Will they ever be? No. But neither are human drivers. And over the long run they will be safer than human drivers. I’m reminded of when seatbelts, anti-lock brakes, and airbags were introduced. There were people complaining that they would kill people, and every once in a while they still do. But when you look at the big picture, they make things better.

    • aardman

      I will tell you what will happen if ever self-driving cars are adopted en masse: the roadways will be regimented and regulated to the point that eventually human-driven cars will be banned, whether by fiat or de facto. What will happen is that governments, in the name of ‘safety’, will impose regulations designed to make roads more predictable and uniform, and thus make them simpler computational problems for driverless cars. First we will get restricted lanes for driverless cars only. Then whole areas (i.e. destinations) will be banned for human-driven cars. And in the end, human-driven cars will be squeezed off the roads.

      You think this hasn’t happened before? Try riding your horse-drawn buggy down the road. (Yes the economics of horse-drawn vehicles is horrible, but even if you were willing to pay that price, you still won’t be able to give your sweetheart a moonlit ride on your surrey with a fringe on top.)

  6. davidneale

    Good points, PSMacintosh. I bought what seems to me a semi-autonomous car a year ago. Amongst other things, it has the ability to recognise and follow speed limits. Rubbish! It is, indeed, fitted with the technology to do this, but it simply does not work, because the quality of the signage where I live in Spain is itself rubbish. I live in an “urbanisation” where the limit is said to be 40 km/h (there used to be one sign with this limit indicated, but it has disappeared). The car therefore has no limit when I drive within the urbanisation (in such cases, it is supposed to find the limit through its GPS, but this seems not to know about it—and, yes, it is up to date). I drive out of the urb and the car picks up the 40 km/h limit at the roundabout at the exit and maintains it correctly when I leave the roundabout, until the cancellation sign. The car then thinks I can drive at 90 km/h (the limit is, in fact, 60, but is not specifically shown). A bit further along the road is a spur road to a petrol station. It is signed with a 30 km/h board: the car then thinks it must drive at 30 along the main road. Something then makes the car think it can again drive at 90 for a while (the limit is, in fact, 60), until a 70 km/h sign appears and is recognised.
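
    (The failure chain above maps onto a very simple resolution rule: trust a recognised sign first, fall back to the map, otherwise admit you don’t know. A sketch, with all names invented; this is not the car’s actual firmware:)

    ```python
    from typing import Optional

    def resolve_limit(recognised_sign_kmh: Optional[int],
                      map_limit_kmh: Optional[int]) -> Optional[int]:
        if recognised_sign_kmh is not None:
            # Failure mode from the comment: a 30 km/h sign on a petrol
            # station spur gets attributed to the main road. Vision alone
            # cannot tell which carriageway a sign governs.
            return recognised_sign_kmh
        if map_limit_kmh is not None:
            return map_limit_kmh  # only as good as the map data itself
        return None  # the urbanisation case: no sign seen, no usable map entry

    print(resolve_limit(None, None))  # None: the car drives with no limit set
    ```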

    As you can guess, I have turned off the automatic speed-limit recognition feature.

    I don’t say this is the car’s fault. The signage is inadequate, clearly. It is also, and this is a vital point, not standardised: different countries in Europe and, presumably, the rest of the world, have different ways of indicating limits, of cancelling them, of positioning the signs and so on.

    If something as fundamental as this can’t be achieved, then a self-driving car is a pipe-dream.

  7. Brutno

    I have a car with pushbutton start and “intelligent keys.” The number of lines of code required to control that process is slight compared to what autonomous driving requires. Yet at one point the car refused to start for 24 hours. Often it thinks the key is in the car when it is clearly outside, and it unlocks the door after I specifically locked it. The number of times the car randomly beeps at me for no apparent reason (related to this system) is frustratingly high.

    They can’t even make this fool-proof.

    I have a friend who has lane-deviation sensors on his car. In a sloppy Midwest winter they cease to function due to road salt build-up.

    I think we have a long way to go before autonomous driving is perfected.

    • geoduck

      Out of curiosity, what kind of car? We have the same thing with our 2014 Toyota Prius and it’s been flawless. In fact, it has warned us several times when we accidentally left the key in the car as we got out.

      • Brutno

        2011 Nissan Altima. It’s annoying but livable, as it is a one-time thing. Very nice car aside from that. Toyota and Subaru cross-share some technology. You should hear the parking lot at our local Target. Lots of identical beeps.

  8. Old UNIX Guy

    Let me begin by stating that I once hit a pedestrian with my car … a number of years ago a young woman jogger decided to try to cross 5 lanes of traffic nowhere near a crosswalk. I was in bright sunshine; she was coming from the shady side of the street. I never saw her until the split second before impact.

    Fortunately, not only was she not killed, her only injury was a badly cut elbow from where it hit (and busted) my windshield. And because I was not at fault, I was not even ticketed – for which I am also thankful.

    I recount that incident mainly to point out that an autonomous vehicle … one with radar sweeping in all directions all the time … would almost certainly have been able to detect that young woman much sooner than I did. I realize that an autonomous vehicle is not a requirement for having such a system, but an autonomous vehicle would certainly have been able to react much more quickly than I could have, even if I had been warned by some sort of collision detection and avoidance system.

    I hope that those of us who understand this can be persuasive with those who do not … it would be a true shame if a technology that could save thousands of lives each year were derailed by an accident that was not its fault.

    Old UNIX Guy

    • aardman

      Watch the video recorded by the car itself. The woman was walking her bike across the road. It certainly doesn’t look like she suddenly darted into the path of the car like a toddler chasing a ball. It does look like the car’s sensors did not detect her, as you claim a proper radar system would have, and so, naturally, the car did not take any evasive action. And the backup human driver was not watching the road until it was too late.

      I’d say the videos are very damaging to the cause of self-driving cars. They highlight two weaknesses:

      1. The sensors and the onboard computer are not smart enough to handle all situations.
      2. Counting on the driver to be a reliable backup when the onboard system fails is a very bad idea, because the computer will be turning over control to an inattentive driver at the moment of maximum chaos and uncertainty.

      I will ask my standard rhetorical question again. We can’t even deploy fully self-piloting passenger planes, ships, or trains, for which the computational problems are much simpler, and yet we’re going whole hog on self-driving cars?

  9. pjs_boston

    FYI: a preliminary statement by the Tempe police department seems to indicate that the accident was unavoidable and that the Uber vehicle was not “at fault.”

    It is important to understand whether or not the accident was due to a vehicle error. If the technology is not yet reliable enough for testing on public roads, changes must be made. However, if the vehicle made no error, and its response was simply limited by the laws of physics, that is important to know as well.

    Sadly, there will absolutely be situations in which even a perfectly driven autonomous vehicle will be involved in loss of life. Worse still, there may be situations in which an autonomous vehicle must choose between who lives and who dies.

    That said, it seems clear that autonomous vehicles have the potential to be significantly safer than vehicles driven by humans. It would be a shame if we, as a society, forgo a lifesaving technology due to unwarranted public fear.

  10. John Kheit

    I’ll make a bet right now: this in no way slows down the development and release of autonomous cars. The notion that you will have zero accidents during development is beyond myopic, and all you see here is an over-BS press trying to hype BS up for clicks. Clickwhoring at its finest.

    The ultimate question will be how many accidents per mile these vehicles achieve. If the rate is lower than humans’, and it will be if it isn’t already (Tesla’s cars are at half the accident rate per million miles of human drivers), then it’s coming. This article only highlights the BS press and meaningless tweet storms of the day. Not only will this pass, but you will be fighting to keep your ‘privilege’ to actually drive yourself, because this pussy nanny ‘no one should ever be harmed ever’ myopia will flip-flop, telling you how reckless it is for humans to drive when machines have accident rates that are far lower.

    In a sense, I agree with both geoduck and John in that the real trend here is the total pussification of our culture. If this culture had been around during the ’60s and earlier, we’d all be sucking our thumbs, wrapped in the only invention that seems to matter today: bubble wrap.

    TL;DR: this is the noise of the moment. It too shall pass, because ultimately self-driving will be safer, and this culture breaks for meekness.

    • aardman

      I suspect what is actually happening is that on-road testing is far cheaper than building a realistic closed test course, and so Uber, Waymo, and everyone else is going down that route BECAUSE GOVERNMENT IS LETTING THEM.

      What should have happened is that the government, with input from Uber et al., formulated standards for thorough closed-course testing and set milestones that prototype vehicles must reach before they are allowed to incrementally move into real-world testing. As things are, it’s like the Wild West out there.

      Whatever happened to getting ethical, informed consent when you conduct a potentially harmful trial? Did anyone seek the consent of the people using the sidewalks and roadways where a driverless test car might be passing through?

  11. skipaq

    I’ve expressed my opposition to autonomous-car testing on public streets in other threads. I do expect, and hope, that these vehicles will become the norm. It just seems foolish that this new technology is being tested on public roadways. I have spent years as a professional commercial driver. There is a multitude of things on the ground that drivers encounter that must be worked out in these driverless programs. Killing people on public roads isn’t the correct way to do this.

    Did they test the first rockets with humans onboard, or launch them in areas that put the public at risk? Do they test a new model of commercial jet by pulling up to a terminal and loading the public onto it? Hey, we just developed a new cancer drug; let’s test it by selling it at Walmart. Look, we are not talking about driverless cars that have been fully developed and tested. They are being developed and tested on public roadways. To me that is insane. And they are scaling this up. Do you really want more of these on the road?

    Someone mentioned a deer jumping in front of a car. The problem that presents to any driver can have many variables. What is the condition of the road? How many lanes of traffic does the road have? How much open space, if any, is around your car? Are there pedestrians or cyclists on the road? A human driver can process all that and more in order to react properly. Simply being able to hit the brakes faster than a human isn’t always the right reaction.

  12. Roger Wilson

    Contrary to headlines and fear-mongers, we are, generally speaking, safer than at any time in history, and, in vehicular terms, safer per billion miles than at any time in automotive history, despite the available distractions of modern life. Cars are better than at any time in terms of driver and passenger safety and built-in automotive safety features. Mary should stop worrying.

    Of course, it had to be Uber, the most careless and inept corporation on the planet, that had the fatality with a human aboard who was supposed to be paying attention to prevent such occurrences.

    The autonomy will still happen, but this guarantees that insurers will want more layers of control built-in before they’ll insure such vehicles.

  13. Ned

    With so many comments I was reluctant to post, but here it is: cruise control. How many people use cruise control in town? Why can’t the initial autonomous cars be used only on freeways and interstates, not in town? Surely there’s a way to turn the capability on and off. And keep steering wheels and controls for manual override; planes with autopilot still have control sticks and pedals.

    When does convenience turn into a denial of accountability and responsibility? As we’re often reminded, driving is a privilege, not a right. How many people envision themselves watching movies while the car does the driving (or maybe doing a line or having a beer)? What happened to buses and trains? Remember “Leave The Driving To Us”? If you’re going to be out on the street or highway, be on the street or highway. Call it a Zen approach; honor your driving, pay attention to what’s going on around you. Enjoy the experience of confidently handling a large piece of machinery. Why buy a car at all? Pay someone else to drive you in their car: Uber? Lyft? Taxis? Just because we have the technology and capability to do something doesn’t always make it the thing to do.

    • Lee Dronick

      How many people use cruise control in town?

      I have never used it on service streets, but occasionally I do when traffic is light on the crosstown freeways. Even on an interstate out in the boondocks, with cruise control enabled you often come up on one big rig doing 60 and another passing it doing 60.00000001. So then you need to kick it out of cruise control and just wait it out until you can enable it again 10 miles later.

    • aardman

      Saw the vids. The driver was inattentive, the car didn’t detect the woman, and the woman was walking. Not good for advocates of self-driving cars.

      • aardman

        Make that: the backup human driver was inattentive.

  14. skipaq

    Saw the video this morning. I faced that exact type of situation several times while living in Kissimmee, FL for several years. After the first near miss I adjusted my driving by slowing down and expecting every shadowy figure to be a pedestrian or cyclist crossing into the lane ahead.

    How did that driverless system fail to “see” that pedestrian? Did it have active radar/infrared or some night-vision capability? If not, then all such vehicles should be banned from public roadways. If so, then whatever system they are using has not been developed and tested enough to satisfy readiness for public roads. Do development and testing away from the public.

    • aardman

      The poor woman has the notable distinction of being the first third-party fatality resulting from beta testing. I predict lawyers will soon be vigorously involved.

  15. wab95

    Hello John:

    There are several related but distinct issues that you’ve raised in your article, of which three have particular relevance to, and will influence the public acceptance of, driverless cars, or for that matter, any emerging technology.

    TL;DR: Uber did the right thing.

    First, multiple studies have shown that people are poor at risk assessment. The factors that contribute to our species’ poverty at risk assessment are nicely summarised and referenced here: https://www.psychologytoday.com/us/blog/the-inertia-trap/201303/why-are-people-bad-evaluating-risks . Among them: we are poor at baseline assessments (what are the background rates of even commonplace events?); we struggle to translate quantitative assessments into meaningful probabilities; and, very importantly, terms like ‘probable’, ‘possible’ and ‘unlikely’ are, for scientists and researchers, technical terms with standardised usage that differs from popular use, and are therefore frequently misinterpreted by the public. However, chief among the factors that compromise our risk assessment are novelty and impact; the more novel the exposure or tech, and the greater its potential impact, however infrequent, the greater the perception of its risk. Needless to say, this can be a function of information exposure and can change over time (the more it’s talked about, the greater the perception of its risk, the converse being equally true).

    Second, the topic you raised of low-probability, high-impact risk comes under the general topic of ‘impact’, which you’ve addressed at some length. Let’s leave aside the question of ‘probability’ for the moment and address ‘impact’. As stated above, the greater the potential impact of a thing, the greater will be the popular assessment of not only the magnitude of its risk but often its probability and acceptability as well. This is especially true of exposures or technologies about which we know very little and therefore have greater uncertainty. Impact is therefore a question of perception, and is affected by factors such as how little we know of a thing, how much uncertainty we have about it, and how many people it can affect at once. Importantly, the Goldstein article you cite in turn illustratively cites Ken Rogoff’s dissertation on the BP oil spill in the Gulf of Mexico. Elsewhere, other illustrations include bridge collapses, stock market crashes, and nuclear reactor meltdowns; in other words, events characterised by mass casualties, not those associated with single or even multiple mortalities at a time. Our disaster movies are about whole populations being wiped out. Movies about isolated casualties are confined to the murder, thriller, or horror genre. This is how we popularly assess impact: mass casualty. It is also how we calculate acceptability, or the lack thereof.

    Third and finally, those who are trained to publicly communicate and intervene in times of crisis know that, apart from lack of knowledge and great uncertainty about a threat, other factors that augment our perception of dread include its inevitability (something that we cannot avoid), our lack of control when facing it (there being apparently little we can do to protect ourselves or our loved ones from its impact; think incoming nuke or asteroid), and apparent indifference to our plight by those with authority to intervene on our behalf (studies show that among the most terrifying nightmares are those in which you perceive a threat to which others are oblivious or which they even seem to invite).

    What might these three issues suggest for driverless tech? First, most of us will have a poor assessment of the real risk, up or down, but will lack the introspection to perceive just how poor our risk assessment is. Nothing new here; this is pretty much how we roll. Second, unless driverless tech can be weaponised to cause mass casualties, we are likely to see it as low impact (isolated casualties only) and down-regulate our concerns over true risk, rather as we do with human driver casualties (human driving is in fact one of our higher risks). Unless and until driverless tech is weaponised, for example hacked by a hostile actor and used against a community, we are less likely to feel personally threatened or vulnerable. Third, to the extent that we learn more about how it works, have control over its adoption (i.e., we’re not being compelled to go driverless without our consent), have some capacity for human override or intervention to protect ourselves and our loved ones if things go awry (i.e., take control of the car if/when we feel threatened), and feel that manufacturers, regulators, and lawmakers are responsive to our concerns, then, taken together, we are less likely to feel threatened by it and more likely to support its development and eventually adopt it when we are ready. And, like most new tech, mass adoption will come at that inflection point at which a critical mass, however defined, adopts the tech.

    In sum, Uber did the right thing in suspending testing on public roads pending a review.
