What Would Happen if a Future Apple Autonomous Car Made a Very Bad Decision?

| Particle Debris

All the technical signs point to a future with autonomous (self-driving) cars. Just about every major car company is working on the technology, including, we suspect, Apple. But what would happen, hypothetically, if one of these cars made a serious error in software judgment that injured someone? The legal and ethical issues are enormous and worth exploring in depth.

This week, Tim Bajarin, the president of Creative Strategies, Inc., opens the discussion with a very thoughtful essay, "Autonomous Cars and Their Ethical Conundrum."

Mr. Bajarin starts off by noting:

...we are years away from getting autonomous cars on the road and getting the right kind of government regulations passed to make this possible. But the technology is getting close enough to create these types of vehicles and, in theory, they could be ready for the streets within the next three to five years.

It's good that we have these early signs now, because there are many ethical and legal issues to be worked out. An excellent example is the hypothetical case he poses.

Let’s say that I am in a self-driving car. It has full control, and the brakes go out. We are about to enter an intersection where a school bus has almost finished turning left, a kid on his bike is in the crosswalk just in front of the car, and an elderly woman is about to enter the crosswalk on the right. How does this car deal with this conundrum? Does it think, “if I swerve to the left, I take out a school bus with 30 kids on it. If I go straight, I take out a kid on a bike. If I swerve right, I hit the little old lady”? Is it thinking, “the bus has many lives on it, and the kid on the bike is young and has a long life ahead, but the elderly woman has lived a long life, so I will take her out as the least onerous solution”?

Ponder that for a minute.

Not considered here, though I think it should be, is the equally important question of the car's responsibility to its owner. By that I mean the car in the above scenario might well solve the problem by assessing which course of action poses the least danger of injury to its own passengers, not to the people outside.

It gets more interesting. Questions that are likely to come up include:

  • Will all autonomous cars be required to have 360-degree cameras and even more extensive data recorders to log the last minute before an accident?
  • Will the computer logs be definitive in a court of law even if reliable eyewitnesses contradict the logs? (See, for example, "How A Little Lab In West Virginia Caught Volkswagen's Big Cheat.")
  • In the event of a software ethical failure, who bears the liability of the car's actions? The manufacturer or the owner? That will make for an interesting EULA.
  • How is the list of ethical priorities constructed when it comes to deciding who (or what) will end up being damaged in certain kinds of emergencies? Is the life of a beloved dog in your car more valuable than another person's Ferrari?
  • How will the insurance industry weigh in on (or influence) the car industry's attempts to generate ethical rules that could be financially unfavorable insurance-wise?
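Questions like these imply some explicit ranking of harms. Purely as a hypothetical sketch, one way an emergency maneuver choice could be framed is as minimising a weighted expected harm. Every name, number, and weight below is invented for illustration; no manufacturer has published such a policy:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    harm_to_passengers: float  # estimated injury cost to occupants, 0-1
    harm_to_others: float      # estimated injury cost to people outside, 0-1

def choose_maneuver(options, passenger_weight=1.0, other_weight=1.0):
    """Pick the maneuver with the lowest weighted expected harm.

    The weights ARE the ethical policy: passenger_weight > other_weight
    favors the car's own occupants, and vice versa.
    """
    def cost(m):
        return (passenger_weight * m.harm_to_passengers
                + other_weight * m.harm_to_others)
    return min(options, key=cost)

options = [
    Maneuver("swerve_left", harm_to_passengers=0.9, harm_to_others=0.8),
    Maneuver("straight",    harm_to_passengers=0.2, harm_to_others=0.7),
    Maneuver("hard_stop",   harm_to_passengers=0.3, harm_to_others=0.1),
]
print(choose_maneuver(options).name)  # hard_stop under equal weights
```

Note that every contentious question above hides inside the harm estimates and the weights: who chooses them, who audits them, and who is liable for the result.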

No one knows all the answers today, but Mr. Bajarin recounts how, at a Mercedes-Benz North American R&D event, the argument was made that we need philosophers to help sort out these issues.

We've already seen how the emergence of smartphone technology has challenged the tech industry when it comes to compromises between privacy, security, and profitability. Is that current balance (or imbalance) a template for the future? Or is it a sobering warning sign that we need to do much better when people's lives are at stake?

It's going to be an interesting ride.

Next page: the tech news debris for the week of October 19. Apple's next target industry could well be....


Comments

JustCause

As I stated before, autonomous cars should have three modes:
1) Autopilot: the user need do nothing except enter a destination/route and can be anywhere in the car; liability for accidents rests with the car manufacturer.
2) Accident avoidance: the user is in control, but the car will not allow an accident to happen; liability for accidents rests with the car manufacturer.
3) Manual: the user is in complete control and can do damage; liability for accidents rests with the driver.
This allows those who want to drive to drive, but it also has the advantage of lower accident rates and even lets people sleep, read, and/or text.
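JustCause's three modes and their proposed liability assignments can be summarised in a small sketch. The names and the mode-to-liability mapping are just a restatement of the comment, not any real manufacturer's scheme:

```python
from enum import Enum

class DriveMode(Enum):
    AUTOPILOT = "autopilot"            # car drives; occupants are free
    ACCIDENT_AVOIDANCE = "avoidance"   # human drives; car intervenes
    MANUAL = "manual"                  # human fully in control

# Hypothetical liability mapping implied by the three-mode proposal:
# the manufacturer owns any mode in which its software can override
# the human; the driver owns full manual control.
LIABLE_PARTY = {
    DriveMode.AUTOPILOT: "manufacturer",
    DriveMode.ACCIDENT_AVOIDANCE: "manufacturer",
    DriveMode.MANUAL: "driver",
}

def liable_party(mode: DriveMode) -> str:
    return LIABLE_PARTY[mode]

print(liable_party(DriveMode.MANUAL))  # driver
```

The interesting legal wrinkle is mode 2: the human is nominally driving, yet liability still falls on the manufacturer the moment its software can countermand the driver.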

Jamie

Yes, the lack of deeper considerations seems to be a disease these days.

Unfortunately, I don’t think Silicon Valley by and large has that much foresight or that degree of conscience. I suspect they will try to push autonomy through prematurely (in the name of profit, not innovation). We can only hope legislators at both the local and federal levels have the sense and cojones to tell them, ‘No, not yet.’ I'm pretty tired of the adherents trumpeting hypothetical safety numbers all the time; it is beyond dogmatic and ridiculous, and it remains to be seen how useful the tech even is in the real world. Again I say, autopilots on planes and trains are very different beasts with vastly different applications. Talk about FUD.

geoduck

The future of self-driving cars is not as far away as some think, and the ethical and liability questions are here…now.
http://www.bbc.com/news/technology-34603364
Videos posted online appear to show Tesla’s new self-drive mode causing some cars to drive dangerously.

JustCause

@geoduck Totally agree; Tesla (and some makers before it) have been adding autonomous features. Now, some will say it’s stupid, but I love that Tesla is being loud about it. Autonomous driving is here, and with cars being in real-world situations with real-world people (you may encounter idiots), it will force the software and laws to catch up. Personally, I love it!!

d'monder

At least the autonomous car would THINK.  How many human drivers would just freeze, with the brake pedal mashed to the floor, as they run into whatever’s in front of them?

As for the scenario: if you’re driving anything weighing less than 10 tons, you’re going to lose to the bus.

geoduck

“Why Apple Stores Won’t Work”
Hysterical. It’s also a good reality check for those, and I include myself on occasion, who think Apple is blowing it by trying product X, Y, or Z.

Blu-ray
I can honestly tell you the only time I’ve bought a Blu-ray disc was when I picked one up by mistake, and it was exchanged for a DVD. Come to think of it, I haven’t bought a DVD in at least five years. I stream all I want. UHD Blu-ray? Pass. I think you, and the industry, will be surprised by: a) how many people opt for HDTVs over 4K TVs given the cost difference and shortage of 4K material, and b) how many of the 4K sets that are sold spend 99% of their time showing HD or even SD content. 4K will come, but I’m not even putting it on my personal radar for another five years.

Lee Dronick

At least the autonomous car would THINK.  How many human drivers would just freeze, with the brake pedal mashed to the floor, as they run into whatever’s in front of them?

I saw that this morning. I didn’t see the initial rear-end collision, but I nearly saw a secondary one. I was stopped at a traffic light and saw the commotion at the next intersection as emergency flashers went on and people got out of their cars. I moved slowly up to the scene, but the car in the adjacent lane moved fast and almost rear-ended the cars from the first collision. It was like she wasn’t looking any further than her car’s hood.

geoduck

d’monder
Or worse yet when they stand on the gas THINKING it’s the brake. At least an autonomous car is unlikely to do that.

redptc

A bigger conundrum would be the scenario of a politician, a lawyer and a banker across the intersection.  Would the car back up to hit them all?

CudaBoy

I brought this exact point up here months ago. If all the choices are bad, who’s to blame vis-a-vis the “decision”? So far the manufacturers (Mercedes, Tesla, Google) are taking full responsibility. The school bus example above is a bad one because brakes go out on non-autonomous cars all the time, and courts rule on that all the time. And the Model S would take you out of autonomous mode if there were any question, as it does now at exits and roundabouts. Just like ABS and traction control, two expensive former options that are now mandated, the same will happen to level-one autonomy; it just makes sense. If “all” the cars on the roads had this relatively simple first level of sensors, they could talk to each other and travel as “one” multi-celled convoy with nary a hiccup. It will be humans who screw it up: hey, film me for YouTube as I jump in the back seat of my Model S while it drives itself. (Of course, sensors in the seat will kill auto mode as soon as the car senses you trying to leave your seat.)

Dean Lewis

In the case of an autopilot failure in an airplane, the pilot is at fault. FAA regulations require the pilot to be alert at all times and take over in the case of an emergency. They have co-pilots so they can sleep in shifts if necessary, and they are not to be reading and texting and such, either.

Despite being different beasts, I suspect it will be the same for autopilot/self-driving/autonomous automobiles. Driver error will be cited whenever the driver doesn’t correct for something the autopilot system fouled up or didn’t do right. And, in fact, pilot error is also the cause of many accidents, when pilots overcorrect or outright override the autopilot thinking it is wrong when it isn’t. There will have to be autopilot lessons during driver’s education classes.

Now, that doesn’t preclude the airline from suing the autopilot manufacturer afterward to get them to shoulder some of the costs involved, or to try to shift blame entirely, but I’m not sure whether such a case has ever been filed or, if it has, won. I’ll have to research that, or ask my father, who worked in the Washington, DC FAA office crafting many of the regulations.

jeffff

Is there any consumer interest in autonomous cars? Can you envision yourself hurtling along an urban highway at 70 mph and completely trusting the car? I think the first fatalities will doom these things faster than Google Glass.

prl99

I guess none of you ever watched Will Smith in I, Robot. Everybody was riding in autonomous cars so the traffic was very smooth and nobody had an accident, well (or should I say Will) until Will Smith’s character took his car off auto-pilot to get away from those robots who were programmed to ignore the three laws (sounds like the NSA to me) at which point there was carnage all over the place.

People have to take a couple of chill pills when talking about the future, because nobody knows exactly what it will look like, how things might or might not operate, or whether the highways and major roads of tomorrow will even allow individually driven vehicles. We might end up with auto-trains, where your vehicle is added to a “train” of connected vehicles/cars that are driven as one under the control of a master train network, just like many trains are handled today (and they still have collisions). In many ways, this will be the only realistic way for the quantity of vehicles we have on the roads today to continue to be allowed. We have gridlock in every major city, and running autonomous cars isn’t really going to help. The only way to reduce gridlock is either to get rid of half the cars or to put them together into some kind of moving train, where you don’t have to worry about the idiot in front of you continuously tapping the brakes.

geoduck

jeffff
Autonomous cars are here now.
How many models have a self parallel parking feature?
http://motorburn.com/2012/06/top-five-self-parking-cars/
That’s autonomous.
How many have lane departure warning systems, or features that slow the car for you if you get too close, or other automatic safety systems that take control from the driver if they screw up?
https://www.usaa.com/inet/wc/advice-auto-safetyfeatures
These automated systems are available now.

I heard an automotive engineer on the radio a few months back talking about this. His opinion was that we won’t see fully autonomous cars all at once. First come things like anti-lock brakes. Then the features I mentioned above. Then will come highway-only systems that are mostly autonomous but require driver input periodically. They will grow until, in ten or twenty years, most cars are autonomous. By that time, yes, consumers will expect a high degree of automation, much as they expect seat belts and anti-lock brakes now. It’s those of us driving today who will have the most trouble. In ten or twenty years, sixteen-year-olds will be learning in autonomous or semi-autonomous cars.

An added aspect is this: I really believe most people driving today don’t want to be. They want to be somewhere but they’d rather be eating breakfast, or reading, or talking on the phone, or doing any number of things I’ve seen them doing while they’re commuting to work. For a large segment of the population autonomous cars will come as a welcome relief.

ibuck

Who wants autonomous cars?  Elderly people with slightly above-average means will want them as soon as their declining abilities preclude them from safely operating a regular vehicle. They will still want to get around independently, and autonomous cars are a very desirable way to do so. They probably won’t even have to own their own car: they will join a group and car share. And the Boomer generation, which caused massive changes in many segments of society, are quickly approaching that age. Demand will be high.

jeffff

geoduck
I agree that over 20 years autonomous cars could be phased in gradually. But parallel parking doesn’t have much in common with merging onto a busy interstate hands-free. I also agree that most people may not want to be driving, but that doesn’t mean they will put their faith in a truly autonomous car. I will be very interested to see how these technologies develop, and how many of them are accepted over time.

ibuck
I’m sure it’s unintentional, but your comment could be used as a classroom example of ageism. On the other hand, you might be right!

pattii

The beauty of the autonomous car would be that you could do other things while it drove you, like having a cab driver. If, as Dean Lewis says above, it will be like airline pilots, where you have to be constantly alert as if you were driving, well, I might as well drive myself. I think Subaru now has a nice (non-standard) feature where, if the driver is closing in on an object, the car slows itself to a stop. But the driver has to begin forward movement again. They have a backup thing too, I think.

wab95

John:

The ethical conundrum that Bajarin posits is indeed one with which autonomous vehicular engineering (AVE) will need to grapple; however, the problem is much broader still. It goes beyond the decision-making algorithms used to protect passengers and society, to infrastructural development that maximally exploits the efficiency gains afforded by autonomous transport (think about networking cars on the same street or grid to minimise wasted space and gridlock whilst avoiding collisions), as well as, and this will be huge and will undoubtedly require new legislation in some countries, liability and questions around indemnification.

Once you transfer responsibility for piloting from the driver to the machine, particularly should the autopilot be comprised of different components made by different manufacturers (e.g. the human-autopilot interface vs the autopilot-drivetrain interface), then which party is liable for any event that transpires when the human is not in control, and which parties, if any, are held harmless? And what of the other cars in the train? Taking Bajarin’s scenario, if every path forward is blocked, rather than deciding which human will die, including the passengers, it is more plausible that a system would be designed to do a hard stop, whilst minimising injury to the passenger, in a reaction time faster than a human’s. That will only be a safe option, however, provided that the decision is communicated to other autonomous cars, including the on-coming autonomous bus (e.g. the least massive vehicle does the stop, the one with the most momentum is allowed to pass), and a pile-up is avoided, likely through a network into which all vehicles are linked. This in turn only works as designed if all vehicles are autonomous. What if there is a mix of autonomous and human-driven vehicles? What if a human-driven vehicle ploughs into the autonomous vehicle in front because the latter did a hard stop, and this results in injury? Who is liable under those conditions if both types of vehicles are allowed, particularly if the human-driven vehicle was otherwise within the speed limit and the driver was not distracted, merely endowed with a slower reaction time? Would human-driven cars even be allowed on such a grid, or would they be confined to alternate roads?
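The coordination rule wab95 describes (the least massive/slowest vehicle stops, the one with the most momentum passes) could be sketched roughly as follows. The classes, names, and numbers are invented for illustration; a real vehicle-to-vehicle protocol would be far more involved:

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: str
    mass_kg: float
    speed_mps: float

    @property
    def momentum(self) -> float:
        # Linear momentum p = m * v, used as the tie-breaker for
        # which vehicle can stop most cheaply.
        return self.mass_kg * self.speed_mps

def assign_actions(a: Vehicle, b: Vehicle) -> dict:
    """For two vehicles on a collision course, the one carrying less
    momentum performs the hard stop; the other is allowed to pass."""
    stopper, passer = sorted((a, b), key=lambda v: v.momentum)
    return {stopper.vid: "hard_stop", passer.vid: "pass"}

car = Vehicle("car", mass_kg=1500, speed_mps=15)   # 22,500 kg*m/s
bus = Vehicle("bus", mass_kg=12000, speed_mps=10)  # 120,000 kg*m/s
print(assign_actions(car, bus))  # {'car': 'hard_stop', 'bus': 'pass'}
```

As the comment notes, even this trivial rule only works if both vehicles are on the network and agree on who stops, which is exactly why a mixed fleet of autonomous and human-driven cars is the hard case.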

I like @JustCause’s three modes, although I personally see them as an early-to-mid stage of evolution. Eventually, these vehicles are likely to transition to fully autonomous operation as humans become less and less experienced with driving (skill will undoubtedly decrease over time).

In short, I think we have a long road ahead (no pun intended) before we see consumer rollout of autonomous vehicles, and believe it will likely be staged rather than a fully finalised deployment.

The Cliff Edwards piece on why Apple retail stores won’t work is interesting more for its illustration of bias in data interpretation and reasoning than for its entertainment value. Intelligent people with access to reasonably good data can still come to the wrong conclusions based on their a priori biases, as Stephen Jay Gould demonstrated in ‘The Mismeasure of Man’. Objective inquiry and dispassionate interpretation, not mere scepticism, are what is required for unbiased interpretation and for more accurately anticipating the future.

Ewan Spence’s piece on iOS 9’s adoption rate over Google’s Lollipop (and other Android versions) only further underscores the value proposition of the whole-widget model, and how key it is to providing a dominant position to those who wield it effectively. There is little doubt that Google understand this now, as they’ve gone silent on the virtues of the open licensing model of late, and their other competitors, notably MS, have gone whole widget in order to compete in the PC, and perhaps mobile, space against Apple. CNN recently reported Ballmer’s latest rant against Amazon (http://money.cnn.com/2015/10/23/technology/ballmer-amazon-apple/index.html), his grudging acknowledgement of Apple’s success, and his take that only Apple and MS have the capacity to compete on both hardware and software (for MS, the former is still an open question), making this now a two-horse race. Google might beg to differ, but to do so, they too will have to go whole widget if they are to retain the requisite control over their OS and its security.

What all three of these themes have in common is the importance of Apple (or any tech company) having a long vision, and the strategic genius and composition to play the long game, because ultimately, truly transformative technology adoption requires cultural transformation, and that is definitely a long-term phenomenon.

mrboba1

The hypothetical that Bajarin posed is silly. Are cars really going to be programmed to calculate the ages of potential crash victims? After the age of 60, I might never leave the house! This is ridiculous. It’s illegal to discriminate based on age in hiring; why would he imagine it would be OK in this case? And also, “take out a bus”? What is he driving, a Humvee?

While this topic can’t be ignored, a more realistic example should be employed to not just be summarily dismissed.

John Martellaro

mrboba1: As Mr. Bajarin implied, sometimes the essence of the ethical argument isn’t fully explored until one posits an over-the-top scenario.

vpndev

Lee wrote: “It was like she wasn’t looking further than her car’s hood.”

I see this all the time. Here’s a great example of it - around where I live there’s an interstate with express and local lanes and transition pieces between them in certain places. At an express-to-local transition, it’s quite routine to see a car in the left local lane move left into that transition piece. Of course, anyone looking down the road would see that this “third lane” is only a hundred yards, or less, and stay where they were. But no - they’re clearly not looking that far ahead.
