All the technical signs point to a future with autonomous (self-driving) cars. Just about every major car company is working on that technology, including, we suspect, Apple. But what would happen, hypothetically, if one of these cars were to make a bad mistake in software judgment that injures someone? The legal and ethical issues are enormous and worthy of deep exploration.
This week, Tim Bajarin, the President of Creative Strategies, Inc., starts off the discussion with a very thoughtful essay, "Autonomous Cars and Their Ethical Conundrum."
Mr. Bajarin begins by noting:
...we are years away from getting autonomous cars on the road and getting the right kind of government regulations passed to make this possible. But the technology is getting close enough to create these types of vehicles and, in theory, they could be ready for the streets within the next three to five years.
It's good that we have these early signs now because there are many ethical and legal issues to be worked out. An excellent example of this is the hypothetical case that he posed.
Let’s say that I am in a self-driving car. It has full control, and the brakes go out. We are about to enter an intersection where a school bus has almost finished turning left, a kid on his bike is in the crosswalk just in front of the car, and an elderly woman is about to enter the crosswalk on the right. How does this car deal with this conundrum? Does it think, “If I swerve to the left, I take out a school bus with 30 kids on it. If I go straight, I take out a kid on a bike. If I swerve right, I hit the little old lady”? Is it thinking, “The bus has many lives on it, and the kid on the bike is young and has a long life ahead, but the elderly woman has lived a long life, so I will take her out as the least onerous solution”?
Ponder that for a minute.
Also not considered here, though I think it should be, is the more important question of the car's responsibility to its owner. By that I mean the car in the above scenario might well solve the problem by assessing which course of action would pose the least danger of injury to its own passengers, not the people outside.
It gets more interesting. Questions that are likely to come up include:
- Will all autonomous cars be required to have 360-degree cameras and even more extensive data recorders to log the last minute before an accident?
- Will the computer logs be definitive in a court of law even if reliable eyewitnesses contradict the logs? (See, for example, "How A Little Lab In West Virginia Caught Volkswagen's Big Cheat.")
- In the event of a software ethical failure, who bears the liability of the car's actions? The manufacturer or the owner? That will make for an interesting EULA.
- How is the list of ethical priorities constructed when it comes to deciding who (or what) will end up being damaged in certain kinds of emergencies? Is the life of a beloved dog in your car more valuable than another person's Ferrari?
- How will the insurance industry weigh in on (or influence) the car industry's attempts to generate ethical rules that could prove financially unfavorable to insurers?
No one knows all the answers today, but Mr. Bajarin recounts how, at a Mercedes-Benz North American R&D event, the argument was made that we need philosophers to help sort out these issues.
We've already seen how the emergence of smartphone technology has challenged the tech industry when it comes to compromises between privacy, security, and profitability. Is that current balance (or imbalance) a template for the future? Or is it a sobering warning sign that we need to do much better when people's lives are at stake?
It's going to be an interesting ride.
Next page: the tech news debris for the week of October 19. Apple's next target industry could well be....