Autonomous Vehicles Might Develop Superior Moral Judgment

Autonomous car concept

Much has now been written about the moral guidance of autonomous cars and trucks. I explored some of the issues myself last year in “What Would Happen if a Future Apple Autonomous Car Made a Very Bad Decision?” There I addressed, to first order, some of the liability and political issues.


Going forward, specifying how an autonomous vehicle should make life-and-death decisions presents a difficult problem: those judgments must be quantified and then instantiated in software logic. It would be nice for society to have more time to ponder, but the pace of technology leaves us precious little time for that. In what follows, I'll look at the research and then toy with a variation of Asimov's Three Laws of Robotics in order to pose the Big Question.

First off, I should emphasize that this is a subject being scrutinized more and more. Research is under way: thousands of automotive engineers at the major car companies, plus Google, Tesla, and likely Apple, are working on the problem. And it's fascinating.

Other entities are looking into the morality issues surrounding autonomous vehicles as well. I would think that state and federal governments, the insurance companies, and the Society of Automotive Engineers (SAE) are pondering the issues. The fanciful notions I present later are simply meant to get more discussion going.

Here are some current candidate mechanisms for developing autonomous vehicle morality.

1. Prescribed Morality

There are several ways to attack this problem. One, perhaps the most traditional and obvious, is to create a set of man-made rules. The problem is that different people have different points of view, and there is reason to believe that no practical resolution will fit everyone.

In 1942, Isaac Asimov, the distinguished scientist (Ph.D. in biochemistry) and science fiction writer, ginned up the Three Laws of Robotics. My perception has been that his original motivation was simply to support his sci-fi writing. Later, the real prospect of actual robots in this century carried his laws from the realm of fiction into serious conversation among 21st century robot developers.

Alas, that was three-quarters of a century ago, and we have new tools at our disposal as well as new ways of handling technical issues. What seemed simple and decisive then may not apply today. Also, deference to the wisdom of a few experts is not as popular nowadays, and yet there may still be valuable insights to harvest.

2. Crowd Sourcing

One of the ways modern researchers are attacking this issue is to conduct situational experiments with a large number of people in order to understand, in a statistically valid way, the most prevalent moral values of human drivers. That can produce interesting results. For example, this article at Scientific American, “Driverless Cars Will Face Moral Dilemmas,” cites the following result.

Most of the 1,928 research participants in the Science report indicated that they believed vehicles should be programmed to crash into something rather than run over pedestrians, even if that meant killing the vehicle’s passengers.

Yet many of the same study participants balked at the idea of buying such a vehicle, preferring to ride in a driverless car that prioritizes their own safety above that of pedestrians.

Humans don't always crystallize their values in consistent ways. More on that later.

Along those lines, an article at the MIT Technology Review is entitled “Why Self-Driving Cars Must Be Programmed to Kill.” That seems callous, but it's an emerging fact. The question is, “Who, and under what circumstances?”
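
To make the crowd-sourcing idea a bit more concrete, here is a minimal sketch, in Python, of how survey responses to a dilemma might be tallied into prevalence statistics. The scenario names, choices, and numbers are entirely hypothetical; they are not data from the Science study, just an illustration of the bookkeeping involved.

    # Illustrative only: hypothetical survey responses, not real study data.
    from collections import Counter

    responses = [
        {"scenario": "swerve_vs_pedestrians", "choice": "protect_pedestrians"},
        {"scenario": "swerve_vs_pedestrians", "choice": "protect_pedestrians"},
        {"scenario": "swerve_vs_pedestrians", "choice": "protect_passengers"},
    ]

    def prevalence(responses, scenario):
        """Return the fraction of participants favoring each choice for a scenario."""
        counts = Counter(r["choice"] for r in responses if r["scenario"] == scenario)
        total = sum(counts.values())
        return {choice: n / total for choice, n in counts.items()}

    # In this toy sample, roughly two-thirds favor protecting pedestrians.
    print(prevalence(responses, "swerve_vs_pedestrians"))

The real research question, of course, is what to do when, as noted above, the answers people give in the abstract don't match the cars they say they would buy.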

3. Morality Engines

A third, related avenue is to build AI agents that, by exposure to a great number of human test-case scenarios, learn an ethical foundation and then exercise it. See: “When should driverless cars kill their own passengers?” To quote:

The Machine is comprised of a series of ethical dilemmas, most of them making me feel uneasy when answering them. For example, what do you do when a driverless car without passengers either has to drive into a stoplight-flaunting dog or an abiding criminal?

One of the virtues (or drawbacks, depending on one's point of view) of a morality engine is that the decisions an autonomous vehicle makes can be traced back only to software. That helps to absolve a car maker's employees from direct liability when it comes to life-and-death decisions made by a machine. That certainly seems to be an emerging trend in technology. The benefit is obvious: if a morality engine makes the right decision, by human standards, 99,995 times out of 100,000, the case for extreme damages due to systematic failure causing death is weak. Technology and society can move forward.
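
For flavor, here is a small sketch of the learning approach a morality engine might take: rather than hand-writing rules, an agent is fit to many human-labeled scenarios and then asked to decide a new one. Everything here is my own illustrative assumption (the features, the labels, and the use of scikit-learn's decision tree); no car maker's actual engine works from four rows of toy data.

    # Illustrative only: a toy "morality engine" learned from labeled scenarios.
    from sklearn.tree import DecisionTreeClassifier

    # Each scenario: [passengers_at_risk, pedestrians_at_risk, speed_mph]
    scenarios = [
        [1, 3, 40],
        [2, 1, 25],
        [1, 0, 60],
        [4, 2, 35],
    ]
    # Human-judged preferred action for each scenario: 0 = brake in lane, 1 = swerve
    labels = [1, 0, 0, 0]

    engine = DecisionTreeClassifier().fit(scenarios, labels)

    # Ask the trained model what to do in a new, unseen scenario.
    print(engine.predict([[1, 2, 30]]))

The point of the sketch is the traceability issue raised above: the decision comes out of a model fit to data, not out of a line of code any one engineer wrote.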

Very likely, the evolution of autonomous car morality will be a mixture of methods #2 and #3 above. That combination both reflects the values of the people polled and instantiates their morality into an AI agent that does the best possible job. Even so, I am going to go out on a limb, for the sake of exploration and discussion, and propose a starting point for method #1. I'm doing this because the best way to start a discussion is to build a straw man.

Page 2: Test case – Martellaro's Four Laws for Autonomous Vehicles

9 thoughts on “Autonomous Vehicles Might Develop Superior Moral Judgment”

  • @gnasher729: I agree with your line of thinking. Much is said of the “morality” of the machines, and much worry that it will be less than that of a human driver. We all see (nearly every day) human drivers operate their current machines in arguably immoral ways: speeding, sudden lane changes, ad nauseam. The vast majority of us really try to do the best we can when driving, but as humans, when faced with the time-constrained choice of “who do I hit?”, we are unlikely to make a substantially better choice than a machine following your set of rules or some other.

    I quibble with the order of your set of rules, however. (This is where we get into societal norms.) I would move rule 1 to position 2 or even 3. The passengers are responsible for choosing to go by way of car (autonomous or otherwise). I think they bear the primary responsibility for initiating the journey and should bear the greater risk.

  • I keep seeing articles positing moral choices for automated vehicles, but they miss the point. We would no more program a vehicle to make such choices than we would teach drivers to. We don’t tell teenagers in Driver’s Ed to hit one pedestrian in order to avoid a group of three. We teach teenagers to drive responsibly in order to avoid such dilemmas.

    Automated systems need to be (and in practice are) designed to operate within safe boundaries or not to operate at all. The whole challenge is in determining the conditions under which the system can function safely. Tesla’s mistake was in designing a system that requires human supervision without designing into the system a mechanism to ensure that the driver was always paying attention. This is why we have regulations that set minimum performance requirements for safety.

  • Morally speaking, the only moral death is the one you choose for yourself that saves the lives of those with you.

    Machines aren’t alive and thus cannot make a moral choice of death. All machine choices that end in death are by definition immoral.

  • You're looking at the problem from the completely wrong angle. There is no matter of morality here. There are some people _talking_ very loudly about moral problems, and for some reason they get a lot of attention, but looking at the problem this way is wrong.

    The first law and only law of a self driving car is: Avoid hitting things. That’s it.

    To implement the first law of driving, the self driving car isn’t allowed to drink and drive, and it’s not allowed to drive blindly, and it isn’t allowed to go into tricky situations at high speed. You may say that hitting a lamp post and hitting a child are different things, but as long as you avoid hitting them you are fine and no distinction is needed.

    Now collisions will not be totally avoidable, because the self driving car is surrounded by idiots. Every driver knows the feeling 🙂 By driving carefully, the self driving car will avoid situations where it causes damage. It may react better in situations where a human driver just thinks “oh shit oh shit oh shit what am I doing now” – I’ve had situations like that, and I suppose a self driving car would know at all times what’s left, right and behind and would often be able to take evasive action that I couldn’t. Now accepting that collisions are not totally avoidable, we can add a rule: “When hitting things, minimise the damage. Damage is calculated from speed, whether the thing looks human, and whether the thing looks big and hard”.

    If done right, a self driving car will never get into a situation where “morality” would come into play. If it does: The highway code (or whatever it's called in the USA) doesn't mention morality. The rules that I learned: Don't hit people. Don't put people into danger. Watch out for elderly people and children. That's all humans need to know, no morals needed. And that's all a self driving car needs.

  • I don't know how many times this needs to be said, but I guess it bears repeating because engineers seem to either keep forgetting or fail to grasp the concept altogether: morality is derivative of human compassion, and it is not a faculty of logic. AI will never be in possession of 'morality', just its facsimile. *Anything* that it does will be the result of programming by us, at least at the root level; autonomy is a misnomer (something the Silicon Valley hype train excels at). So the answer to this question is a resounding no. By its very nature (which is math-based and therefore logic-based) software will never be capable of 'morality', only what we tell it morality is (and yes, human beings are capable of spontaneous compassion in spite of their 'programming'; software will never be capable of spontaneity, period). Look no further than collateral damage from drone strikes for evidence, and those are by and large still mostly human controlled. It's science fiction, folks, and it would be the epitome of stupid to think otherwise.

  • Consumer Watchdog is petitioning the NHTSA to slow down on its fast-track stance of letting autonomous vehicle development continue without much in the way of regulatory oversight.

    http://www.consumerwatchdog.org/resources/ltrrosekind072816.pdf

    At the end of the day, consumer rights organizations and the insurance industry will slow autonomous vehicle development down to a 10 to 15 year introductory schedule instead of the 5 to 10 year time plan that you stated in an earlier podcast. I can't see the trucking industry getting on board with this, due to union issues as well as insurance and safety issues, for another 10 to 15 years at least. Not sure how you came up with a 5 year time plan. Way too ambitious of a time plan.
