Asimov’s 4th Law of Robotics

It seems Isaac Asimov didn’t envision needing a law to govern robots in these sorts of life-and-death situations where it isn’t the life of the robot versus the life of a human in debate, but it’s a choice between the lives of multiple humans!




I’m sure that many of you nerds have, like me, read the book “I, Robot,” the seminal work by Isaac Asimov (actually it is part of a series, but I only read the one) that explores the moral and ethical challenges posed by a world dominated by robots.

But I read that book like 50 years ago, so the movie “I, Robot” with Will Smith is actually more relevant to me today. The movie does a nice job of discussing the ethical and moral challenges associated with a society where robots play such a dominant and crucial role in everyday life. Both the book and the movie revolve around the “Three Laws of Robotics,” which are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

It’s like the “3 Commandments” of being a robot: adhere to these three laws and everything will be just fine. Unfortunately, that turned out not to be true (if 10 commandments cannot effectively govern humans, how do we expect just 3 to govern robots?).

There is a scene in the movie where Detective Spooner (played by Will Smith) explains to Doctor Calvin (who is responsible for giving robots human-like behaviors) why he distrusts and hates robots. He describes an incident where his police car crashed into another car and both cars were thrown into a cold, deep river – certain death for all occupants. However, a robot jumped into the water and chose to save Detective Spooner over a 10-year-old girl (Sarah) who was in the other car. Here is the dialogue between Detective Spooner and Doctor Calvin about the robot’s decision to save Detective Spooner instead of the girl:

 

Doctor Calvin: “The robot’s brain is a difference engine[1]. It’s reading vital signs, and it must have calculated that…”

Spooner: “It did…I was the logical choice to save. It calculated that I had a 45% chance of survival. Sarah had only an 11% chance. She was somebody’s baby. 11% is more than enough. A human being would have known that.”
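
To make the robot’s reasoning concrete, here is a minimal sketch (my own illustration in Python, not code from the film or from Asimov) of the rule the robot apparently applied: estimate each person’s chance of survival and save whoever scores highest. The names and data structure are hypothetical; the point is that nothing in this rule leaves room for “she was somebody’s baby.”

```python
# A hypothetical sketch of the cold, purely probabilistic rescue logic
# Spooner describes: rank candidates by estimated survival probability
# and save the one at the top of the list.

def choose_rescue(candidates):
    """Return the candidate with the highest estimated survival probability."""
    return max(candidates, key=lambda c: c["survival_probability"])

# The numbers quoted in the dialogue above.
candidates = [
    {"name": "Spooner", "survival_probability": 0.45},
    {"name": "Sarah",   "survival_probability": 0.11},
]

print(choose_rescue(candidates)["name"])  # -> Spooner
```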

 

I had a recent conversation via LinkedIn (see, not all social media conversations are full of fake news) with Fabio Ciucci, the Founder and CEO of Anfy srl in Lucca, Tuscany, Italy, about artificial intelligence and questions of ethics. Fabio challenged me with the following scenario:

 

“Suppose in the world of autonomous cars, two kids suddenly run in front of an autonomous car with a single passenger, and the autonomous car (robot) is forced into a life-and-death decision or choice as to who to kill and who to spare (kids versus driver).”

 

What decision does the autonomous (robot) car make? It seems Isaac Asimov didn’t envision needing a law to govern robots in these sorts of life-and-death situations where it isn’t the life of the robot versus the life of a human in debate, but it’s a choice between the lives of multiple humans!

A number of surveys have been conducted to understand what to do in a situation where the autonomous car has to make a life-and-death decision between saving the driver and sparing pedestrians. From the article “Will your driverless car be willing to kill you to save the lives of others?” we get the following:

 

“In one survey, 76% of people agreed that a driverless car should sacrifice its passenger rather than plow into and kill 10 pedestrians. They agreed, too, that it was moral for autonomous vehicles to be programmed in this way: it minimized deaths the cars caused. And the view held even when people were asked to imagine themselves or a family member travelling in the car.”

 

While 76% is certainly not an overwhelming majority, there does seem to be a basis for creating a 4th Law of Robotics to govern these sorts of situations. But hold on: while in theory 76% favored saving the pedestrians over the driver, the sentiment changes when it involves YOU!

 

“When people were asked whether they would buy a car controlled by such a moral algorithm, their enthusiasm cooled. Those surveyed said they would much rather purchase a car programmed to protect themselves instead of pedestrians. In other words, driverless cars that occasionally sacrificed their drivers for the greater good were a fine idea, but only for other people.”

 

It seems that Mercedes has already made a decision about who to kill and who to spare. According to the article “Why Mercedes’ Decision To Let Its Self-Driving Cars Kill Pedestrians Is Probably The Right Thing To Do”, Mercedes is programming its cars to save the driver and kill the pedestrians or other driver in these no-time-to-hesitate, life-and-death decisions. Riddle me this, Batman: will how the autonomous car is “programmed” to react in these life-or-death situations impact your decision to buy a particular brand of autonomous car?

Another study, published in the journal “Science” (“The social dilemma of autonomous vehicles”), highlighted the ethical dilemmas self-driving car manufacturers face, and what people believed would be the correct course of action: kill or be killed. About 2,000 people were polled, and the majority believed that autonomous cars should always make the decision that causes the fewest fatalities. On the other hand, most people also said they would only buy one if it meant their safety was a priority.
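
To see why this is such a bind for manufacturers, here is a hedged sketch (purely my own illustration, with made-up outcome estimates; it is not how Mercedes or anyone else actually programs a vehicle) of the two competing policies. A utilitarian rule that minimizes total expected fatalities and a self-protective rule that minimizes occupant fatalities give opposite answers in exactly the kids-versus-driver scenario Fabio posed.

```python
# A hypothetical contrast of the two policies the surveys describe.
# The outcome estimates, policy names, and scenario are all assumptions
# for illustration, not anything from a real autonomous-driving stack.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    expected_pedestrian_deaths: float
    expected_occupant_deaths: float

    @property
    def total_deaths(self) -> float:
        return self.expected_pedestrian_deaths + self.expected_occupant_deaths

def utilitarian_policy(outcomes):
    """Pick the action that minimizes total expected fatalities."""
    return min(outcomes, key=lambda o: o.total_deaths)

def self_protective_policy(outcomes):
    """Pick the action that minimizes expected occupant fatalities,
    breaking ties by total fatalities."""
    return min(outcomes, key=lambda o: (o.expected_occupant_deaths, o.total_deaths))

# Fabio's scenario: swerve (likely killing the single occupant) or
# stay on course (likely killing the two kids).
outcomes = [
    Outcome("swerve into barrier", expected_pedestrian_deaths=0.0, expected_occupant_deaths=0.9),
    Outcome("stay on course",      expected_pedestrian_deaths=1.8, expected_occupant_deaths=0.1),
]

print(utilitarian_policy(outcomes).action)      # -> swerve into barrier
print(self_protective_policy(outcomes).action)  # -> stay on course
```

Whichever of these rules a manufacturer ships is, in effect, its answer to the 4th Law question.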

 
4th Law of Robotics

Historically, the human/machine relationship was a master/slave relationship; we told the machine what to do and it did it. But today with artificial intelligence and machine learning, machines are becoming our equals in a growing number of tasks.

I understand that overall, autonomous vehicles are going to save lives... many lives. But there will be situations where these machines are forced to make life-and-death decisions about which humans to save and which humans to kill. Where is the human empathy that understands that every situation is different? Human empathy must be engaged to make these types of morally challenging life-and-death decisions. I’m not sure that even a 4th Law of Robotics is going to suffice.

[1] A difference engine is an automatic mechanical calculator designed to tabulate polynomial functions. The name derives from the method of divided differences, a way to interpolate or tabulate functions by using a small set of polynomial coefficients.
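
For the curious, here is a tiny sketch of that idea (my own illustration, not Babbage’s actual design): once the first few forward differences of a polynomial are loaded, every subsequent value can be produced by addition alone.

```python
# Tabulating a polynomial the way a difference engine does: after the
# initial forward differences are set up, each new value comes from
# additions only -- no multiplication required.

def tabulate(initial_differences, steps):
    """initial_differences: [f(x0), delta f(x0), delta^2 f(x0), ...] for a
    polynomial sampled at unit steps. Yields f(x0), f(x0+1), ..."""
    diffs = list(initial_differences)
    for _ in range(steps):
        yield diffs[0]
        # Propagate the additions up the difference table.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]

# Example: f(x) = x**2 at x = 0, 1, 2, ...
# f(0) = 0, delta f(0) = 1, delta^2 f(0) = 2 (constant for a quadratic).
print(list(tabulate([0, 1, 2], 6)))  # -> [0, 1, 4, 9, 16, 25]
```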

Original. Reposted with permission.

Editor: See also the KDnuggets Poll The Surprising Ethics of Humans and Self-Driving Cars, where respondents were much more willing to ride in a self-driving car that might kill them to save several pedestrians than in a car that would save them but kill pedestrians.
