What would Isaac Asimov say?


Robots are entering our everyday lives. An obvious example is the self-driving car that is fast becoming a reality on our streets and highways. There is little doubt, both from analysis and actual tests, that robotic drivers are superior to humans in almost every way. They think faster, are never distracted, have immeasurably better situation awareness and – most importantly – know the current capabilities of the machine they are controlling far better than even a professional driver. But wait, did you notice that tiny proviso, “almost”? The problem involves situations in which an ethical decision must be made.

Let me cite an example. Suppose the robotic car is driving on a two-lane road where there is traffic in the opposite direction and pedestrians walking on the right-hand side. Suddenly, a young child leaps onto the road to retrieve her ball. The robot instantly knows that there is not enough time to brake and avoid the child. But swerving to the right endangers the pedestrians and possibly the car’s occupants, and swerving to the left is even more dangerous to everyone involved. What should be done? More specifically, how should the designers of the robot’s software prepare for such a situation?

The general principles for the ethical robot are popularly attributed to science fiction author Isaac Asimov; they appeared in his 1942 short story “Runaround”, although earlier provenance has been alleged. He quoted them from the fictional “Handbook of Robotics, 56th Edition, 2058 A.D.” as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

So, how might they be adjusted to account for the above dilemma? One idea is to add a corollary to the First Law: when an intrinsic conflict arises in applying it, employ the principle of “The Greater Good” (TGG). In other words, seek to minimize harm. That is certainly what an ethical human might do, isn’t it? However, as usual, the devil is in the details. Everyone evaluates situations both from a personal and a social perspective. Someone might quite plausibly give greater weight to the survival of himself or his family than to that of random strangers. And in the case of the child foolishly chasing her ball, if my family were in the car, my strong reaction would be, “Run over the stupid little bugger!”
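To make the idea concrete, here is a minimal sketch, in Python, of what a “greater good” decision rule might look like. Everything in it, the action names, the harm estimates, the equal weighting of every person, is an invented illustration, not anything drawn from a real vehicle controller.

```python
# Hypothetical sketch of a "Greater Good" (TGG) decision rule.
# Actions and harm estimates are invented purely for illustration.

def choose_action(options):
    """Pick the option whose total expected harm is lowest."""
    return min(options, key=lambda o: sum(o["expected_harm"].values()))

options = [
    {"name": "brake straight", "expected_harm": {"child": 0.9, "pedestrians": 0.0, "occupants": 0.1}},
    {"name": "swerve right",   "expected_harm": {"child": 0.1, "pedestrians": 0.6, "occupants": 0.2}},
    {"name": "swerve left",    "expected_harm": {"child": 0.1, "pedestrians": 0.3, "occupants": 0.8}},
]

print(choose_action(options)["name"])  # -> "swerve right" under these made-up numbers
```

Notice that the rule weighs every person equally; a buyer who values the car’s occupants above strangers is, in effect, demanding different coefficients, which is precisely the tension described below.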

If robotic cars are to become a significant reality, both the manufacturers and government regulators must understand and account for the public’s reaction to this fundamental change. This has been the subject of a fascinating peer-reviewed academic study that you can read here. The results are probably unsurprising. Participants in the study approved of autonomous vehicles that follow the TGG principle, even ones that sacrifice their own passengers to save others. However, they overwhelmingly would not buy such a vehicle for themselves. Consequently, as the study notes, adopting this principle will likely meet strong consumer resistance and thereby actually increase casualties by postponing the adoption of this safer technology.

I am unable to resist commenting on the methodology of this study, which employed the Amazon Mechanical Turk. When I read that, my interest perked up. I did know about the original Mechanical Turk, which was a fake chess-playing machine constructed in the late 18th century. It appeared to perform wondrous feats until it was exposed as a fraud, with a concealed all-too-human chess master inside. The Amazon tool is a marketplace for work requiring human intelligence on simple tasks that a computer cannot effectively perform. Thus a sufficiently large group of humans can, in effect, simulate a computer, but with greater efficiency. You can read about it here, or perhaps even volunteer to earn some spare change. It is surprising how many such tasks exist, and scientific and academic studies such as this one are prime candidates.

Another resolution of the dilemma is to employ a fail-safe option. If a robot encounters a situation in which any course of action, including doing nothing, threatens harm to humans, it simply exclaims, “Oh shit! I give up. Human, you have the con.” For this to work, a full-time, attentive human monitor is needed, and much of the promise of this automation will not be achieved. Practically speaking, in the situation I cited originally, it is unlikely that the human would even be able to respond effectively. More likely a random result would occur, perhaps one even worse than if the human had been in charge all along. And some personal-injury lawyers will get rich.
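As a sketch only, the fail-safe option could be expressed as a harm threshold below which the robot acts and above which it hands over control. The threshold value and the function names here are hypothetical.

```python
# Hypothetical fail-safe: if every option exceeds a harm threshold,
# the robot relinquishes control to the human monitor.

HARM_THRESHOLD = 0.2  # invented value, purely for illustration

def total_harm(option):
    return sum(option["expected_harm"].values())

def decide(options, alert_human):
    safe = [o for o in options if total_harm(o) <= HARM_THRESHOLD]
    if not safe:
        alert_human("I give up. Human, you have the con.")
        return None  # the human monitor must now act, ready or not
    return min(safe, key=total_harm)

decide(
    [{"name": "swerve right", "expected_harm": {"pedestrians": 0.6, "occupants": 0.2}}],
    alert_human=print,
)
```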

In subtle ways, accommodating these real-world constraints limits the whole concept of robot autonomy. The way robots develop and learn to deal with situations their designers cannot completely anticipate is to employ artificial neural networks: decision-making systems that learn from examples rather than only obeying predefined rules. The problem involves a technical aspect called node weighting. I won’t try to explain it in depth here, but it is comparable to the way humans learn to apply ethical and practical principles. Either we permit this technology, thereby gaining the full advantage of robotics, or we constrain or discard it, essentially eliminating much of its usefulness.
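For readers curious what node weighting amounts to, here is a minimal sketch of a single artificial node whose weights are nudged by labelled examples, in the spirit of the classic perceptron rule. The training data is invented, and real driving systems use networks vastly larger than one node.

```python
# A single artificial "node": its output is a weighted sum of inputs,
# and its weights are adjusted from labelled examples (perceptron-style).

def predict(weights, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > 0 else 0

def train(examples, learning_rate=0.1, epochs=50):
    weights = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        for inputs, label in examples:
            error = label - predict(weights, inputs)
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
    return weights

# Toy examples: (features, desired decision), entirely invented.
examples = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 1.0], 1), ([0.0, 0.0], 0)]
weights = train(examples)
print(weights, [predict(weights, x) for x, _ in examples])
```

The point of the analogy is that the “rules” a trained network follows live in those weights, which no designer wrote down explicitly; that is exactly why constraining such a system from the outside is so awkward.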
