The Trolley Problem

Google’s self-driving cars have prompted questions about the Trolley Problem: how will an AI decide whether to save its passengers or to avoid property damage? Or will it freeze, unable to choose between the alternatives?

This question in philosophy is known as the Trolley Problem. From Wikipedia:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the correct choice?

Well, how do humans deal with these problems? The whole point of the thought experiment is to explore the ethical dilemma and figure out how a person ought to decide. It’s a hard problem precisely because different people give different answers. In other words, ethical decision-making is already unsettled for humans, so why would we demand a definitive answer from an AI?

In Canadian Pacific Ltd. v. Gill, [1973] S.C.R. 654, at p. 665, Mr. Justice Spence, for the court, said:
It is trite law that faced with a sudden emergency for the creation of which the driver is not responsible he cannot be held to a standard of conduct which one sitting in the calmness of a courtroom later might determine was the best course.

If your negligent driving (or bad weather) causes a death, you’ll be tried in court and may be convicted of manslaughter. In the case of property damage, you or your insurer will have to pay for it. The question of “how does one make that decision” is not even worth asking. We all know that in the split second before an accident, a human has hardly any time for rational deliberation; reflexes predominate. The only real question for the law is: who is liable?

Liability for a self-driving car may fall on one of the following:

  • The driver. As with cruise control, the driver remains responsible for the car’s behaviour and the AI is merely “helping”. This is the most likely first step in autonomous driving, and it is already happening with parking assist, collision detection, and other safety features.
  • The company that created the AI, whether the AI was sold with the car or added later. Will a company take on that level of liability? It will if it’s sure of its product. The first company to take that risk would see rapid adoption, since drivers would gladly offload both the risk and the cost of insurance. If the AI driver is especially competent, accidents would fall as adoption grew, creating further pressure to keep human interference out of efficient, safe AI driving.
  • The car’s AI. In a scenario where the law can effectively punish an AI as if it were a person, the AI is considered the driver. This scenario is unlikely until a form of punishment can be devised that would be meaningful to an AI, which may be impossible.

This issue was raised on a recent episode of This Week in Law, which included some interesting discussion of the Trolley Problem and related issues. Sam Abuelsamid gave a ridiculous example of two cars approaching from opposite directions, with different numbers of occupants, on a one-lane road “where one car will have to run off the road”. Where in the world is this road, and why wouldn’t the cars simply slow to a pace where they could pass each other without colliding or killing anyone? Again, I reverse the question: how would human drivers deal with this problem, and where would the liability lie? Answer those questions, and you’ll answer the question of how a robot driver should address it.

It was interesting to hear that sophisticated sensors such as radar and lidar have problems in bad weather, too. My immediate reaction, though, is that humans have great trouble driving in bad weather as well. Drivers have only their eyes and brain to make judgements in a car. If your argument is that self-driving cars are dangerous because the radar or camera can get caked with snow, consider whether you can drive when your windshield is caked with snow. That’s why we have windshield wipers and defrosters. This is not an insurmountable problem… it’s not even a long-term problem.

In fact, a few minutes later they discuss V2V, or vehicle-to-vehicle communication, in which a car can broadcast to all other vehicles nearby where it is, what dangers it detects, and how traffic should be coordinated. If that technology is widely implemented, driverless cars will be able to extend their perception in ways humans never could: near-instant collaboration and verification.
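As an aside for the technically curious, here is a minimal sketch in Python of what that kind of sharing could look like. Everything in it is hypothetical and purely illustrative; real V2V systems use standardized messages (for example, the SAE J2735 Basic Safety Message over DSRC or cellular V2X) rather than anything like this. The idea it illustrates is simply that each car broadcasts its position and the hazards it sees, and nearby cars fold those reports into their own picture of the road before their own sensors could have detected anything.

```python
# Hypothetical sketch of V2V hazard sharing -- not a real protocol.
from dataclasses import dataclass, field
import math

@dataclass
class V2VMessage:
    vehicle_id: str
    lat: float              # position, degrees
    lon: float
    speed_mps: float        # metres per second
    heading_deg: float      # 0 = north
    hazards: list = field(default_factory=list)  # e.g. ["black ice", "stalled vehicle"]

def nearby_hazards(own_lat, own_lon, messages, radius_m=500.0):
    """Collect hazards reported by vehicles within radius_m of our own position."""
    found = []
    for msg in messages:
        # Crude flat-earth distance approximation; good enough for a short-range sketch.
        dx = (msg.lon - own_lon) * 111_320 * math.cos(math.radians(own_lat))
        dy = (msg.lat - own_lat) * 111_320
        if math.hypot(dx, dy) <= radius_m:
            found.extend(msg.hazards)
    return found

# A car a block ahead reports black ice before our own camera or lidar can see it.
reports = [V2VMessage("car-42", 49.2830, -123.1210, 12.0, 90.0, ["black ice"])]
print(nearby_hazards(49.2827, -123.1207, reports))  # -> ['black ice']
```

The point is not the code but the capability: one car’s detection becomes every nearby car’s knowledge almost instantly, which no group of human drivers could match.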

My bigger complaint: I always get frustrated by people like Ben Snitkoff, who raised an argument I’ll paraphrase as “I like driving. I don’t want to live in a future where I can’t drive wherever I want.”

Ben, I’m sure some people really liked riding horses into town. You can still ride horses on private property, you can watch professionals race them, and you can take lessons. What you can’t do is ride a horse through a city or down a highway. Ask any modern horse-lover whether they’d like to share the road with cars, or whether they think it’s worth driving out to the countryside to gallop in freedom. That’s where you’ll be in 20 years with your gear-shifting, inefficient human driving. I’m sure you’ll love it. We’re not going to forgo the millions of lives saved and the enormous efficiency gained for your enjoyment.

I’m confident we’ll figure out the liability issue. I don’t believe the sensor problem will cause more than a hiccup in the design process. And I think we’ll soon be taking driverless taxis, just as I already ride the driverless SkyTrain here in Vancouver. Someone probably said human train conductors would always be necessary…