More and more, we’re seeing headlines about the dangers of artificial intelligence from credible sources such as Stephen Hawking, Elon Musk, and Nick Bostrom. At the same time, IBM’s AI “Watson” is being used in medical research and has recently been opened up to software developers across all industries. In another case, an autonomous program bought illegal drugs online, and its programmers took full responsibility, even though they had not programmed it to do so.
Many people see the benefits of a computer system intelligent enough to solve problems of resource allocation, economic instability, and cost-benefit/risk-reward scenarios, but they are concerned about how to ensure that the system makes ethical choices that benefit humanity.
It’s my view that law is intended to codify our ethics into rules that regulate our complex society while permitting the freedoms essential to the continual improvement of the human condition. So I don’t seek to teach an AI to be good. I want to teach an AI to obey the law.
Software’s job is to follow rules. Let’s make sure autonomous systems follow our rules: the law. And along the way, we may discover better ways to design and express our laws with the help of computers.
I’m seeking to answer the following questions:
- Could an AI obey Canadian law? Can the statutes be clearly parsed? Are there contradictions that might confuse a strict interpretation, or loopholes that we are unaware of?
- Is obeying the law in fact a cost-benefit calculation based on risk of being caught and the severity of the penalty? If so, how could we punish an AI?
- Could programming the law into an AI as a set of algorithms permit us to run analyses that find those contradictions, loopholes, and unforeseen consequences through simulation?
- In order to reach the level necessary to interpret law, can a machine be built to score high enough on the LSAT to be admitted to law school? Can it do so under the same conditions as a human test taker (no access to the Internet, strict time limits)?
- Can the complex laws be simplified and better codified, leading to easier human understanding, more efficient coding in the AI, and faster court proceedings?
- If such a system is realized, could new bills proposed for addition to the law be simulated in an AI first, revealing their logical consequences or court cases with undesirable results?
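To make the middle questions above more concrete, here is a minimal sketch of what "programming the law as a set of algorithms" and searching for contradictions might look like. It assumes statutes could be reduced to machine-checkable rules; the two rules, the fact names, and the conflict it finds are all invented for illustration, not real law.

```python
# Toy sketch: encode statutes as machine-checkable rules, then enumerate
# situations to find cases where two rules contradict each other.
# All rules and facts here are hypothetical examples.

from itertools import product

def rule_disclose(situation):
    # Invented rule: a data breach must be disclosed.
    if situation["data_breach"]:
        return ("disclose", "required")
    return None

def rule_privacy(situation):
    # Invented rule: personal data may never be disclosed without consent.
    if situation["personal_data"] and not situation["consent"]:
        return ("disclose", "forbidden")
    return None

RULES = [rule_disclose, rule_privacy]
FACTS = ["data_breach", "personal_data", "consent"]

def find_contradictions():
    """Enumerate every combination of facts and collect situations where
    one rule requires an action that another rule forbids."""
    conflicts = []
    for values in product([False, True], repeat=len(FACTS)):
        situation = dict(zip(FACTS, values))
        verdicts = [v for v in (r(situation) for r in RULES) if v is not None]
        for action, status in verdicts:
            if status == "required" and (action, "forbidden") in verdicts:
                conflicts.append(situation)
                break
    return conflicts

for conflict in find_contradictions():
    print(conflict)
# → {'data_breach': True, 'personal_data': True, 'consent': False}
```

A real system would need vastly richer representations than boolean facts, but even this toy version shows the appeal: the contradiction is found mechanically, by exhaustive simulation, rather than by waiting for a court case to expose it.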
I hope it’s clear that none of my questions revolve around the consciousness of an AI, or around worrying about an AI destroying humanity. My concerns are practical: what today’s machine learning systems can do, and what they’ll be able to do in the near future as algorithm-following, data-processing computers.