The first thing to mention is that this post describes the state of affairs as of February 2017.
MIRI, the Machine Intelligence Research Institute, does foundational mathematical research to ensure that smarter-than-human artificial intelligence has a positive impact. In their recent newsletter, they pointed to two articles that give contrasting opinions.
The World Economic Forum’s 2017 Global Risks Report includes a discussion of AI safety: “given the possibility of an AGI working out how to improve itself into a superintelligence, it may be prudent – or even morally obligatory – to consider potentially feasible scenarios, and how serious or even existential threats may be avoided.”
In contrast, the JASON advisory group reports to the US Department of Defense that “the claimed ‘existential threats’ posed by AI seem at best uninformed,” adding, “In the midst of an AI revolution, there are no present signs of any corresponding revolution in AGI.”
One of the best books on the topic is Superintelligence by Nick Bostrom, who considers many different scenarios in which computer research may lead to a system capable of greater-than-human intelligence. He stakes out a middle ground between the alarmists who foresee every danger of AI and those who assure us that such things happen only in science fiction. Bostrom’s rational approach is to ask: regardless of the timeline, will the eventual development of AI that surpasses human capacity lead to dangers? If so, how much danger, and what ought we to do about it?
My position on artificial intelligence is that we have already met and failed at this problem, and we should acknowledge those failures in order to learn from them. We have not been destroyed by Terminators yet, so let me explain.
Nick Bostrom describes many ways in which a superintelligent computer system might be realized. The two obvious ones that spring to mind are Terminator-style humanoid robots with computer brains and the Skynet that created them: a self-contained software program running over several banks of servers or across a network. Today, we have systems like IBM’s Watson and Google’s DeepMind that solve problems within their own specialities, but very few androids that can even stand well.
However, this is not the end of the list. A collective superintelligence is an amalgamation of less intelligent systems. An army of ants displays a collective superintelligence (compared to any individual ant), also called “emergent intelligence”. Amazon and Netflix could not recommend products to you unless thousands of other shoppers were tracked. Much of machine learning is statistical analysis of the collective behaviour of groups because, regardless of their individual thoughts and opinions, the wisdom is in the trend.
A computer system might be a collective superintelligence because no individual computer is faster or has more data than a human, but networked together they can outperform people. This is how Watson won Jeopardy!.
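To make that wisdom-of-crowds point concrete, here is a toy sketch in Python (my own illustration, with made-up numbers, and not a claim about how Watson actually worked): many individually unreliable voters, pooled by a simple majority vote, become far more reliable than any one of them.

```python
import random

# Toy illustration of collective intelligence: many weak "voters", each only
# slightly better than chance, become highly reliable when their answers are
# pooled by majority vote. (A simplification, not how any real system works.)

def weak_voter(truth, accuracy=0.6):
    """Return the correct answer with probability `accuracy`, otherwise the wrong one."""
    return truth if random.random() < accuracy else not truth

def majority_vote(truth, n_voters=101):
    votes = [weak_voter(truth) for _ in range(n_voters)]
    return sum(votes) > n_voters / 2  # the crowd's answer is whatever most voters said

trials = 10_000
single = sum(weak_voter(True) for _ in range(trials)) / trials
crowd = sum(majority_vote(True) for _ in range(trials)) / trials
print(f"single voter accuracy: {single:.2f}")        # roughly 0.60
print(f"crowd accuracy (101 voters): {crowd:.2f}")   # roughly 0.98
```

The crowd is right about 98% of the time even though every individual voter is right only 60% of the time; that gap is the whole trick of collective intelligence.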
One of the reasons I joined the field of law is that I believe the legal system is one of the successful artificial intelligences we have already created. It is superintelligent because the judgment of a single human could not match the capacity of the whole justice system, and for questions of great import, we ask a panel of experienced judges to weigh in, and even they often disagree.
Like the Chinese Room thought experiment, the legal system shouldn’t actually require human understanding to run it. A well-written legal system should perform like mathematics, but for ethics. Given a set of laws and regulations, a person makes a request of the system along with their evidence. That data is analyzed according to the system’s current rules and with reference to the precedents that define the parameters of jurisprudence. The output is the judgment of the court. We think it is a miscarriage of justice if the judge is making a personal decision, or if a juror is holding a grudge. We want objectivity, and we want the system to operate smoothly, enacting the laws as our elected representatives intended them and using previous cases as our guide for consistency.
Legislators write the code; the regulatory and judicial systems run it.
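To push the analogy, here is a minimal sketch of what “legislators write the code; the courts run it” would look like if the system really were software. The statutes, thresholds and facts below are all invented for illustration, and a real legal system is nowhere near this tidy:

```python
from dataclasses import dataclass

# A deliberately simplified sketch of the "legal system as a program" analogy.
# Every statute, threshold and fact here is invented for illustration.

@dataclass
class Case:
    facts: dict   # the evidence submitted with the request
    claim: str    # what the petitioner is asking for

# "Legislation": rules written in advance, each mapping facts to an outcome.
def noise_bylaw(case: Case):
    if case.claim == "noise_complaint" and case.facts.get("decibels", 0) > 85:
        return "uphold complaint"

def small_claims(case: Case):
    if case.claim == "debt_recovery" and case.facts.get("amount", 0) <= 35_000:
        return "order repayment"

RULES = [noise_bylaw, small_claims]

# "The court": apply the current rules to the evidence; the first rule that
# speaks to the case determines the judgment.
def judge(case: Case) -> str:
    for rule in RULES:
        outcome = rule(case)
        if outcome:
            return outcome
    return "claim dismissed"

print(judge(Case(facts={"decibels": 92}, claim="noise_complaint")))
# -> uphold complaint
```

The point of the sketch is the shape of the system: rules written in advance, evidence submitted as input, and a judgment produced without anyone inside the machine needing to exercise personal preference.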
The judicial branch is part of our democratic government, and it is at the level of the state that I believe we have made some serious errors in our design of superintelligence. The state is an artificial intelligence that provides infrastructure to society but occasionally protects its interests by killing people. If we were to design a computer system and say that, about once a decade, it will lead to the deaths of thousands of people when it believes it is threatened, would anyone allow the project to proceed?
To clarify: I am not advocating anarchism in this argument. On the contrary, I include myself among those who believe that the governmental superintelligence that distributes healthcare, funds bridges, rescues people from natural disasters, and provides all of those other benefits is worth the occasional war, if that war is fought for ethical reasons (see Just War Theory). Most of us agree that our great works are worth defending by force. Most of us are not extreme pacifists, because we also agree that self-defence is ethical: if I am attacked in the street, I am justified in using force even though it is the function of the police to protect me. In this way, we should look at the potential for harm of an artificial intelligence in the context of the bargain we have already made, and happily so.
I say that we have created a superintelligence whose design flaws lead to the deaths of thousands, and we morally accept that. How might we reimagine the benefits of the state while ensuring that people won’t be killed by it? That might be an ethically superior system. That is the design problem facing the AI programmer.
Another experiment we have run in artificial intelligence, and one that contains moral failings, is the corporation. A corporation is a legal entity, an agent, with an internal organization and rules determined by its mandate and the laws of the country in which it operates. We think of it as a way for people to cooperate and work together: the employees benefit through their salaries, the world benefits from the corporation’s products and services, and the shareholders who invested in the idea benefit by drawing profits.
However great the purposes of the corporation may be, the one that is given primacy is the shareholders’ interest in the financial return. That is where we went wrong. Employees are exploited, because those who don’t like how they’re treated can leave and be replaced. The world need not benefit from the corporation’s existence, as long as the corporation can somehow bamboozle, cajole or manipulate people outside the system into giving it their money. Even if the corporation creates something of value to some people, the world may be at a net loss if the corporation’s operation creates pollution, destruction or death overall. To cooperate is great. To create a product that could not exist without a team is great. To be employed and paid is great. However, none of those things are part of the fiduciary duty of the directors of your corporation. The duty is to deliver a profit to the shareholders. In the immortal line of Mother in Alien: “All other priorities are rescinded.”
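As a caricature of that objective function (my own, with invented numbers; no real board deliberates this way), here is what fiduciary duty looks like when you write it down as code. Notice what never appears in the calculation:

```python
# A caricature of the corporate "objective function": the optimization target
# is shareholder profit alone, and every other good or harm matters only
# insofar as it affects that number. All figures are invented for illustration.

def shareholder_return(decision):
    return decision["revenue"] - decision["costs"]

def choose(decisions):
    # The modelled duty of the directors: pick whatever maximizes profit.
    # Externalities are simply not part of the objective.
    return max(decisions, key=shareholder_return)

options = [
    {"name": "clean process", "revenue": 100, "costs": 80, "pollution": 0},
    {"name": "dirty process", "revenue": 100, "costs": 60, "pollution": 50},
]

print(choose(options)["name"])  # -> dirty process
```

Pollution is recorded in the data but plays no role in the decision, because it was never part of the objective; that is the design flaw, and it was written in on purpose.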
How we approach the programming of a computer algorithm to prevent it from destroying humanity should be informed by our mistakes in programming these other artificial intelligences. First, we must acknowledge that a collective intelligence operating within a ruleset (a government or a corporation) is an artificial intelligence. Then, we must examine the outcomes of governments, corporations and any other examples we can identify, and ask whether the rules of such entities could be improved for the good of humanity.
Or, you could take the Canadian legal system and use that to program your robot. In any case, it will take more than Three Laws.