Can your lawyer code? Change is on the way and hybrid legal advisers are a must — Australaw.com.au

In this blog post by Ashley Kelso, a variety of new technologies are surveyed, along with their legal ramifications.

There has been a lot of talk about how ‘blockchain’, ‘machine learning’, and ‘Internet of Things’ (IoT) will ‘disrupt’ everything. How it will automate jobs, challenge existing business models, and overhaul the legal industry. But there haven’t been a lot of tangible examples of how this will occur. This article aims to provide a snapshot […]

via Can your lawyer code? Change is on the way and hybrid legal advisers are a must — Australaw.com.au

  • Automating Contract Administration: Singling out Ridley and Flux (a Google company), intelligent project-management software will not only optimize schedules and payments but also automate the imposition of penalties for non-compliance. See also Kira, eBrevia, Thomson Reuters Contract Express, Beagle.ai, Legal Robot, and more.
  • Automating Copyright Infringement Detection: Veredictum is used as an example of software that automatically detects unauthorized republication of videos, using a blockchain as its registration mechanism. Other examples of automated copyright infringement detection include YouTube’s Content ID system, Vobile, Attributor, Audible Magic, and Gracenote (see this Wired.com article for a full discussion). The idea of using a blockchain to register artwork is discussed in this TechCrunch article, specifically concerning Verisart.
  • Property and Security Registries: Discussing Ethereum smart contracts
  • Self-Executing Contracts (a toy sketch of the idea follows this list)
  • Wills as Smart Contracts
  • Machine Learning as an Assistant
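To make the “self-executing contract” idea concrete, here is a toy Python sketch. All names and figures are hypothetical, and real smart contracts are written in an on-chain language such as Solidity rather than Python; this only illustrates the logic of conditions triggering payment automatically:

```python
# Toy self-executing escrow: funds are locked at signing, and settlement
# happens automatically once a data feed (an "oracle") reports whether
# delivery occurred, including an automatic late-delivery penalty.
# All names and figures are hypothetical.

class EscrowContract:
    def __init__(self, buyer: str, seller: str, amount: float, deadline: int):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount      # funds locked when the contract is signed
        self.deadline = deadline  # e.g. a timestamp or block height
        self.settled = False

    def settle(self, delivered: bool, now: int) -> str:
        if self.settled:
            return "already settled"
        self.settled = True
        if delivered and now <= self.deadline:
            return f"release {self.amount:.2f} to {self.seller}"
        if delivered:  # late: penalty imposed automatically, no negotiation
            penalty = self.amount * 0.10
            return (f"release {self.amount - penalty:.2f} to {self.seller}, "
                    f"refund {penalty:.2f} to {self.buyer}")
        return f"refund {self.amount:.2f} to {self.buyer}"

contract = EscrowContract("Alice", "Bob", amount=1000.0, deadline=100)
print(contract.settle(delivered=True, now=99))  # on time: full payment
```

The point of the exercise: once both parties have signed, no lawyer, court or bank needs to act for the agreed consequences to follow.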

Kelso concludes that “Advisers need to be able to understand how these innovations work so that they can anticipate and deal with the issues that will arise, as established business models are threatened and legislators struggle to keep up.”

I couldn’t agree more.

Robot Lawyers and Judges

You may have heard about the machine learning tool that helped get 160,000 parking tickets dismissed: AI lawyer shoots down 160,000 parking tickets

Legal Perspective

A seminar by Benjamin Alarie, Osler Chair in Business Law at the University of Toronto, was summarized in Machines Regulating Humans: Will Algorithms Become Law? (slaw.ca, 13 February, 2017). Alarie began with the following video, which illustrates the advances in technology and software over a short period of time:

The pace of technological evolution is accelerating, and although the current state of A.I. may seem impressive (computers winning at Jeopardy!, Go, and poker), he argues that it is only comparable to the 1976 versions of racing games in the video.

Alarie’s company, Blue J Legal, has achieved a 90% accuracy rate for fact-based dispute resolution using machine learning to predict outcomes. “These determinations are expensive and take a lot of time for humans to make but machine learning algorithms can consider the entire corpus of case law in minutes.”
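Blue J’s models are proprietary, but the general recipe is ordinary supervised learning: encode the facts of decided cases as features, train on the courts’ actual holdings, and predict new fact patterns. A minimal sketch with invented features and data, using scikit-learn:

```python
# Rough sketch of fact-based outcome prediction: past cases become
# feature vectors, the courts' holdings become labels, and a classifier
# estimates the likely outcome of a new fact pattern.
# Features, cases and labels below are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [sets_own_hours, owns_tools, can_subcontract, single_client]
past_case_facts = [
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]
holdings = [1, 0, 1, 0]  # 1 = independent contractor, 0 = employee

model = LogisticRegression().fit(past_case_facts, holdings)
new_facts = [[1, 1, 0, 1]]
print(model.predict_proba(new_facts))  # probability of each outcome
```

A real system would train on thousands of digested decisions rather than four rows, which is how the “entire corpus of case law in minutes” claim becomes plausible.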

The article generated a lot of questions, so F. Tim Knight posted some discussion points in a recent update.

To me, the most interesting issue raised was the worry about “normative retrenchment”: locking in the status quo, or intractable codification, however you phrase it. In other words, if an algorithm looks through the corpus of case law and renders a judgment in a case, it will likely continue to make the same judgment in similar cases, because each decision (including its own) becomes a precedent. This is the nature of stare decisis (judges should follow precedent), but a judge can always render a decision based on their own analysis, creating new precedent.

So far, judges have been human. When judges on the Supreme Court disagree, it is not because they were exposed to different case law or facts; it is because they disagree about the justice of the outcome and the precedent it will set. New judges are selected from lawyers, each a human raised in a slightly different family and cultural context, and in a society different from their parents’ because of the very advance of technology at the heart of this discussion. An algorithm that looks only at facts and case law cannot weigh some jurisprudence above the rest on the strength of life experience.

Knight answers these types of criticisms with an appeal to the potential sophistication of the software, and to its current accuracy. If it can already deliver 90% accuracy, then either the judges are rendering verdicts like robots, or the algorithm is predicting outcomes with the human factors included. And it will only get better at noticing nuances and flagging the borderline situations that require deeper analysis and human judgment.

When evaluating whether an algorithm can decide important matters such as criminal charges, it is important not to hold it to a standard of perfection, because even human judges make mistakes. Some of those mistakes are very human tendencies: racial bias, gender discrimination, the economically conservative lean of the profession, and corruption by bribery or coercion. Some of these may be subtle effects that tip the scale without leaving sufficient grounds for appeal. An algorithm, though it may make mistakes of its own for lack of a human understanding of motivation and other faculties of judgment, would nevertheless reduce the human-type errors.

Technologist Perspective

In October, futurist Ray Kurzweil‘s site hosted an article entitled, “Will AI replace judges and lawyers?” (kurzweilai.net, 25 October, 2016).

The article mainly reports on a University College London paper published in PeerJ Computer Science, in which a machine learning algorithm predicted the judicial decisions of the European Court of Human Rights (ECtHR) with 79% accuracy.

From 79% in October 2016 to 90% in February 2017 on fact-based decisions looks like a strong upward trajectory, even allowing that the two figures come from different systems and tasks.

Accountability

Artificial intelligence and the law (techcrunch.com, 28 January 2017) contemplated the fact that machines using reinforcement learning are not really “programmed” by their creators, which might break the chain of liability between the coder and the algorithm. If it is impossible for the programmer to foresee problems, then they may not be found negligent in tort law.
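The foreseeability point is easier to see in code. In reinforcement learning, the programmer writes only a reward signal and an update rule; the resulting behaviour is discovered by the system through trial and error. A minimal Q-learning sketch over a toy environment (everything here is invented for illustration):

```python
# Minimal Q-learning: the programmer specifies states, actions, rewards
# and an update rule, but never the policy itself - the behaviour that
# emerges from experience was not written down by anyone, which is the
# crux of the negligence/foreseeability problem.
import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(state, action):
    # Toy world: action 1 moves right, action 0 moves left;
    # reaching the rightmost state pays a reward of 1.
    nxt = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == n_states - 1 else 0.0)

state = 0
for _ in range(1000):
    if random.random() < epsilon:
        action = random.randrange(n_actions)                       # explore
    else:
        action = max(range(n_actions), key=lambda a: Q[state][a])  # exploit
    nxt, reward = step(state, action)
    Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
    state = 0 if nxt == n_states - 1 else nxt  # restart after the goal

# The learned policy (0 = left, 1 = right) exists nowhere in the source:
print([max(range(n_actions), key=lambda a: Q[s][a]) for s in range(n_states)])
```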

The most interesting snippet from this article is buried at the bottom: In the U.K. the House of Commons Science and Technology Committee stated, “While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now.” The document also mentions the need for “accountability” when it comes to deployed AI and the associated consequences.

Technology Assisted Review

TAR, or Technology Assisted Review, is another form of machine learning that is already deployed and lowering lawyers’ fees. An article on Quartz took a look at the possible consequences: Lawyers are being replaced by machines that read (qz.com, 25 January, 2017).

A machine learning algorithm can be custom-trained on a case-by-case basis by a few lawyers reading a small selection of possible evidence to decide its relevance.

Rather than having many lawyers read a million documents, a few review a percentage of the possible evidence, and predictive coding technology uses those answers to guide a computer review of the rest. A handful of lawyers review and assess evidence to train the machines, rather than rooms of trained lawyers eyeballing all the documents.
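Under the hood this is ordinary text classification. A minimal sketch of the predictive-coding loop, with invented documents and labels, using scikit-learn: lawyers label a small seed set, a model learns what “relevant” looks like, and the rest of the corpus is ranked by predicted relevance for prioritized review:

```python
# Sketch of technology-assisted review (predictive coding): a small
# lawyer-labelled seed set trains a text classifier, which then scores
# the unreviewed corpus so relevant documents surface first.
# All documents and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_docs = [
    "email re: termination of the supply agreement",
    "lunch menu for the office party",
    "draft amendment to the indemnity clause",
    "fantasy football league standings",
]
seed_labels = [1, 0, 1, 0]  # 1 = relevant to the dispute

vectorizer = TfidfVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)

unreviewed = [
    "notice of breach under the supply agreement",
    "parking pass renewal form",
]
scores = clf.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```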

An industry is growing around TAR, even among legal temp agencies: Update Legal is now providing A.I. “temps” for electronic discovery.

Then again…

Ars Technica published an exposé about legal software that has contributed to over two dozen rights violations: Lawyers: New court software is so awful it’s getting people wrongly arrested (arstechnica.com, 2 December, 2016).

Apparently, in some parts of the United States, case management software is updated with court proceedings and relied upon by law enforcement officers to coordinate arrests and releases and to issue court summonses. Due to formatting errors, people have been arrested on warrants that had already been recalled, and some have wrongfully spent up to 20 days in prison. Judges’ decisions must be entered by clerks, and there is currently a backlog of 12,000 files that grows by 200-300 per day.

The A.I. Threat

The first thing to mention is that this post describes the current state of affairs as of February 2017.

MIRI, the Machine Intelligence Research Institute, does foundational mathematical research to ensure that smarter-than-human artificial intelligence has a positive impact. In their recent newsletter, they pointed to two articles that give contrasting opinions.

The World Economic Forum’s 2017 Global Risks Report includes a discussion of AI safety: “given the possibility of an AGI working out how to improve itself into a superintelligence, it may be prudent – or even morally obligatory – to consider potentially feasible scenarios, and how serious or even existential threats may be avoided.”

In contrast, the JASON advisory group reports to the US Department of Defense that “the claimed ‘existential threats’ posed by AI seem at best uninformed,” adding, “In the midst of an AI revolution, there are no present signs of any corresponding revolution in AGI.”

One of the best books on the topic is Superintelligence by Nick Bostrom, who confronts many different scenarios in which computer research may lead to a system capable of greater-than-human intelligence. He stakes out a middle ground between the alarmists who foresee every danger of A.I. and those who assure us that such things happen only in science fiction. Bostrom’s rational approach is to ask: regardless of the timeline, will the eventual development of A.I. that surpasses human capacity lead to dangers? If so, how much danger, and what ought we to do about it?

My position on artificial intelligence is that we have already met and failed at this problem, and we should acknowledge those failures in order to learn from them. We have not been destroyed by Terminators yet, so let me explain.

Nick Bostrom describes many forms a superintelligent computer system might take. The two obvious ones that spring to mind are Terminator-style humanoid robots with computer brains, and the Skynet that created them: a self-contained software program running over several banks of servers or across a network. Today, we have systems like IBM’s Watson and Google’s DeepMind that solve problems within their own specialities, but very few androids that can even stand up well.

However, this is not the end of the list. A collective superintelligence is an amalgamation of less intelligent systems. An army of ants displays a collective superintelligence (compared to any individual ant), also called “emergent intelligence”. Amazon and Netflix could not recommend products to you unless thousands of other shoppers were tracked; the example is unpacked in the sketch below. Much of machine learning is statistical analysis of the collective behaviour of groups because, regardless of individual thoughts and opinions, the wisdom is in the trend.
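The recommendation example shows where the intelligence lives: in the aggregate, not in any individual record. A minimal item-based collaborative-filtering sketch with an invented ratings matrix, using numpy:

```python
# Minimal collaborative filtering: item-to-item similarities are computed
# from everyone's collective behaviour, then used to predict one user's
# missing rating. No single row is "intelligent"; the trend is.
import numpy as np

# Rows = users, columns = items, 0 = unrated. Invented data.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Cosine similarity between items, derived from the whole crowd.
norms = np.linalg.norm(ratings, axis=0)
item_sim = (ratings.T @ ratings) / np.outer(norms, norms)

# Predict user 0's rating for item 2 as a similarity-weighted average
# of the items that user did rate.
user = ratings[0]
rated = user > 0
prediction = item_sim[2, rated] @ user[rated] / item_sim[2, rated].sum()
print(f"predicted rating: {prediction:.2f}")  # low - the crowd says skip it
```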

A computer system might be a collective superintelligence because no single machine is faster or holds more data than a human, but networked together they can outperform people. This is how Watson won Jeopardy!.

One of the reasons I joined the field of law is that I believe the legal system is one of the successful artificial intelligences we have created. It is superintelligent because no single human’s judgment could match the throughput of the whole justice system, and for questions of great import, we ask a panel of experienced judges to weigh in, and they often disagree.

Like the Chinese Room thought experiment, the legal system shouldn’t actually require people to run it. A well-written legal system should perform like mathematics, but for ethics. Given a set of laws and regulations, a person makes a request of the system along with their evidence. That data is analyzed according to the system’s current rules, with reference to the precedents that define the parameters of jurisprudence. The output is the judgment of the court. We think it a miscarriage of justice if the judge is making a personal decision, or if a juror is holding a grudge. We want objectivity: for the system to operate smoothly, enacting the laws as our elected representatives intended them, and using previous cases as our guide for consistency.
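Stated that way, the analogy is literally algorithmic: rules in, facts in, judgment out, with each judgment feeding back into the rule base. A toy sketch of the idea, with rules and facts invented for illustration:

```python
# Toy "legal system as a program": evidence is evaluated against the
# current rules, a judgment is output, and (stare decisis) the decision
# itself joins the body of precedent. Rules and facts are invented.

rules = [
    (lambda f: f["speed"] > f["limit"] + 40, "licence suspended"),
    (lambda f: f["speed"] > f["limit"], "fine"),
]
precedents = []

def judge(facts):
    for condition, outcome in rules + precedents:
        if condition(facts):
            decision = outcome
            break
    else:
        decision = "dismissed"
    # The decision becomes precedent for identical future fact patterns.
    precedents.append((lambda f, prior=dict(facts): f == prior, decision))
    return decision

print(judge({"speed": 120, "limit": 100}))  # fine
print(judge({"speed": 90, "limit": 100}))   # dismissed
```

The sketch also shows where “normative retrenchment” comes from: nothing in the loop ever revises an old rule.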

Legislators write the code, and the regulatory and judicial systems run it.

The judicial branch is a part of our democratic government, and that level is where I believe we have made some serious errors in our design of superintelligence. The state is an artificial intelligence that provides infrastructure to society, but occasionally protects its interests by killing people. If we were to design a computer system and say that, about once a decade, it will lead to the deaths of thousands of people when it believes it is threatened, would anyone allow the project to proceed?

To clarify: I am not advocating anarchism in this argument. On the contrary, I count myself among those who believe that the governmental superintelligence that distributes healthcare, funds bridges, rescues people from natural disasters, and delivers all those other benefits is worth the occasional war, if it is fought for ethical reasons (see Just War Theory). Most of us agree that our great works are worth defending by force. Most of us are not extreme pacifists, because we also agree that self-defence is ethical: if I am attacked in the street, I am justified in using force even though it is the function of the police to protect me. In this way, we should look at the potential for harm of an Artificial Intelligence in the context of the bargain we have already, and happily, made.

I am saying that we have created a superintelligence with flaws in its design that lead to the deaths of thousands, and that we morally accept this. How might we preserve the benefits of the state while ensuring that people won’t be killed by it? That would be an ethically superior system, and it is exactly the design problem facing the A.I. programmer.

Another experiment we have run in artificial intelligence, one with moral failings of its own, is the corporation. A corporation is a legal entity, an agent, with an internal organization and rules determined by its mandate and the laws of the country in which it operates. We think of it as a way for people to cooperate and work together: for the benefit of employees through their salaries, for the good of a world that benefits from the corporation’s services, and for the benefit of the shareholders who invested in the idea and draw the profits.

However great the purposes of the corporation may be, the one given primacy is the shareholders’ interest in financial return. That is where we went wrong. Employees are exploited, because those who don’t like how they’re treated can leave and be replaced. The world need not benefit from the corporation’s existence so long as the corporation can somehow bamboozle, cajole or manipulate people outside the system into giving it their money. Even if the corporation creates something of value to some people, the world may be at a net loss if its operation creates pollution, destruction or death overall. To cooperate is great. To create a product that could not exist without a team is great. To be employed and paid is great. However, none of those things is part of the fiduciary duty of the directors of your corporation. The duty is to deliver a profit to the shareholders. In the immortal line of Mother in Alien: “All other priorities are rescinded.”

How we approach the programming of a computer algorithm to prevent it from destroying humanity should be informed by our mistakes in programming these other artificial intelligences. First, we must acknowledge that a collective intelligence operating within a ruleset, whether a government or a corporation, is itself an artificial intelligence. Then we must examine the outcomes produced by governments, corporations and any other examples we can identify, and ask whether the rules of such entities could be improved for the good of humanity.

Or, you could take the Canadian legal system and use that to program your robot. In any case, it will take more than Three Laws.

Developmental Spiral

Based on Ray Kurzweil’s theories of accelerating technological change, the following table illustrates an approximate timeline:

Homo Sapiens: 100,000 years
Tribal: 40,000 years
Agricultural: 7,000 years
Empires: 2,500 years
Scientific: 380 years (1500-1770)
Industrial: 180 years (1770-1950)
Information: 70 years (1950-2020)
Symbiotic: 30 years (2020-2050)
Autonomy: 10 years (2050-2060)
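The timeline is roughly geometric: each era lasts a fraction (very roughly, between a sixth and a half) of the one before. A quick check of the successive ratios, using the durations above:

```python
# Ratio of each era's length to the previous era's, from the table above.
eras = [
    ("Homo Sapiens", 100_000), ("Tribal", 40_000), ("Agricultural", 7_000),
    ("Empires", 2_500), ("Scientific", 380), ("Industrial", 180),
    ("Information", 70), ("Symbiotic", 30), ("Autonomy", 10),
]
for (_, prev), (name, dur) in zip(eras, eras[1:]):
    print(f"{name:<12} {dur / prev:.2f}x the length of the era before")
```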

Relative growth rates in computer systems are remarkably stable:

  • memory outgrows processors
  • processors outgrow wired bandwidth
  • bandwidth outgrows wireless capabilities

So we expect:

  1. New storage tech first and fastest
  2. New processors and applications
  3. New wired communication hardware and protocols
  4. New wireless technology and improvements in transfer rates last and slowest