You may have heard about the machine learning tool that helped get 160,000 parking tickets dismissed: AI lawyer shoots down 160,000 parking tickets
Legal Perspective
A seminar by Benjamin Alarie, Osler Chair in Business Law at the University of Toronto, was summarized in Machines Regulating Humans: Will Algorithms Become Law? (slaw.ca, 13 February, 2017). Alarie began with the following video, which illustrates how far technology and software have advanced over a short period of time:
The pace of technological evolution is accelerating, and although the current state of A.I. may seem impressive (computers winning at Jeopardy!, Go and poker), he argues that it is only comparable to the 1976 versions of racing games in the video.
Alarie’s company, Blue J Legal, has achieved a 90% accuracy rate for fact-based dispute resolution using machine learning to predict outcomes. “These determinations are expensive and take a lot of time for humans to make but machine learning algorithms can consider the entire corpus of case law in minutes.”
The article generated a lot of questions, so F. Tim Knight posted some discussion points in a recent update.
To me, the most interesting issue raised was the worry about “normative retrenchment”: locking in the status quo, or intractable codification, however you phrase it. In other words, if an algorithm looks through the corpus of case law and renders a judgment in a case, it will likely continue to render the same judgment in similar cases, because each decision (including its own) becomes a precedent. This is the nature of stare decisis (judges should follow precedent), but a judge can always render a decision based on their own analysis, creating new precedent. So far, judges have been human. When judges on the Supreme Court disagree, it is not because they were exposed to different case law or facts; it is because they disagree about the justice of the outcome and the precedent they anticipate it will set. New judges are selected from lawyers, each of them a human raised in a slightly different cultural context and family, and in a society different from their parents’ because of the very advance of technology at the heart of this discussion. An algorithm that looks only at facts and case law cannot weigh some jurisprudence above the rest based on life experience.
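To make the retrenchment worry concrete, here is a toy simulation of my own (not anything Alarie or Blue J Legal describes): each new case is decided by a majority vote over the most similar precedents, and every machine decision is appended back into the corpus. The initially dominant outcome steadily entrenches itself.

```python
import random

# Toy model of "normative retrenchment": decisions are made by majority vote
# over the nearest precedents, and each decision itself becomes a precedent.
random.seed(0)

# Hypothetical corpus: each precedent is (fact_pattern, outcome).
# Fact patterns are points on a line; outcome 1 = claim upheld, 0 = denied.
corpus = [(random.random(), 1 if random.random() < 0.6 else 0) for _ in range(50)]

def decide(facts, k=5):
    """Decide a new case by the majority outcome of the k nearest precedents."""
    nearest = sorted(corpus, key=lambda p: abs(p[0] - facts))[:k]
    votes = sum(outcome for _, outcome in nearest)
    return 1 if votes * 2 > k else 0

for year in range(10):
    for _ in range(100):                 # 100 new cases per "year"
        facts = random.random()
        outcome = decide(facts)
        corpus.append((facts, outcome))  # the algorithm's ruling becomes precedent
    upheld = sum(o for _, o in corpus) / len(corpus)
    print(f"year {year}: {upheld:.1%} of the corpus favours the initial majority outcome")
```

This is a caricature, of course; a real system would keep ingesting human decisions too. But it shows the mechanism: without some source of independent judgment, the feedback loop only amplifies whatever pattern the corpus started with.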
Knight answers these criticisms with an appeal to the potential sophistication of the software and to its current accuracy. If it can already deliver 90% accuracy, then either the judges are rendering verdicts like robots, or the algorithm is predicting outcomes including the human factors. And it will only get better at noticing nuances and borderline situations that require deeper analysis and human judgment.
When evaluating whether an algorithm can decide important matters such as criminal charges, it is important not to hold it to a standard of perfection, because even human judges make mistakes. Some of those mistakes are very human tendencies: racial bias, gender discrimination, the economically conservative lean of the profession, and corruption by bribery or coercion. Some of these may be subtle effects that tip the scales without leaving sufficient grounds for appeal. An algorithm, despite perhaps making mistakes of its own by lacking a human understanding of motivation and other faculties of judgment, would nevertheless reduce the human-type errors.
Technologist Perspective
In October, futurist Ray Kurzweil’s site hosted an article entitled “Will AI replace judges and lawyers?” (kurzweilai.net, 25 October, 2016).
The article mainly reports on a University College London paper published in PeerJ Computer Science, in which a machine learning algorithm predicted the judicial decisions of the European Court of Human Rights (ECtHR) with 79% accuracy.
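For context, the UCL study built a fairly standard text classifier: n-gram features extracted from the text of the judgments feeding a linear support vector machine, with accuracy measured on held-out cases. A minimal sketch of that style of pipeline (on placeholder data, not the actual ECtHR corpus) might look like this:

```python
# Hedged sketch of an ECtHR-style prediction pipeline: word n-grams into a
# linear SVM. The case texts and labels below are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder corpus: in the paper, each document is drawn from the text of a
# judgment and the label records whether a violation was found.
cases = [
    ("applicant detained without review for two years", 1),
    ("complaint declared inadmissible, domestic remedies not exhausted", 0),
] * 20  # repeated only so this toy split has enough rows to train on

texts, labels = zip(*cases)
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=0)

# Word n-grams -> linear SVM: the classic text-classification baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(X_train, y_train)

# The paper's "79% accuracy" is this kind of number, measured on held-out cases.
print("held-out accuracy:", model.score(X_test, y_test))
```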
Going from 79% in October 2016 to 90% in February 2017 on fact-based decisions looks like a strong upward trajectory.
Accountability
Artificial intelligence and the law (techcrunch.com, 28 January, 2017) contemplated the fact that machines that use reinforcement learning are not really “programmed” by their creators, which might break the chain of liability between the coder and the algorithm. If it is impossible for the programmer to foresee problems, then they might not be found negligent in tort law.
The most interesting snippet from this article is buried at the bottom: In the U.K. the House of Commons Science and Technology Committee stated, “While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now.” The document also mentions the need for “accountability” when it comes to deployed AI and the associated consequences.
Technology Assisted Review
TAR, or Technology Assisted Review, is another application of machine learning that is already deployed and already lowering lawyers’ fees. An article on Quartz took a look at the possible consequences in Lawyers are being replaced by machines that read (qz.com, 25 January, 2017).
A machine learning algorithm can be custom-trained on a case-by-case basis, with a few lawyers reading a small selection of the possible evidence and deciding its relevance.
Rather than having many lawyers read a million documents, a few review a percentage of the possible evidence, and predictive coding technology uses those answers to guide a computer review of the rest. Instead of trained lawyers eyeballing all the documents, only a few need to review and assess evidence and then train the machines.
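Under the hood, predictive coding is ordinary supervised text classification. A hedged sketch of the workflow, using illustrative document texts and a generic scikit-learn pipeline rather than any vendor’s actual product:

```python
# Predictive coding sketch: lawyers label a small sample, a classifier learns
# from it, and the remaining documents are ranked by predicted relevance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A few documents reviewed by lawyers (1 = relevant to the case, 0 = not).
reviewed_docs = [
    "email discussing the disputed contract terms",
    "cafeteria menu for the week of the merger",
    "memo approving the write-down at issue",
    "holiday party invitation",
]
labels = [1, 0, 1, 0]

# The (in practice, vastly larger) unreviewed corpus.
unreviewed_docs = [
    "draft amendment to the contract schedule",
    "parking garage access instructions",
]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviewed_docs, labels)

# Rank unreviewed documents by predicted probability of relevance, so that any
# human review can start from the most likely evidence.
scores = model.predict_proba(unreviewed_docs)[:, 1]
for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {doc}")
```

Production TAR systems add iterative rounds of review and statistical validation of recall, but the division of labour is the same: lawyers label a sample, the machine ranks the rest.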
An industry is growing around TAR; even a legal temp agency, Update Legal, is now providing A.I. temps for electronic discovery.
Then again…
Ars Technica published an exposé about legal software that has contributed to over two dozen rights violations: Lawyers: New court software is so awful it’s getting people wrongly arrested (arstechnica.com, 2 December, 2016).
Apparently, in some parts of the United States, case management software is updated with court proceedings and relied upon by law enforcement officers to coordinate arrests and releases and to issue court summonses. Due to formatting errors, people have been arrested on warrants that had been recalled, and some have wrongfully spent up to 20 days in jail. The decisions by judges must be entered by clerks, and there is currently a backlog of 12,000 files that grows by 200-300 per day.