Can your lawyer code? Change is on the way and hybrid legal advisers are a must — Australaw.com.au

In this blog post by Ashley Kelso, a variety of new technologies are surveyed, along with their legal ramifications.

There has been a lot of talk about how ‘blockchain’, ‘machine learning’, and ‘Internet of Things’ (IoT) will ‘disrupt’ everything. How it will automate jobs, challenge existing business models, and overhaul the legal industry. But there haven’t been a lot of tangible examples of how this will occur. This article aims to provide a snapshot […]

via Can your lawyer code? Change is on the way and hybrid legal advisers are a must — Australaw.com.au

  • Automating Contract Administration: Singling out Ridley and Flux (a Google company), Kelso argues that intelligent project-management software will not only optimize schedules and payments but also automate the imposition of penalties for non-compliance. See also Kira, eBrevia, Thomson Reuters Contract Express, Beagle.ai, Legal Robot, and more.
  • Automating Copyright Infringement Detection: Veredictum is used as an example of software that automatically detects unauthorized republication of videos; it uses a blockchain as its registration mechanism. Other examples of automated copyright infringement detection include YouTube’s Content ID system, Vobile, Attributor, Audible Magic, and Gracenote (see this Wired.com article for a full discussion). The idea of using a blockchain to register artwork is discussed in this TechCrunch article, specifically concerning Verisart.
  • Property and Security Registries: Discussing Ethereum smart contracts
  • Self-Executing Contracts
  • Wills as Smart Contracts
  • Machine Learning as an Assistant

Kelso concludes that “Advisers need to be able to understand how these innovations work so that they can anticipate and deal with the issues that will arise, as established business models are threatened and legislators struggle to keep up.”

I couldn’t agree more.


Robot Lawyers and Judges

You may have heard about the machine learning tool that helped 160,000 parking tickets get dismissed:  AI lawyer shoots down 160,000 parking tickets

Legal Perspective

A seminar by Benjamin Alarie, Osler Chair in Business Law at the University of Toronto, was summarized in Machines Regulating Humans: Will Algorithms Become Law? (slaw.ca, 13 February, 2017). Alarie began with the following video, which illustrates the advances in technology and software over a short period of time:

The pace of technological evolution is accelerating, and although the current state of A.I. may seem impressive (computers winning at Jeopardy!, Go, and poker), he argues that it is only comparable to the 1976 versions of the racing games in the video.

Alarie’s company, Blue J Legal, has achieved a 90% accuracy rate for fact-based dispute resolution using machine learning to predict outcomes. “These determinations are expensive and take a lot of time for humans to make but machine learning algorithms can consider the entire corpus of case law in minutes.”
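
Blue J Legal has not published its internals, but the general shape of this kind of fact-based prediction is a supervised classifier trained on the facts and outcomes of past cases. Here is a minimal sketch, with entirely hypothetical features and data; nothing in it reflects Blue J Legal’s actual system:

```python
# Hypothetical sketch of fact-based outcome prediction. The features, data,
# and model are invented for illustration and do not describe Blue J Legal.
from sklearn.ensemble import RandomForestClassifier

# Each row encodes the facts of one decided case, e.g. for an
# employee-vs-contractor dispute:
# [controls_own_hours, supplies_own_tools, bears_financial_risk, years_of_service]
past_case_facts = [
    [1, 1, 1, 2],
    [0, 0, 0, 10],
    [1, 0, 1, 1],
    [0, 1, 0, 7],
]
past_outcomes = ["contractor", "employee", "contractor", "employee"]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(past_case_facts, past_outcomes)

# A new fact pattern goes in; a predicted outcome and a rough confidence come out.
new_case = [[1, 1, 0, 3]]
print(model.predict(new_case), model.predict_proba(new_case))
```

The 90% figure presumably comes from testing a model of this general kind against decided cases it was not trained on.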

The article generated a lot of questions, so F. Tim Knight posted some discussion points in a recent update.

To me, the most interesting issue raised was the worry about “normative retrenchment”: locking in the status quo, or intractable codification, however you phrase it. In other words, if an algorithm looks through the corpus of case law and renders a judgment in a case, it will likely continue to make the same judgment in similar cases, because each decision (including its own) becomes a precedent. This is the nature of stare decisis (judges should follow precedent), but a judge can always render a decision based on their own analysis, creating new precedent. So far, judges have been human. When judges on the Supreme Court disagree, it is not because they were exposed to different case law or facts; it is because they disagree about the justice of the outcome and the precedent it will set. New judges are selected from lawyers, each a human brought up in a slightly different cultural context and family, and each raised in a society different from their parents’ because of the very advance of technology at the heart of this discussion. An algorithm that only looks at facts and case law cannot weigh some jurisprudence more heavily than others based on its life experience.

Knight answers these types of criticisms with an appeal to the potential sophistication of the software and to its current accuracy. If it can already deliver 90% accuracy, then either the judges are rendering verdicts like robots, or the algorithm is predicting outcomes that include the human factors. And it will only get better at noticing nuances and borderline situations that require deeper analysis and human judgement.

When evaluating whether an algorithm can decide important matters such as criminal charges, it is important not to hold it to a standard of perfection, because even human judges make mistakes. Some of those mistakes are very human tendencies: racial bias, gender discrimination, the economically conservative bent of the profession, and corruption by bribery or coercion. Some of these may be subtle effects that tip the scale without providing sufficient grounds for appeal. An algorithm, even if it makes mistakes of its own because it lacks a human understanding of motivation and the other faculties of judgement, would nevertheless reduce the human-type errors.

Technologist Perspective

In October, futurist Ray Kurzweil‘s site hosted an article entitled “Will AI replace judges and lawyers?” (kurzweilai.net, 25 October, 2016).

The article mainly reports on a  University College London paper published in PeerJ Computer Science in which a machine learning algorithm had predicted the judicial decisions of the European Court of Human Rights (ECtHR) with 79% accuracy.

Going from 79% in October 2016 to 90% in February 2017 on fact-based decisions looks like a strong upward trajectory.

Accountability

Artificial intelligence and the law (techcrunch.com, 28 January 2017) contemplated the fact that machines that use reinforcement learning are not really “programmed” by their creators, which might break the chain of liability between the coder and the algorithm’s behaviour. If it is impossible for the programmer to foresee problems, then they may not be found negligent in tort law.

The most interesting snippet from this article is buried at the bottom: In the U.K. the House of Commons Science and Technology Committee stated, “While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now.” The document also mentions the need for “accountability” when it comes to deployed AI and the associated consequences.

Technology Assisted Review

TAR or Technology Assisted Review is another form of machine learning that is currently deployed, and already lowering lawyers’ fees. An article on Quartz took a look at the possible consequences in Lawyers are being replaced by machines that read (qz.com, 25 January, 2017).

A machine learning algorithm can be custom-trained on a case-by-case basis by a few lawyers reading a small selection of possible evidence to decide its relevance.

Rather than having many lawyers read a million documents, a few review a percentage of the possible evidence, and predictive coding technology uses those answers to guide a computer review of the rest. A few lawyers review and assess evidence and then train machines, rather than trained lawyers eyeballing all the documents.
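
Under the hood, predictive coding is essentially text classification. Here is a minimal sketch of the workflow, assuming a scikit-learn pipeline and invented documents; real TAR platforms are far more elaborate:

```python
# Minimal predictive-coding sketch: lawyers label a small sample, a classifier
# ranks the rest of the corpus by predicted relevance. Documents and labels
# are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

reviewed_docs = [
    "email discussing the disputed supply contract",
    "lunch menu for the office holiday party",
    "memo on late-delivery penalties under the contract",
    "newsletter about the company softball team",
]
relevant = [1, 0, 1, 0]  # labels assigned by the reviewing lawyers

vectorizer = TfidfVectorizer()
classifier = LogisticRegression().fit(vectorizer.fit_transform(reviewed_docs), relevant)

# Score the unreviewed corpus; the likeliest-relevant documents surface first.
unreviewed_docs = [
    "draft amendment to the supply contract payment schedule",
    "reminder to submit parking passes by Friday",
]
scores = classifier.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

In practice the labelled sample is iterated: the lawyers review the documents the model is least certain about, the model is retrained, and the cycle repeats until the rankings stabilize.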

An industry is growing around TAR; even a legal temp agency, Update Legal, is now providing A.I. temps for electronic discovery.

Then again…

Ars Technica published an exposé about legal software that has contributed to over two dozen rights violations: Lawyers: New court software is so awful it’s getting people wrongly arrested (arstechnica.com, 2 December, 2016).

Apparently, in some parts of the United States, case management software is updated with court proceedings and relied upon by law enforcement officers to coordinate arrests and releases and to issue court summonses. Due to formatting errors, people have been arrested on warrants that had already been recalled and have wrongfully spent up to 20 days in jail. The decisions of judges must be entered by clerks, and there is currently a backlog of 12,000 files that grows by 200-300 per day.

 

The A.I. Threat

The first thing to mention is that this post reflects the current state of affairs as of February 2017.

MIRI, the Machine Intelligence Research Institute, does foundational mathematical research to ensure that smarter-than-human artificial intelligence has a positive impact. In their recent newsletter, they pointed to two articles that give contrasting opinions.

The World Economic Forum’s 2017 Global Risks Report includes a discussion of AI safety: “given the possibility of an AGI working out how to improve itself into a superintelligence, it may be prudent – or even morally obligatory – to consider potentially feasible scenarios, and how serious or even existential threats may be avoided.”

In contrast, the JASON advisory group reports to the US Department of Defense that “the claimed ‘existential threats’ posed by AI seem at best uninformed,” adding, “In the midst of an AI revolution, there are no present signs of any corresponding revolution in AGI.”

One of the best books on the topic is Superintelligence by Nick Bostrom, who confronts many different scenarios in which computer research may lead to a system capable of greater-than-human intelligence. He stakes out a middle ground between the alarmists who foresee every danger of A.I. and those who assure us that such things happen only in science fiction. Bostrom’s rational approach is to ask: regardless of the timeline, will the eventual development of A.I. that surpasses human capacity lead to dangers? If so, how much danger, and what ought we to do about it?

My position on artificial intelligence is that we have already met and failed at this problem, and we should acknowledge those failures in order to learn from them. We have not been destroyed by Terminators yet, so let me explain.

Nick Bostrom describes many ways in which a superintelligent computer system might work. The two obvious ones that spring to mind are Terminator-style humanoid robots with computer brains and the Skynet that created them: a self-contained software program running over several banks of servers or across a network. Today, we have systems like IBM’s Watson and Google’s DeepMind that solve problems within their own specialities, but very few androids that can even stand well.

However, this is not the end of the list. A collective superintelligence is an amalgamation of less intelligent systems. An army of ants displays a collective superintelligence (compared to any individual ant), also called “emergent intelligence”. Amazon and Netflix could not recommend products to you unless thousands of other shoppers were tracked. Much of machine learning is statistical analysis of the collective behaviour of groups, because regardless of individuals’ thoughts and opinions, the wisdom is in the trend.

A computer system might be a collective superintelligence because no single computer is faster or holds more data than a human, but networked together they can outperform people. This is how Watson won Jeopardy!.
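
The “wisdom in the trend” behind the Amazon and Netflix examples is easiest to see in a toy recommender: no single shopper’s history says much, but correlations across many shoppers do. Neither company publishes its algorithms, so the sketch below is only the textbook idea, with a made-up ratings matrix:

```python
# Toy collaborative filtering: recommend a product by weighting other shoppers'
# ratings by how similar their history is to yours. Ratings are made up.
import numpy as np

# Rows are shoppers, columns are products; 0 means "not yet rated/bought".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 0, 2],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def recommend(user: int, ratings: np.ndarray) -> int:
    # Cosine similarity between this shopper and every other shopper.
    norms = np.linalg.norm(ratings, axis=1)
    sims = ratings @ ratings[user] / (norms * norms[user] + 1e-9)
    sims[user] = 0.0
    # Predicted score per product: similarity-weighted average of others' ratings.
    predicted = sims @ ratings / (sims.sum() + 1e-9)
    predicted[ratings[user] > 0] = -np.inf  # only suggest unrated products
    return int(np.argmax(predicted))

print("Recommend product index:", recommend(0, ratings))
```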

One of the reasons I joined the field of law is that I believe the legal system is one of the successful artificial intelligences we have created. It is superintelligent because the judgment of a single human could not outpace the efficiency of the justice system, and for questions of great import, we ask a panel of experienced judges to weigh in, and they often disagree.

Like the system in the Chinese Room thought experiment, the legal system shouldn’t actually require people to run it. A well-written legal system should perform like mathematics, but for ethics. Given a set of laws and regulations, a person makes a request of the system along with their evidence. That data is analyzed according to the system’s current rules, with reference to the precedents that define the parameters of jurisprudence. The output is the judgment of the court. We think it is a miscarriage of justice if the judge is making a personal decision, or if a juror is holding a grudge. We want objectivity, and we want the system to operate smoothly, enacting the laws as our elected representatives intended them and using previous cases as a guide for consistency.

Legislators write the code, and the regulatory and judicial systems run it.
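
To make the analogy concrete, here is a deliberately toy “statute as code” sketch. The rule, the carve-out, and the facts are all invented and resemble no real statute; the point is only that, in principle, evidence goes in, rules are applied, and a judgment comes out:

```python
# Toy illustration of "legislators write code, the system runs it".
# The rule, the defence, and the facts are all invented.

def speeding_judgment(facts: dict) -> str:
    """Apply a hypothetical traffic rule to a set of facts and return a judgment."""
    limit = 50  # the "statute": a speed limit chosen by the legislature
    if facts["speed"] <= limit:
        return "not liable"
    if facts.get("medical_emergency"):
        return "liable, penalty waived"  # a defence, also written in advance
    return f"liable, fine of ${10 * (facts['speed'] - limit)}"

# A party submits evidence; the output is deterministic, not personal.
print(speeding_judgment({"speed": 45}))
print(speeding_judgment({"speed": 72}))
print(speeding_judgment({"speed": 72, "medical_emergency": True}))
```

Real law is nowhere near this clean, of course; the point of the analogy is the aspiration to objectivity, not a claim that statutes reduce to if-statements.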

The judicial branch is part of our democratic government, and that is where I believe we have made some serious errors in our design of superintelligences. The state is an artificial intelligence that provides infrastructure to society but occasionally protects its interests by killing people. If we were to design a computer system and say that, about once a decade, it will lead to the deaths of thousands of people when it believes it is threatened, would anyone allow the project to proceed?

To clarify: I am not advocating anarchism with this argument. On the contrary, I include myself among those who believe that the governmental superintelligence that distributes healthcare, funds bridges, rescues people from natural disasters, and provides all of those other benefits is worth the occasional war, if that war is fought for ethical reasons (see Just War Theory). Most of us agree that our great works are worth defending by force. Most of us are not extreme pacifists, because we also agree that self-defence is ethical: if I am attacked in the street, I am justified in using force even though it is the function of the police to protect me. In this way, we should look at the potential for harm of an artificial intelligence in the context of the bargain we have already made, and happily so.

I say that we have created a superintelligence whose design flaws lead to the deaths of thousands, and we morally accept that. How might we reimagine the benefits of the state while ensuring that people won’t be killed by it? That would be an ethically superior system. That is the design problem facing the A.I. programmer.

Another experiment we have run in artificial intelligence, and one that contains moral failings, is the corporation. A corporation is a legal entity, an agent, with an internal organization and rules determined by its mandate and the laws of the country in which it operates. We think of it as a way for people to cooperate and work together: for the benefit of the employees through their salaries, for the good of the world that benefits from the corporation’s services, and for the benefit of the shareholders who invested in the idea and draw the profits.

However great the purposes of the corporation may be, the one given primacy is the shareholders’ interest in a financial return. That is where we went wrong. Employees are exploited, because those who don’t like how they’re treated can leave and be replaced. The world need not benefit from the corporation’s existence as long as the corporation can somehow bamboozle, cajole or manipulate people outside the system into giving it their money. Even if the corporation creates something of value to some people, the world may be at a net loss if the corporation’s operations create pollution, destruction or death overall. To cooperate is great. To create a product that could not exist without a team is great. To be employed and paid is great. However, none of those things is part of the fiduciary duty of the directors of your corporation. The duty is to deliver a profit to the shareholders. In the immortal line of Mother in Alien: “All other priorities are rescinded.”

How we approach the programming of a computer algorithm to prevent it from destroying humanity should be informed by our mistakes in programming these other artificial intelligences. First, we must acknowledge that a collective intelligence operating within a ruleset (a government or a corporation) is equivalent to an artificial intelligence. Then, we must examine the outcomes produced by governments, corporations and any other examples we can identify, and ask whether the rules of such entities could be improved for the good of humanity.

Or, you could take the Canadian legal system and use that to program your robot. In any case, it will take more than Three Laws.

Replacing Myself

This category will contain blog posts about the ways in which I am replacing myself, and humanity is replacing itself.

To some people, the major threat of Artificial Intelligence is the growing range of jobs that automation can accomplish, which displaces human workers. It is both a rational product of capitalism and a problem that cannot be solved by free markets. It is in the best interest of a corporation to replace workers with software and robotics if those systems come at a lower cost. However, workers need to be employed in order to earn money to live. Therefore, the better and cheaper the automation, the more people will fall into poverty and starvation.

There is one simple solution that most of the world is too scared to attempt: the guaranteed minimum income, also known as the negative income tax or universal basic income. Everyone in society (or everyone who can show that their income is below a certain threshold) is given a monthly stipend from the government. I believe that opposition to and fear of this proposal is based on an immediate reaction, and that people don’t listen to the rational economic arguments that justify the balance sheet, the evidence gathered from experiments around the world showing that it can work, or the acclaimed economists (including staunch free-market capitalists like Milton Friedman) who argue for it.
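
The arithmetic of a negative income tax is simple enough to show in a few lines. The threshold and phase-out rate below are invented for illustration, not a policy proposal:

```python
# Negative income tax in its simplest form: below a chosen threshold, the
# government pays back a fraction of the shortfall. Parameters are invented.

def net_income(earned: float, threshold: float = 20_000, rate: float = 0.5) -> float:
    """Income after the negative-income-tax transfer (ordinary taxes ignored)."""
    transfer = rate * max(0.0, threshold - earned)
    return earned + transfer

for earned in (0, 10_000, 20_000, 40_000):
    print(f"earned {earned:>6} -> net {net_income(earned):>6.0f}")
```

Note the design property that makes economists like it: every extra dollar earned still raises net income, so there is no welfare cliff that punishes taking a job.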

Regardless of how we solve the seemingly inevitable economic problem of widespread technological unemployment, I look forward to a society in which all of the floors are vacuumed by Roombas, all banking is done electronically, and all transportation is self-driving, because in that world we are safer, more efficient, and have a lot more free time for creating art and enjoying life.

And I know I won’t live forever. I will be obsolete too. Therefore, I will also use this category to discuss my son as a replacement for myself.

Is Phishing Not Computer Fraud?

The word “hacking” has become a household term. “Hackers” are villains in the news media, and when companies are “hacked” they lose public confidence. But “hack” is not a computer-specific word. Any workaround that does not use the accepted, standard or legal method is a hack. That’s why it used to be an insult to call an amateur a “hack”: they are not doing their job in the traditional way, but instead use shortcuts.

Only a minority of hacking activities are illegal. Many people implement “life hacks”, such as tying two extension cords together to keep them from unplugging when pulled; I personally visit Lifehacker.com on a weekly basis. A hack will either use an item in a way that was not intended, make an item last longer than originally designed, or let you do a task faster than the traditional means.

That is why we call cybercriminals “hackers”: they enter a computer system without using a standard interface or the proper authority. The common person thinks a hacker uses fancy computer programs to circumvent password protections or to disguise their location and identity, because that is how it works in the movies.

Some hackers, however, enter those protected systems using the standard procedure, because they have obtained a password. One way to obtain a password or other information is phishing, a kind of social engineering or “human hacking”.

Customer service reps and assistants are often targeted by hackers because they are trying to be helpful to hundreds of legitimate requests every day. Once in a while, a call (or email) comes in that looks like a person in trouble who just needs a bit of information (a password, a bank account number, a contact name) to solve their problem. So the hacker gets the information they need without using any programming skills, just a good scam.

Imran Ahmad of Miller Thomson LLP analyzed the case of Apache Corp. v. Great American Insurance Company in an article on Mondaq.com entitled “Does Your Insurance Cover Phishing Scam? It May Not.”

The 5th Circuit reversed the district court’s finding made in favor of Apache. It found that the loss was not the result of a “direct” use of a computer so as to be covered under the “computer-fraud” provision.

Mr. Ahmad makes the case that:

This case underscores the narrow judicial interpretation that may be afforded to crime policy “computer fraud” provisions which effectively constrains the computer-fraud coverage to “hacking” type events. From a Canadian perspective, the question is whether Canadian courts and insurance companies would similarly interpret “computer fraud” provisions of insurance policies if faced with a similar set of facts as in Apache.

Clearly, it is important for a business to have insurance against hacking and other breaches of cybersecurity. However, just because a fraudster uses email does not make it a case of computer fraud; it remains general fraud.

In related news, on January 19, 2017, the Canadian Securities Administrators (CSA) published Multilateral Staff Notice 51-347 — Disclosure of cyber security risks and incidents, which was explained by Bradley J. Freedman and Joseph DiPonio in their article “Cyber Risk Management — Regulatory Guidance For Reporting Issuers’ Continuous Disclosure Of Cybersecurity Risks And Incidents” (Mondaq.com).

Under this regime, companies that issue shares to the public are expected to comply with continuous disclosure by issuing quarterly and annual reports, and to report cybersecurity breaches promptly by issuing press releases.

So, are they going to report phishing? The employee who accidentally leaked the information wouldn’t know they’d done something wrong until the information was used for theft or fraud, and sometimes not even then. We mainly know that phishing works because security experts have demonstrated it, not because any specific security breach could be shown to be due to a phishing scam.

This is how hackers hack you using simple social engineering

Hmm. I feel like I missed an opportunity for a pun about holding your breath under water or fishing because it’s mostly sitting in a boat waiting.

 

 

The Trolley Problem

Google’s self-driving cars have prompted questions about the Trolley Problem: How will an AI decide whether to save its passengers or avoid property damage, or will it be able to make a decision at all, freezing between the alternatives?

This question in philosophy is called the Trolley Problem. From Wikipedia:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the correct choice?

Well, how do humans deal with those problems? The whole point of this thought experiment is to explore the ethical dilemma and figure out how a person ought to decide. It’s a tough problem because different people have different answers. In other words, how one makes ethical decisions is already problematic for humans, so why would we need an answer for an AI?

In Canadian Pacific Ltd. v. Gill, [1973] S.C.R. 654, at p. 665,  Mr. Justice Spence, for the court, said:
It is trite law that faced with a sudden emergency for the creation of which the driver is not responsible he cannot be held to a standard of conduct which one sitting in the calmness of a courtroom later might determine was the best course.

If your negligent driving (or bad weather) causes a death, you’ll be tried in court and may be convicted of manslaughter. In the case of property damage, you or your insurer will have to pay for it. The question of “how does one make that decision” is not even worth asking: we all know that in the split second before an accident there is hardly time for rational deliberation by a human, and reflexes predominate. The only real question for the law is: who is liable?

Liability in self-driving cars may rest with one of the following:

  • The driver. As with cruise control, the driver is still responsible for the car’s behaviour and the AI is “helping”. This is the most likely first step in autonomous driving, and it is already occurring with parking-assist, collision-detection and other safety features.
  • The company that created the AI, whether the AI was sold with the car or added later. Will a company take on that level of liability? It will if it is sure of its product. The first company to take such a risk would see rapid adoption, since drivers would gladly offload the risk and the cost of insurance. If the AI driver is especially competent, accidents would decrease rapidly as adoption increased, creating further pressure to keep human interference out of efficient and safe AI driving.
  • The car’s AI itself. In a scenario where the law can effectively punish an AI as if it were a person, the AI is considered the driver. This is unlikely until a means of punishment can be devised that is meaningful to an AI, which may be impossible.

This issue was raised on a recent episode of This Week in Law, which included some interesting discussion of the Trolley Problem and related issues. Sam Abuelsamid gave a ridiculous example of two cars with different numbers of occupants approaching from opposite directions on a one-lane road “where one car will have to run off the road”. Where in the world is this road, and why wouldn’t the cars slow to a pace where they could avoid each other without colliding or killing passengers? Again, I reverse the question: “How would human drivers deal with this problem, and where would the liability lie?” Answer those questions, and you’ll answer the question of how a robot driver should address it.

It was interesting to hear that sophisticated sensors such as radar and lidar have problems in bad weather, too. My immediate reaction, though, is that humans have great problems driving in bad weather too. Drivers have only their eyes and brain to make judgements in a car. If your argument is that self-driving cars are dangerous because the radar or camera gets caked with snow, let’s examine whether you can drive when your windshield is caked with snow. That’s why we have windshield wipers and defrosters. This is not an insurmountable problem… it’s not even a long-term problem.

In fact, a few minutes later they discuss V2V, or vehicle-to-vehicle communication, in which a car indicates to all other vehicles nearby where it is, what dangers it detects, and how to coordinate traffic. If that technology is widely implemented, driverless cars will enhance their perception in ways that humans could never accomplish: near-instant collaboration and verification.
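
As a rough illustration, a V2V broadcast might look something like the following. The field names are invented; real standards (for example SAE’s Basic Safety Message over DSRC) define their own schemas:

```python
# Hypothetical V2V broadcast. Field names are invented for illustration;
# real V2V standards define their own message formats.
from dataclasses import dataclass, asdict
from typing import Optional
import json
import time

@dataclass
class V2VMessage:
    vehicle_id: str
    latitude: float
    longitude: float
    speed_kmh: float
    heading_deg: float
    hazard: Optional[str]  # e.g. "black ice ahead"; None if nothing to report
    timestamp: float

def broadcast(message: V2VMessage) -> bytes:
    """Serialize the message for transmission to nearby vehicles."""
    return json.dumps(asdict(message)).encode("utf-8")

msg = V2VMessage("CAR-042", 49.2827, -123.1207, 58.0, 90.0,
                 "black ice ahead", time.time())
print(broadcast(msg))
```

A car receiving a stream of such messages knows about hazards and trajectories that its own sensors, let alone a human driver’s eyes, could never see.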

My bigger complaint is that I always get frustrated by people like Ben Snitkoff, who raised an argument I’ll paraphrase as “I like driving. I don’t want to live in a future where I can’t drive wherever I want.”

Ben, I’m sure that some people really liked riding horses into town. You can still ride horses on people’s private property, and you can even see professionals race horses, and you can take lessons. What you can’t do is ride a horse in a city or a highway. Ask any modern horse-lover whether they’d like to be on the road with cars, or if they think it’s worth it to drive out to the countryside to gallop in freedom. That’s where you’ll be in 20 years with your gear-shifting inefficient human driving. I’m sure you’ll love it. We’re not going to forego the millions of lives saved and the enormous efficiency gained for your enjoyment.

I’m confident we’ll figure out the liability issue. I don’t believe the sensor problem will cause more than a hiccup in the design process. And I think we’ll soon be taking driverless taxis just like I get on a driverless Skytrain here in Vancouver. Someone probably said human train conductors would always be necessary…

Developmental Spiral

Based on Ray Kurzweil’s theories of accelerating technological change, the following table illustrates an approximate timeline:

  • Homo Sapiens: 100,000 years
  • Tribal: 40,000 years
  • Agricultural: 7,000 years
  • Empires: 2,500 years
  • Scientific: 270 years (1500-1770)
  • Industrial: 180 years (1770-1950)
  • Information: 70 years (1950-2020)
  • Symbiotic: 30 years (2020-2050)
  • Autonomy: 10 years (2050-2060)

Relative growth rates in computer systems are remarkably stable:

  • memory outgrows processors
  • processors outgrow wired bandwidth
  • bandwidth outgrows wireless capabilities

So we expect:

  1. New storage tech first and fastest
  2. New processors and applications
  3. New wired communication hardware and protocols
  4. New wireless technology and improvements in transfer rates last and slowest