Can your lawyer code? Change is on the way and hybrid legal advisers are a must — Australaw.com.au

In this blog post by Ashley Kelso, a variety of new technologies are surveyed, along with their legal ramifications.

There has been a lot of talk about how ‘blockchain’, ‘machine learning’, and ‘Internet of Things’ (IoT) will ‘disrupt’ everything. How it will automate jobs, challenge existing business models, and overhaul the legal industry. But there haven’t been a lot of tangible examples of how this will occur. This article aims to provide a snapshot […]

via Can your lawyer code? Change is on the way and hybrid legal advisers are a must — Australaw.com.au

  • Automating Contract Administration: Singling out Ridley and Flux (a Google company), intelligent project-management software will optimize not only schedules and payments but also the imposition of penalties for non-compliance. See also Kira, eBrevia, Thomson Reuters Contract Express, Beagle.ai, Legal Robot, and more.
  • Automating Copyright Infringement Detection: Veredictum is used as an example of software that automatically detects unauthorized republication of videos, using a blockchain as its registration mechanism. Other examples of automated copyright infringement detection include YouTube’s Content ID system, Vobile, Attributor, Audible Magic, and Gracenote (see this Wired.com article for a full discussion). The idea of using a blockchain to register artwork is discussed in this TechCrunch article, specifically concerning Verisart.
  • Property and Security Registries: Discussing Ethereum smart contracts
  • Self-Executing Contracts (see the sketch after this list)
  • Wills as Smart Contracts
  • Machine Learning as an Assistant
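The last few bullets all turn on the same idea: contract terms expressed as code that executes itself. As a rough illustration, in Python rather than an actual smart-contract language like Solidity, and with entirely made-up terms, here is what a self-executing milestone payment with a lateness penalty might look like:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MilestoneContract:
    """Toy model of a self-executing contract: pay the agreed price on
    completion, automatically deducting a fixed penalty per day late."""
    price: float
    deadline: date
    penalty_per_day: float

    def settle(self, completed_on: date) -> float:
        """Compute the payout the moment completion is reported;
        neither party gets to argue about the penalty after the fact."""
        days_late = max(0, (completed_on - self.deadline).days)
        return max(0.0, self.price - days_late * self.penalty_per_day)

contract = MilestoneContract(price=100_000.0,
                             deadline=date(2017, 6, 1),
                             penalty_per_day=500.0)
print(contract.settle(date(2017, 6, 11)))  # ten days late -> 95000.0
```

On a platform like Ethereum, the settlement step would be a transaction validated by the whole network rather than a method call on one party’s computer, which is what removes the need to trust (or sue) the counterparty over the arithmetic.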

Kelso concludes that “Advisers need to be able to understand how these innovations work so that they can anticipate and deal with the issues that will arise, as established business models are threatened and legislators struggle to keep up.”

I couldn’t agree more.

Is Phishing Not Computer Fraud?

The word “hacking” has become a household term. “Hackers” are villains in the news media, and when companies are “hacked” they lose public confidence. But “hack” is not a computer-specific word: any workaround that bypasses the accepted, standard, or legal method is a hack. That is why it used to be an insult to call an amateur a “hack”: they were not doing the job in the traditional way, but taking shortcuts.

Only a minority of hacking activities are illegal. Many people implement “life hacks”, such as tying two extension cords together so they don’t unplug when pulled. I personally visit Lifehacker.com on a weekly basis. A hack will use an item in a way that was not intended, make it last longer than originally designed, or let you do a task faster than the traditional method.

That is why we call cybercriminals “hackers”: they are not entering a computer system through a standard interface or with proper authority. The common perception, fed by the movies, is that a hacker uses fancy computer programs to circumvent password protections or to disguise their location and identity.

Some hackers, however, enter those protected systems using the standard procedure, because they have obtained a password. One way to obtain a password or other sensitive information is phishing, a kind of social engineering or “human hacking”.

Customer service reps and assistants are often targeted because they helpfully handle hundreds of legitimate requests every day. Once in a while, a call (or email) comes in that looks like a person in trouble who just needs a bit of information (a password, a bank account number, a contact name) to solve their problem. The hacker gets what they need without using any programming skills, just a good scam.
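To make the contrast with “real” hacking concrete, here is a minimal sketch of the kind of naive red-flag scoring a mail filter might apply to such a message. Every heuristic and threshold below is my own illustrative assumption, not any vendor’s actual algorithm:

```python
# Illustrative phishing red flags only; real filters use far richer signals.
URGENT_WORDS = {"urgent", "immediately", "suspended", "verify"}
SENSITIVE_ASKS = {"password", "account number", "wire transfer"}

def phishing_score(sender: str, reply_to: str, body: str) -> int:
    """Crude score: higher means more phishing-like."""
    score = 0
    # A Reply-To domain that differs from the sender's is a classic tell.
    if sender.rsplit("@", 1)[-1].lower() != reply_to.rsplit("@", 1)[-1].lower():
        score += 2
    text = body.lower()
    score += sum(1 for word in URGENT_WORDS if word in text)      # manufactured urgency
    score += 2 * sum(1 for ask in SENSITIVE_ASKS if ask in text)  # requests for secrets
    return score

print(phishing_score(
    sender="support@yourbank.example",
    reply_to="help@yourbank-verify.example",
    body="URGENT: verify your password immediately or your account is suspended",
))  # -> 8
```

Notice what is missing: no exploit, no circumvented technical control. The winning move is a persuasive message, which is exactly why a court can read such a loss as falling outside a “computer-fraud” provision.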

Imran Ahmad of Miller Thomson LLP analyzed the case of Apache Corp. v. Great American Insurance Company in an article on Mondaq.com entitled “Does Your Insurance Cover Phishing Scam? It May Not.”

The 5th Circuit reversed the district court’s finding in favor of Apache, holding that the loss was not the result of a “direct” use of a computer and so was not covered under the “computer-fraud” provision.

Mr. Ahmad makes the case that:

This case underscores the narrow judicial interpretation that may be afforded to crime policy “computer fraud” provisions which effectively constrains the computer-fraud coverage to “hacking” type events. From a Canadian perspective, the question is whether Canadian courts and insurance companies would similarly interpret “computer fraud” provisions of insurance policies if faced with a similar set of facts as in Apache.

Clearly, it is important for a business to have insurance against hacking and other breaches of cybersecurity. However, just because a fraudster uses email does not make it a case of computer fraud; it remains general fraud.

In related news, on January 19, 2017, the Canadian Securities Administrators (CSA) published Multilateral Staff Notice 51-347 — Disclosure of cyber security risks and incidents, which was explained by Bradley J. Freedman and Joseph DiPonio in their article “Cyber Risk Management — Regulatory Guidance For Reporting Issuers’ Continuous Disclosure Of Cybersecurity Risks And Incidents” (Mondaq.com).

Under this regime, companies that issue shares to the public are expected to meet continuous disclosure obligations through quarterly and annual reports, as well as prompt disclosure of cybersecurity breaches by press release.

So, are they going to report phishing? The employee who accidentally leaked the information wouldn’t know they’d done something wrong until the information was used for theft or fraud, and sometimes not even then. We mainly know that phishing works because security experts have demonstrated it, not because specific security breaches can readily be traced back to a phishing scam.

This is how hackers hack you using simple social engineering

Hmm. I feel like I missed an opportunity for a pun about holding your breath under water, or about fishing, because it’s mostly sitting in a boat waiting.

The Trolley Problem

Google’s self-driving cars have prompted questions about the Trolley Problem: How will an AI decide whether to save its passengers or avoid property damage, or will it be able to make a decision at all, freezing between the alternatives?

This question in philosophy is called the Trolley Problem. From Wikipedia:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the correct choice?

Well, how do humans deal with this problem? The whole point of the thought experiment is to explore the ethical dilemma and figure out how a person ought to decide. It is a hard problem precisely because different people give different answers. In other words, making ethical decisions is already problematic for humans, so why would we demand a definitive answer from an AI?
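As for the worry that an AI might “freeze between the alternatives”: any planner that minimizes an explicit cost function always returns some action. The genuinely contested part is choosing the weights, which is an ethical question, not an engineering one. A toy sketch, with weights I made up:

```python
# Possible actions and their (assumed) outcomes in the classic trolley setup.
ACTIONS = {
    "do_nothing": {"deaths": 5, "property_damage": 0},
    "pull_lever": {"deaths": 1, "property_damage": 0},
}

def choose(weights: dict) -> str:
    """Pick the action with the lowest weighted cost; there is no 'freeze'."""
    def cost(outcome: dict) -> float:
        return sum(weights[key] * value for key, value in outcome.items())
    return min(ACTIONS, key=lambda action: cost(ACTIONS[action]))

# One (utilitarian) weighting among the many a legislator might debate.
print(choose({"deaths": 1.0, "property_damage": 0.01}))  # -> pull_lever
```

The code is trivial; deciding what the weights should be is the part philosophers have argued about for decades, and we don’t hold human drivers to that standard either.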

In Canadian Pacific Ltd. v. Gill, [1973] S.C.R. 654, at p. 665,  Mr. Justice Spence, for the court, said:
It is trite law that faced with a sudden emergency for the creation of which the driver is not responsible he cannot be held to a standard of conduct which one sitting in the calmness of a courtroom later might determine was the best course.

If your negligent driving causes a death, even in bad weather, you’ll be tried in court and may be convicted of manslaughter. In the case of property damage, you or your insurer will have to pay for it. The question of “how does one make that decision” is not even worth asking: we all know that in the split-second before an accident there is hardly time for rational deliberation, and reflexes predominate. The only real question for the law is: who is liable?

Liability for a self-driving car may rest with one of the following:

  • The driver. As with cruise control, the driver is still responsible for the car’s behaviour and the AI is “helping”. This is the most likely first step in autonomous driving, and it is already occurring with parking assist, collision detection, and other safety features.
  • The company that created the AI, whether that AI was sold with the car or added later. Will a company take on that level of liability? It will if it is sure of its product. The first company to take such a risk would see rapid adoption, since drivers would gladly offload both the risk and the cost of insurance. If the AI driver proves especially competent, accidents would fall as adoption rose, creating further pressure to keep human interference out of efficient and safe AI driving.
  • The car’s AI. In a scenario where laws can effectively punish an AI as if it were a person, the AI is considered the driver. This is unlikely until a means of punishment can be devised that is meaningful to an AI, which may be impossible.

This issue was raised on a recent episode of This Week in Law, with some interesting discussion of the Trolley Problem and related issues. Sam Abuelsamid gave a ridiculous example of two cars with different numbers of occupants approaching from opposite directions on a one-lane road “where one car will have to run off the road”. Where in the world is this road, and why wouldn’t the cars slow to a pace at which they could avoid each other without colliding or killing passengers? Again, I reverse the question: how would human drivers deal with this problem, and where would the liability lie? Answer those questions, and you’ll answer the question of how a robot driver should address it.

It was interesting to hear that sophisticated sensors such as radar and lidar have problems in bad weather, too. My immediate reaction, though, is: humans have great problems driving in bad weather too. Drivers have only their eyes and brains to make judgements in a car. If your argument is that self-driving cars are dangerous because the radar or camera gets caked with snow, ask whether you can drive when your windshield is caked with snow. That’s why we have windshield wipers and defrosters. This is not an insurmountable problem… it’s not even a long-term problem.

In fact, a few minutes later they discuss V2V, or vehicle-to-vehicle communication, in which each car indicates to all nearby vehicles where it is, what dangers it detects, and how to coordinate traffic. If that technology is widely implemented, driverless cars will enhance their perception in ways humans never could: near-instant collaboration and verification.
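To see why this is so powerful, note that a V2V message is just a small structured broadcast that every nearby receiver can merge into its own picture of the road. The sketch below uses field names of my own invention, not the actual format of a standard such as SAE J2735’s Basic Safety Message:

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class SafetyBroadcast:
    """Toy V2V message: who I am, where I am, and what hazard I see."""
    vehicle_id: str
    lat: float
    lon: float
    speed_mps: float
    heading_deg: float
    hazard: Optional[str] = None  # e.g. "ice_patch", "stalled_vehicle"
    timestamp: float = 0.0

    def encode(self) -> bytes:
        """Serialize for broadcast over the short-range radio link."""
        return json.dumps(asdict(self)).encode("utf-8")

msg = SafetyBroadcast("car-42", lat=49.2827, lon=-123.1207,
                      speed_mps=13.4, heading_deg=90.0,
                      hazard="ice_patch", timestamp=time.time())
print(msg.encode())
```

A human driver gets none of this; a connected car would get it continuously from every vehicle within radio range.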

My bigger complaint: I always get frustrated by people like Ben Snitkoff, who raised an argument I’ll paraphrase as “I like driving. I don’t want to live in a future where I can’t drive wherever I want.”

Ben, I’m sure some people really liked riding horses into town. You can still ride horses on private property, you can watch professionals race them, and you can take lessons. What you can’t do is ride a horse in a city or on a highway. Ask any modern horse-lover whether they’d like to share the road with cars, or whether they think it’s worth driving out to the countryside to gallop in freedom. That’s where you’ll be in 20 years with your gear-shifting, inefficient human driving. I’m sure you’ll love it. We’re not going to forgo the millions of lives saved and the enormous efficiency gained for your enjoyment.

I’m confident we’ll figure out the liability issue. I don’t believe the sensor problem will cause more than a hiccup in the design process. And I think we’ll soon be taking driverless taxis, just as I ride the driverless SkyTrain here in Vancouver. Someone probably once said human train conductors would always be necessary…