17

I'm confused about why people claim that the current legal system cannot handle wrongdoing by algorithms that involve artificial intelligence. The claim is that it is impossible to determine who is liable for the wrongdoing. This claim seems strange: isn't it obvious that the company that developed the algorithm is liable for any issues the algorithm causes?

Can someone explain where the current legal system/framework/laws break down when it comes to any harm caused by artificial intelligence?

Laurel
Qwerty

11 Answers

42

Real-world situations are rarely so clear-cut

Let's say, hypothetically, that I'm in the driver's seat of a car. The company told me that the car has "Full Self Driving" capabilities based on some sort of artificial intelligence, though they also said that these capabilities "are intended for use with a fully attentive driver, who has their hands on the wheel and is prepared to take over at any moment." Let's say I was not fully attentive at a moment when the car's AI decides to swerve into oncoming traffic, and I fail to grab the wheel and prevent that.

Who's at fault? Is it the car company's fault for a bug that caused that? Is it my fault for failing to be fully attentive? Is it some combination of the two?

But wait, it can get more complicated: maybe the car company argues that they couldn't have reasonably anticipated the situation that caused it: maybe the lines were incorrectly drawn on the road, and indicated that the road continued in that direction. Maybe I argue that the car swerved quickly enough that even a fully attentive driver couldn't have recovered.

These and more are all facts that need to be sorted out in a trial. There's no way to simply say that "any issues that this algorithm caused" are entirely the company's fault.

In other words, this isn't really the legal system "breaking down"—it's working as intended, trying to figure out whose fault an event actually was. The law just isn't very developed yet as to the process a court would follow to assign liability.

Ryan M
20

Error is not always Wrongdoing

The OP writes of "wrongdoings of algorithms". To me a "wrongdoing" is something that would be criminal, or at least involve civil liability. But not everything that goes wrong involves "wrongdoing" in this sense. Sometimes a bad outcome is simply an accident, and no one is liable, civilly or criminally.

That said, no algorithm today is, to the best of my understanding, anywhere near the point where we can speak of the wrongdoing of an algorithm. Algorithms make errors; people do wrong. If there is liability when an error results in damage, it may be the responsibility of the maker of the algorithm, or of some individual who worked on the algorithm, or of the user who was running the algorithm, or perhaps of some other person who was in some way involved. Determining which is one thing that the legal system must do, and it isn't always easy.

In many ways this is simply the problem of liability for the failure of a manufactured product, and is no different just because an algorithm or an AI is involved, although the situation may be more complex.

The OP writes in the question:

isn't it obvious that it's the company who developed the algorithm that is liable for any issues that this algorithm caused?

The law could take that approach, but in many cases it would work an injustice. So it doesn't take that approach, at least not in the US or the UK, and I don't think it does in any current jurisdiction.

Let's consider a simple case, with a manufactured product but no algorithm at all. A carpenter is using a hammer to nail boards to studs in the framing of a house. S/he lifts the hammer back, and the head comes loose, flies away, and hits another worker in the head, injuring or killing that worker. Is the manufacturer of the hammer liable?

Possibly. If the making of the hammer used inferior parts or techniques, was not up to normal professional standards, and such a failure was reasonably foreseeable, then quite possibly the answer is "Yes". If the hammer was well-made and the failure was an unpredictable accident, then "No".

Was the carpenter liable? If s/he used an improper tool, perhaps using a light tack-hammer where a much heavier one was called for, stressing it so that failure was foreseeable, then perhaps "Yes". If the carpenter acted as a reasonable and skilled person would, then probably "No". In both cases foreseeability, and working to a reasonable standard of care, are key aspects for whether liability is imposed.

Possibly neither the manufacturer nor the carpenter is liable. The accident could be ruled exactly that, an accident with no liability from anyone.

Now let us take the case of the self-driving car. The car's AI makes an error, failing to curve when the road curves, driving into oncoming traffic, causing a crash and injuries. Is the company that made the car (or the subcontractor that wrote the software) liable? It will depend on the detailed facts.

Having a road curve is a very foreseeable situation, so the designers should have included handling it in the design, and should have tested such situations on a number of simulated and actual roads. The quality of both design and testing efforts would be evaluated in detail in assessing whether there is liability here. If the specific cause of the error can be found, that will help. If the cause was a misinterpreted or incorrect road marking, it will be a question whether such markings are foreseeable, as they probably are. If a human driver was supposed to be monitoring and taking control in the case of an error, that driver might have partial liability.

But the law does not simply throw up its hands and say there is no way to determine cause or liability. It will attempt to apply the same general principles that it does to possible liability for accidents involving a hammer, a train, or any other manufactured product. The details will differ with the jurisdiction, and the specific facts, but whether the accident was reasonably foreseeable, and the degree of care used by the manufacturer will usually be important.

Bryan
David Siegel
8

Any system that might endanger people must be reasonably safe. In the UK the risk of an accident must be "As Low As Reasonably Practicable" (ALARP). In the event of an accident it is for the manufacturer and/or the operator to show that this was the case; otherwise they will be liable.

In practice "ALARP" is too vague a standard, so various industries have established standards which are more detailed. It is generally considered that if these standards are followed then the risk due to the system is considered ALARP. For automotive electronics that standard is ISO 26262. Other industries have similar ones. Most of these are descendants of IEC 61508. Following a standard like this is not a complete get-out-of-jail-free card, but its not far off.

The basic concept behind these standards is to start with a systematic enquiry into the question "what could possibly go wrong?". For a system that controls a car, one of those things would be "car is steered into opposing traffic". A process called "HAZOP" is used to create a list of these things and to rate them by severity. So "unnecessary emergency braking" would have a lower severity than "car is steered into opposing traffic" because the former is much less likely to kill someone (though it's obviously not impossible). A diligent HAZOP should identify all the foreseeable ways in which an accident might occur, and hence could be used as a defence to show that something outside the HAZOP was not reasonably foreseeable.
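To make that concrete: a hazard register is essentially structured data, where each entry carries a description and ratings that feed a risk ranking. The sketch below is a simplified, hypothetical illustration in Python; the rating scales and the scoring rule are invented for the example and are not the actual ISO 26262 classification scheme, which uses a defined table of severity, exposure and controllability.

    from dataclasses import dataclass

    @dataclass
    class Hazard:
        """One entry in a hypothetical HAZOP-style hazard register."""
        description: str
        severity: int    # 1 = minor, 3 = potentially fatal (illustrative scale)
        likelihood: int  # 1 = rare, 3 = frequent (illustrative scale)

        @property
        def risk_score(self) -> int:
            # Toy risk ranking; real standards use a defined classification
            # table rather than a simple product like this.
            return self.severity * self.likelihood

    # Hazards taken from the examples in this answer
    register = [
        Hazard("Car is steered into opposing traffic", severity=3, likelihood=2),
        Hazard("Unnecessary emergency braking", severity=2, likelihood=2),
    ]

    # Rank hazards so the most serious ones drive the design and test effort
    for h in sorted(register, key=lambda h: h.risk_score, reverse=True):
        print(f"{h.risk_score}: {h.description}")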

Once the hazard list is identified the system must be designed to manage these hazards. As part of the system design process the designers will consider how each component might fail and how that would affect the system as a whole. For instance, if the wiper motor fails, what happens to the car? The system designers have a number of options, including redundant systems, fallback systems with less capability but more reliability, and of course human intervention.

Human intervention tends to be the difficult one. It's very tempting to simply hand off ultimate authority to the operator and then declare that the responsibility for accidents therefore lies with the operator. It is also not going to work. A system consists of people, processes and technology, not just the technology. Any safe system design must include the failure modes of the human operators just as much as the sensors, actuators and other technical components. Humans are known to be bad at paying attention to routine matters that don't require interaction, so a system design which assumes a permanently alert operator waiting to override an error is not going to be safe, and its manufacturer isn't going to be able to escape liability merely by pointing to an inattentive operator.

AI doesn't change anything fundamental about this. If you want to put an AI in charge of a car you need to consider its failure modes and the ways in which they might lead to an accident, just like you would for any other component in the system.

Conventionally the implementation of safety systems is based on detailed requirements (which are themselves analysed against the hazard list for safety) followed by careful implementation and testing to ensure that the resulting system meets these requirements. (I'm skipping lots of irrelevant detail here). AIs that use trained neural networks don't have the traceability from detailed requirement to implementation, so safety assurance is much more of a headache. Right now we don't have any standards for this kind of work. So when an AI kills someone by mistake we don't have a proper framework for judging liability. Were the risks ALARP or not? Ultimately it would be for a jury to decide by looking at the evidence.

So to answer the question, no it is not impossible to determine who is responsible when an AI causes an accident, but until we have enough experience to write the appropriate standards it's going to be a toss-up. A lay jury is not going to have the expertise and knowledge to judge whether an AI was safe enough, so a legal case is likely to descend into a battle of the experts.

Because of the uncertainty this creates there are some specific legal frameworks designed to manage the liability for such systems.

One of the issues with any safety problem is that rules like ALARP can make the best the enemy of the good. Suppose you have a situation with 10 accidents per year. You introduce a new safety system, and now there are only 3 accidents per year. However 2 of those accidents are directly caused by the new system. Is the system safe? AI autopilots for cars might well be safer than the human drivers. In this case it seems a little unreasonable to hand liability for these systems to the manufacturers instead of the driver and their insurers, where it currently lies.
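To put rough numbers on that trade-off (using only the hypothetical figures above):

    # Hypothetical figures from the paragraph above
    accidents_before = 10   # per year, without the safety system
    accidents_after = 3     # per year, with the safety system
    caused_by_system = 2    # of those 3, directly caused by the new system

    net_reduction = accidents_before - accidents_after   # 7 fewer per year
    print(f"Accidents avoided per year: {net_reduction}")
    print(f"Accidents the system itself causes per year: {caused_by_system}")
    # The system is a clear net benefit, yet each of those 2 accidents
    # raises the question of whether the manufacturer is liable for it.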

Paul Johnson
7

The involvement of AI is irrelevant to liability; what matters is what actually happened.

Let's say a company uses an AI during their hiring process to eliminate potential employees who are not suitable. And it takes a year until someone figures out that the AI systematically rejects all candidates with black skin colour or names that seem to indicate black heritage. That's illegal discrimination, and the company is 100% liable. "Our AI got it wrong" is no excuse.
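A hypothetical audit that surfaces this kind of pattern is nothing more exotic than a per-group rate comparison; the records and group labels below are invented purely for illustration.

    # Hypothetical audit of the hiring AI's output: compare rejection rates
    # by group.  The data and labels are invented for illustration.
    from collections import defaultdict

    applications = [
        {"group": "A", "rejected": False},
        {"group": "A", "rejected": True},
        {"group": "A", "rejected": False},
        {"group": "B", "rejected": True},
        {"group": "B", "rejected": True},
        {"group": "B", "rejected": True},
    ]

    totals = defaultdict(int)
    rejections = defaultdict(int)
    for record in applications:
        totals[record["group"]] += 1
        rejections[record["group"]] += record["rejected"]

    for group in sorted(totals):
        print(f"Group {group}: rejection rate {rejections[group] / totals[group]:.0%}")
    # A large, persistent gap between groups is exactly the kind of evidence
    # that supports a discrimination claim, however the model works internally.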

About intent: The intent in this case seems to be to use what the AI says as the basis for your decisions. So if the decisions of the AI amount to racist discrimination, then that's on you.

gnasher729
5

No Robot Law needed

I am going to stick my head out here and say that (whilst not wanting to minimise the dangers and difficulties of technology) I think the terms in which this concern is often voiced mainly reflect the fact that most legal commentators have little scientific understanding and are thinking in terms of the androids depicted in science fiction, which are portrayed as conscious moral beings.

The more algorithms are used to make decisions, the more complicated it may be to assign legal responsibility. But, as others have pointed out, the law does that on a case-by-case basis, applying existing legal principles such as the tort of negligence to the evidence which the parties bring to the court.

There is no immediate likelihood that the absence of so-called "robot law" will result in the human race being enslaved by HAL in league with R2D2 and 3PO.

Nemo
1

The legal system may "break down" by evaluating AI in inappropriate ways.

The whole idea of AI is to design computers that would react in ways not fully predictable or controllable by the programmers.

Of course we're still far from the point where an AI can be considered a moral agent independent of its creator (if that ever happens), and the creator can predict and control its behaviour to a large degree by training and evaluating it on certain example data.

So really the legal system shouldn't be asking:

  • Would a competent and moral human reasonably have performed the same wrongdoing? nor
  • Was this due to an error or omission in the AI design?

What one should be asking is:

  • Does the AI perform better than a human would in general? and
  • If there was an error or omission in the AI design that caused the wrongdoing, was that due to clear negligence or malice?

The former puts an unreasonable burden on the AI creator and severely stifles the progress of AI. The latter holds people to a standard of responsibly designing AI that is an improvement to the way we currently do things. We don't want a standard of "perfect or nothing", as that would pretty much prevent the use of AI altogether (just like we don't want to require that medicine has no side effects, as that would pretty much prevent the use of medicine altogether).

I'm not entirely up to date on lawsuits involving AI, but I wouldn't say "our current legal system breaks down" there. At worst you'd need to get a higher court to rule that AI should be evaluated in the above way. Responsibly designed AI systems are already compared against whatever the existing process is during the design phase (where possible), so this data would be available. Lawsuits about something like medicine already provide a somewhat similar precedent: a single individual suffering a documented or unknown side effect would generally not lead to a successful lawsuit against a medicine manufacturer (negligence or malice would typically be required for that). There are, though, rules and laws around medicine that require a certain amount of transparency, trials and approval by independent agencies, which is perhaps something that should happen with AI as well (at least if we're talking about life-or-death situations).

NotThatGuy
1

It does

Your assumptions are simply wrong. Legal systems do handle the liability of the operator of a machine, making the operator fully liable for the damage the machine does to other parties.

The fact that the machine can work autonomously doesn't limit your liability; in some circumstances it can even increase it. For example, you're not only fully liable for the damage your dog does to another party, but you can also face criminal charges for not having enough control over its actions.

0

I noticed a lot of confusion in the comments on one of the leading answers regarding what can be expected. My intended comment quickly ballooned into something that was more like an answer, even though it is not directly about the law.

In software development, we have the idea of a V&V cycle: verification and validation. In very rolled up terms, "verification" is proving that the software works according to some specification, and "validation" is proving that the specification solves a problem. The former is a very procedural process, while the latter is notoriously fluid.
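A minimal sketch of the distinction, with an invented specification and function: verification asks whether the code meets the written spec, while validation asks whether the spec itself is the right one.

    # Hypothetical spec: "the controller shall command a stop when an obstacle
    # is within 5 metres."  Verification checks the code against that sentence.
    BRAKING_DISTANCE_M = 5.0  # value taken from the (invented) specification

    def should_stop(obstacle_distance_m: float) -> bool:
        """Command a stop when an obstacle is within the specified distance."""
        return obstacle_distance_m <= BRAKING_DISTANCE_M

    # Verification: does the implementation satisfy the written spec?
    assert should_stop(4.9) is True
    assert should_stop(5.1) is False

    # Validation cannot be reduced to an assert: is 5 metres actually a safe
    # threshold at motorway speeds, in rain, with worn brakes?  That is a
    # judgement about the spec itself, not about the code.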

If a company failed their own verification tests, it is easy to pin all of the fault on the company: there was a straightforward procedure, and they didn't follow it. However, validation is trickier. In the case of a self-driving car, 0% of all drivers have considered the spec for the car. That's not a "round down to 0"; that's "none of them". Heck, they may not even legally be able to get their hands on the spec. The actual legal question of responsibility for an accident would come down to how well the driver understood what the car was validated against, and how well they could be expected to understand it.

If I grab a fork from my silverware drawer, it doesn't come with a "WARNING: Do not stick repeatedly in eye" sticker. The society around these forks expects that I understand the consequences of such an action well enough that the fault is placed squarely on me.

A can of engine starting fluid (Diethyl ether... nasty stuff) comes with a warning not to incinerate the can. I think society generally understands that throwing a can of engine starter in the fire is a bad idea, but we find the legal need to put a warning on the can, just to say "we told you!"

My new car comes with an owner's manual full of warnings. I read them all, but it would be a major challenge to remember all of them simultaneously in the event of an accident. I am certain some set of lawyers identified which warnings they thought needed to be presented in a prominent manner and which could just go in the manual.

A taxi cab comes with a bunch of warnings written in an eyebleed font, but really most of this is handled with the company/driver assuming all liability for getting you there safely. Even then, there are unhandled cases. It's on a passenger to know not to suddenly distract a driver in a key moment by putting their hands over the driver's eyes.

The challenge with self driving cars is that the capability is a moving target. What understanding is expected of the user is changing. Our usual approach of putting the warnings in the right places is changing. And that makes any of these things tricky.

Ryan M
Cort Ammon
0

The legal system can handle it. The problem is the liability. In particular, it's quite likely that liability would shift to car makers. Currently, if you cause an accident, Ford or Toyota have essentially no involvement (there's the rare case of manufacturing defects, but those are extremely uncommon and usually won't even be considered unless there's a sudden string of similar, inexplicable crashes).

If the AI is driving, on the other hand - especially as we approach Level 5 (i.e. fully self-driving, with no human intervention or even attentiveness required) - it will get harder and harder to place liability on the vehicle's owner. But of course car manufacturers don't want that (and really, neither do you - the cost of new cars would increase to offset the liability risk). The problem the legal system "can't handle" is that nobody is willing to take responsibility.

It gets more complicated though. If you were riding in a brand new AI-driven car and got in an accident, not a problem. But what happens in 10 years when your car is getting on and you haven't bothered properly maintaining it? How bald do your tires have to get before liability shifts to you? The "obvious" answer is that car makers should just have their vehicles monitor their condition and provide increasing warnings to the owners (and finally self-disable until repaired) but locking somebody out of a thing they paid tens of thousands of dollars for is also a bit of a legal quagmire, EULA or not.

Others have pointed out that automated systems have been in use (and have caused harm) for decades at this point, but self-driving cars are a bit different: they're intended to be marketed to average consumers. Industrial machinery in a factory is quite a different legal story and usually comes with contracts and other documentation that firmly establish liability boundaries, maintenance regimes, etc. That's not something you can reasonably expect the average Jim or Karen to comprehend, even if a pushy car salesman gets them to sign a 14-page document stating that they do. It would be nice if everyone had the level of legal competence and patience to comprehend such things, but few of us do.

Erin
-1

"isn't it obvious that it's the company who developed the algorithm that is liable for any issues that this algorithm caused"

It appears that you are assuming that AI algorithms are actually programmed by a person working for the company. What most people call AI is based on "machine learning". The defining feature of machine learning is that it is trained rather than programmed by a person.

Most of the time companies don't develop the machine learning system from scratch. Instead they use a generic third-party library and then train it by exposing it to example data. If the resulting model passes the tests well enough on all the test data and in real-world trials, then it gets deployed into a product; if not, they train it more.
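In code, that workflow is roughly a train / evaluate / gate loop around a third-party library. The sketch below uses scikit-learn only as a stand-in for "a generic third-party library"; the dataset and the acceptance threshold are invented for illustration.

    # Sketch of the train / evaluate / deploy-or-retrain loop described above.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, random_state=0)  # stand-in data
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    accuracy = model.score(X_test, y_test)

    ACCEPTANCE_THRESHOLD = 0.95  # hypothetical release criterion
    if accuracy >= ACCEPTANCE_THRESHOLD:
        print(f"Accuracy {accuracy:.2%}: deploy the model")
    else:
        print(f"Accuracy {accuracy:.2%}: keep training / gather more data")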

The developers of the third party generic AI library probably can't be found at fault if they had no involvement in what third parties would use it for.

As for the company deploying the AI system...
Practically speaking, it's impossible to test a system for all possible combinations of inputs. Suppose I make 100 systems and run them each for one year in their intended environment with no failures; I could say that I had 100 years' worth of test data. But then the system gets deployed and there is a failure. Who would fault a manufacturer for a failure at that point? Should they have run 1,000 years, or a million years, of test data? At some point they have to stop and declare it safe.
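One rough way to quantify "how much test data is enough" is the statistics of failure-free testing: with zero failures over n unit-years of operation, the so-called rule of three puts the approximate 95% upper confidence bound on the failure rate at about 3/n per unit-year. A small worked example using the hypothetical 100 system-years above:

    # Rough bound after failure-free testing ("rule of three"): with zero
    # failures in n independent unit-years, the ~95% upper confidence bound
    # on the per-year failure rate is roughly 3 / n.
    def failure_rate_upper_bound(failure_free_unit_years: float) -> float:
        return 3.0 / failure_free_unit_years

    for years in (100, 1_000, 1_000_000):
        bound = failure_rate_upper_bound(years)
        print(f"{years:>9} failure-free system-years -> "
              f"failure rate below ~{bound:.6f} per system-year")

    # 100 failure-free system-years only demonstrates "probably fewer than
    # about 3 failures per 100 system-years", which is why a single failure
    # after deployment is not, by itself, proof of negligence.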

Also, AI is getting better all the time. It seems likely that not too far in the future we may have generalized AI that can do almost anything a person can. At that point trying to predict what that system would do is going to be about as difficult as predicting what other humans would do.

At least in the United States, parents are not usually held liable for the actions of their adult offspring, and it would seem that a similar principle would apply to the relationship between an AI development company and a generalized AI product (assuming it was adequately tested before release).

user4574
-2

I just published an article on this topic. The answer is not in the law. The issue is devising a mandatory insurance scheme for whenever AI is used. That would take care of accidental harm caused by AI in the same way that car insurance takes care of accidental car crashes. Insurance is the issue, not the law.