Blog post: Autonomous driving, accidents and fatalities…where does responsibility lie?

News | Posted on Friday 8 March 2019

The recent news that Uber isn’t criminally liable for the fatal crash in Tempe raises interesting questions. Read our blog post of two halves – ethical and legal.

In March 2018 an Uber Technologies Inc autonomous car (a modified Volvo XC90) in computer control mode struck and killed a pedestrian in Tempe, Arizona. A preliminary investigation by the US National Transportation Safety Board (NTSB) indicated that the car’s systems first detected the pedestrian, who was wheeling a bicycle, about 6 seconds before impact, but did not determine that avoiding action was needed until 1.3 seconds before impact. The car’s systems did not initiate emergency braking, and the operator of the vehicle did not apply the brakes until after the impact. Videos from the car suggest that the operator was not focused on the task of monitoring the road for obstacles in order to avoid accidents.
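To put those timings in perspective, a rough back-of-the-envelope calculation helps. The sketch below is purely illustrative: the 40 mph travel speed is our assumption for the sake of the arithmetic, not a figure taken from the NTSB preliminary report.

```python
# Back-of-the-envelope distances for the Tempe timeline.
# ASSUMPTION: the 40 mph travel speed is an illustrative figure chosen
# for this sketch; it is not taken from the NTSB preliminary report.
MPH_TO_MS = 0.44704

speed_ms = 40 * MPH_TO_MS    # ~17.9 m/s at the assumed 40 mph

detection_s = 6.0            # pedestrian first detected (seconds before impact)
decision_s = 1.3             # avoiding action deemed necessary (seconds before impact)

print(f"Travelled after first detection: {speed_ms * detection_s:.0f} m")   # ~107 m
print(f"Travelled after braking decision: {speed_ms * decision_s:.0f} m")   # ~23 m

# At the assumed speed the car covers roughly 107 m between first detection
# and impact, but only about 23 m after the system decides action is needed:
# a very small margin even if emergency braking had been initiated.
```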

In March 2019, it was reported that Uber would not face criminal liability for the accident. The notion of liability is complex and varies between jurisdictions. However, in general terms, criminal liability would arise where there has been negligence – in this case, a negligent act by Uber. There is also the notion of civil liability, which arises where a person (which might be a company, since a company is a legal person) does not exercise due diligence.

It might be that the vehicle operator is prosecuted for negligence – the press report does not discuss this – but if not, where does liability lie? Indeed, does it lie anywhere?

Below we provide two complementary views on the issues raised – the first focusing on ethical issues and how these might influence legislation in the future, the second considering the legal aspects.


An ethical perspective

By Professor John McDermid and Zoe Porter, University of York

A possible unintended consequence of introducing autonomous systems is the creation of “responsibility gaps” — which can be characterised simply as a situation where nobody has enough control over the system’s actions to assume responsibility for them [12].

It may yet be that the operator of the Uber vehicle will be deemed to have liability — but for now consider the possibility that the operator is found to be blameless. This would mean that no one was liable for the fatality (Volvo’s own safety systems had been turned off, so liability should not fall on Volvo).

Morally, this seems very unsatisfactory.

Whilst there is no intent to initiate a “witch hunt”, it would seem strange, to say the least, if the introduction of autonomous systems meant that normal concepts of product liability — where manufacturers (and others) are held responsible for injuries caused by products they sell — were rendered irrelevant or inapplicable.

Is it not ethically undesirable to deploy a system that can cause harms for which no one is responsible?

Of course, it is possible that the operator will be found liable. That would not be a responsibility gap, but it might still be problematic.

Discussions with experts in the automotive sector and with cognitive psychologists suggest that someone working in “monitoring mode” (such as the Uber operator) can only concentrate on the task for 10–30 minutes at most. In Tempe, the operator had been at the wheel for 19 minutes and was already on the second loop of the test circuit.

Further, work by Volvo testing drivers’ reactions to situations where they needed to initiate emergency braking found interesting and relevant results. The drivers were told that the autonomous car they were in would not respond to emergency situations, and that they needed to be vigilant and apply the brakes themselves. In these experiments, Volvo found that one third of the drivers responded promptly to the emergency; one third applied the brakes, but only after a delay, waiting to see “what the autonomy would do”; and the final third did not apply them at all, not wanting to “interfere with” the autonomous systems. Of course, this is just one data point — although the experiment was well conducted and reported — and it is consistent with expert opinion.

Morally, this also seems unsatisfactory. It points to a violation of the “ought implies can” principle in ethics: an agent is only obliged to perform an action that it is possible for him or her to perform. Should we be asking someone to take responsibility for a task that we know they are highly likely to be unable to perform? And should we do so where life-and-death actions are concerned?

We might also ask whether it is appropriate for the legal framework to admit situations where no one is held liable for events that the public (society) would deem unacceptable. Thus, there seem to be two key messages for regulators and legislators:

  • Use the concept of “responsibility gaps” as a criterion for checking regulations and legislation — if the laws or rules mean that it would not be possible to “fix responsibility” for an act that society would find morally reprehensible, then consider ways in which they can be reframed to remove or minimise those gaps;
  • Review any situation where responsibility falls on someone operating in “monitoring mode” and seek to minimise such situations, or to introduce requirements or guidelines to automate such functions (if that can be done adequately) or to provide effective prompts to operators (recall that there were no prompts in the Tempe case).

This isn’t a panacea, but it might help to avoid legal or regulatory frameworks that society finds unacceptable and that would thus impede the introduction of beneficial technologies. It may be relevant to the UK Law Commission, which is currently considering legislation on autonomous vehicles.


A legal perspective

By Phillip Morgan, University of York

Prosecutors in Arizona have dropped the criminal case against Uber arising from the crash that killed Elaine Herzberg.

Whilst this means that no criminal prosecution will be brought against Uber itself, it does not mean that Uber will escape all forms of legal liability for Elaine Herzberg’s death.

The Uber employee operating the vehicle may face criminal prosecution for manslaughter for the inattentiveness that resulted in Herzberg’s death. Apart from the driver, it is unlikely that any other member of Uber’s staff, such as the design team or other members of the testing team, committed a criminal act.

Whilst a company such as Uber is a legal person, and thus may itself be prosecuted for criminal acts (normally resulting in a fine — you can’t jail a company), the vehicle itself cannot be criminalised, since it is not a legal person. To criminalise a vehicle would be as illogical as criminalising a fridge-freezer.

Whilst the reasons for dropping the criminal case against Uber have not been made public, the reality is that the case was likely to be a non-starter from the outset. Bringing successful criminal prosecutions against companies such as Uber (as opposed to individuals) for corporate manslaughter or gross negligence manslaughter is notoriously difficult. Corporate manslaughter prosecutions in the UK, for example, are rare, since the offence requires fault on the part of senior management.

But crime isn’t the only part of the story. Not all legal wrongs are criminal, and the dropped prosecution doesn’t mean that Uber will walk away from the accident scot-free.

Not all who cause the deaths of others are criminalised, even if they are at fault. The standard of fault required for criminalisation is much higher than the standard of fault required to be civilly liable to pay compensatory damages to the victim. For instance, if a person (A) negligently causes the death of another (B), but A’s level of culpability for B’s death is not such as to make it a criminal act, then it is not a crime and A is not a criminal. However, A will still be civilly liable in the law of tort to pay damages for their negligence to B’s estate and dependents.

In this case Uber is likely to face claims in tort for its own wrongs. These claims are far less likely to be dismissed than a criminal prosecution. The claims are likely to focus on the design and testing processes of the vehicle, and on the training and monitoring of the safety drivers. However, such claims may face a number of uphill battles: many of these principles are untested in the autonomous vehicle context, and pursuing them will require significant technical expertise and evidence.

Some jurisdictions have introduced legislation to deal with the problems encountered in bringing civil claims for autonomous vehicle accidents, such as Tennessee’s Senate Bill 151, and the UK’s Automated and Electric Vehicles Act 2018. However, outside of such jurisdictions claimants will need to rely on the ordinary principles of tort law.

Nevertheless, with the Uber car in the present case there is a short cut to civil liability. To the extent that the operator of the vehicle was at fault in failing to prevent the accident, Uber is likely to be vicariously liable for that wrong. This means that Uber, as the driver’s employer, pays for the driver’s negligence.

Where there is an operator or safety driver, this is likely to give victims a short cut to securing compensation. In such cases a defendant might simply settle the case and pay damages to the claimant without the need for a trial. There’s little point in defending a case you are highly likely to lose, and which will generate much negative publicity. Media reports suggest that Uber has settled the civil claim with Elaine Herzberg’s family. The details of the settlement, as with most settlements, are not public.


Read the blog on Medium