Blog post: How safe is safe enough? Building trust in automated driving systems

News | Posted on Monday 8 June 2020

This is the second in a series of blog posts exploring the safety assurance of highly automated driving to accompany a new AAIP report which is free to download on the website.

More than 90% of road accidents can be attributed to human error. Automated driving systems have the potential to make roads significantly safer by reducing the impact of potentially inattentive and unreliable human drivers. Yet, as discussed in my previous blog post, these systems also introduce new classes of risk. By transferring the decision-making function from the driver to the machine, they also raise ethical questions.

Several factors will determine whether or not society at large trusts these systems. The level of residual risk judged acceptable for the new technology will also be weighed against the perceived safety benefits of the function itself.

It is our task as automotive safety engineers to deliver safety arguments for the system that are convincing, objective and sound, and that can be understood and accepted by governing authorities and the public at large.

Safer than a human?

The starting point of any safety argument is some definition of the safety claim that is being made. In other words, how “safe” do we argue the system to be?

In 2016, the German Federal Ministry of Transport and Digital Infrastructure commissioned a report into the ethical considerations of automated driving. A recommendation of the report was that automated driving systems must be shown to perform, on average, better than a human driver in terms of avoiding or mitigating hazardous situations. In some cases, however, performance slightly worse than a human driver may be acceptable, so long as an overall “positive risk balance” is achieved.

A related approach, based on the French principle “Globalement au moins aussi bon” (“globally at least as good”), or GAMAB for short, holds that any new system must be at least as good as the system it replaces. Although superficially this could be used to argue risk equivalence with the average human driver, it could equally well be argued that automated driving systems are not a replacement for the human driver but a fundamentally new technology.

Statistics as a measure of safety

Arguing an overall positive risk balance amounts to an “average utilitarianism” view of introducing the technology. However, this would signal a departure from existing approaches to arguing the functional safety of electrical and electronic systems, and may not be sufficient to ensure societal and legal acceptance when confronted with avoidable accidents and fatalities directly caused by the automated driving system. In the end, risk will always be subjectively perceived through a set of culturally specific filters.

The principle of ALARP (as low as reasonably practicable), or variants thereof, is often used in the regulation of safety-critical systems. The ALARP approach to risk assessment involves demonstrating that the cost of further reducing the risk would be disproportionate to the benefit gained. These judgements are typically made not only on the basis of quantitative assessments but also on an understanding of good engineering practice and existing standards. If it can therefore be argued that applying existing standards and good engineering practice would result in significantly better performance than an average human driver, then a direct comparison with current accident statistics may not be sufficient.
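
To make the ALARP idea more concrete, the sketch below shows one way such a cost-benefit disproportion test could be expressed. This is a minimal illustration only: the function name, the disproportion factor and all of the figures are hypothetical assumptions of mine and are not taken from any standard or from the report.

    # Illustrative ALARP-style check (all names and figures are hypothetical).
    # A further risk-reduction measure should be implemented unless its cost is
    # grossly disproportionate to the benefit of the risk it removes.

    def measure_is_reasonably_practicable(measure_cost: float,
                                          fatalities_avoided_per_year: float,
                                          value_per_avoided_fatality: float,
                                          disproportion_factor: float = 3.0) -> bool:
        """Return True if the extra measure should be implemented under an ALARP-style test."""
        benefit = fatalities_avoided_per_year * value_per_avoided_fatality
        # Only if the cost greatly exceeds the (weighted) benefit can the risk be
        # argued to be already as low as reasonably practicable without the measure.
        return measure_cost <= disproportion_factor * benefit

    # Example with made-up numbers: additional sensor redundancy costing 2,000,000,
    # expected to avoid 0.5 fatalities per year, each valued at 2,000,000.
    print(measure_is_reasonably_practicable(2_000_000, 0.5, 2_000_000))  # True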

In other words, regardless of the statistical probability of a hazardous event happening, we have a duty to minimise risks that could be avoided by applying a more rigorous development process.

Consider the case of an automated driving system that has an increased chance of misclassifying, and therefore colliding with, highway construction workers. Statistically, this event would occur seldom in comparison to the many kilometres of highway where no construction workers are present. Nevertheless, it is unlikely to be tolerated either in a court of law or by society at large.

An additional counter-argument against relying purely on (accident) statistics as a measure of safety is that it would be impossible to gather such statistical evidence upfront, before a system is released. Studies have shown that the equivalent of hundreds of millions or even billions of kilometres of tests would be required to argue an equivalent level of safety to an average driver. These tests would need to be repeated with every adaptation of the system. The number of accident-free kilometres of driving alone is therefore not a suitable performance indicator to use for a safety release of the system.
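
The scale of the problem can be illustrated with a simple calculation. The sketch below assumes (my assumption, not taken from the studies cited) that fatalities follow a Poisson process and asks how many fault-free kilometres would be needed to claim, with a given statistical confidence, a fatality rate no worse than a human reference rate; the reference rate used is only indicative.

    import math

    # Minimal sketch: with zero observed fatalities in n km, a Poisson model gives
    # an upper confidence bound on the fatality rate of -ln(1 - confidence) / n.

    def km_required(reference_rate_per_km: float, confidence: float = 0.95) -> float:
        """Fault-free kilometres needed to claim the rate is at or below the reference."""
        # Require the upper bound to fall below the reference rate:
        # -ln(1 - confidence) / n <= reference  =>  n >= -ln(1 - confidence) / reference
        return -math.log(1.0 - confidence) / reference_rate_per_km

    # Indicative human reference: roughly one fatality per 200 million km driven.
    print(f"{km_required(1 / 200e6):.2e} km")  # ~6.0e+08 km, i.e. hundreds of millions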

The ethics commission report, however, did not focus solely on a positive risk balance as a measure of an ethically acceptable level of safety. It also placed emphasis on:

  • the application of proactive driving behaviour;
  • avoidance of accidents as much as “practically possible”; and
  • the avoidance of discrimination on the basis of any person-related characteristics.

An over-simplistic interpretation of the recommendations should therefore be avoided.

We need to find some balance between qualitative and quantitative arguments for the safety of the systems.

Qualitative claims with quantitative evidence

My conclusion is that the top-level safety claims argued by our assurance cases should be qualitative in nature, based on a definition of safe driving behaviour, but supported by statistical evidence.

This behaviour could include the following principles:

1) Maintain a proactive driving style:

  • Employ an anticipatory and predictable driving style — avoid hazardous scenarios
  • Maintain legal compliance

2) Ensure a reactive driving style:

  • In the case of violations of laws and regulations by other road users, the system re-establishes its own legal compliance
  • If this is not possible, or other road users, animals or objects cause a hazard, prevent a possible accident or mitigate the damage

As we will see in the following blog posts, the evidence used to support how well these claims are achieved will include quantitative statements based on statistical analysis that reinforce the confidence in our arguments and help to illustrate the level of residual risk achieved.
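
One way to picture this pairing of qualitative claims with quantitative supporting evidence is sketched below. The structure, claim wording, metrics and thresholds are all illustrative placeholders of my own and are not taken from the report or from any assurance case standard.

    from dataclasses import dataclass, field

    # Illustrative pairing of a qualitative safety claim with quantitative evidence.
    # All names, metrics and target values are hypothetical placeholders.

    @dataclass
    class QuantitativeEvidence:
        description: str   # e.g. the result of a statistical analysis or V&V activity
        metric: str
        value: float
        target: float      # threshold the metric must not exceed

        def supports_claim(self) -> bool:
            return self.value <= self.target

    @dataclass
    class SafetyClaim:
        statement: str                                # qualitative claim on driving behaviour
        evidence: list = field(default_factory=list)  # list of QuantitativeEvidence

        def within_residual_risk_targets(self) -> bool:
            return all(e.supports_claim() for e in self.evidence)

    claim = SafetyClaim(
        statement="The vehicle avoids hazardous scenarios through anticipatory driving",
        evidence=[QuantitativeEvidence(
            description="Simulation-based cut-in scenario analysis",
            metric="simulated collisions per million km",
            value=0.02,
            target=0.1)],
    )
    print(claim.within_residual_risk_targets())  # True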

In addition, though, a broad range of evidence should be presented based on engineering rationale developed as part of the system design as well as verification and validation activities. We will also see that formulating and verifying the overall safety requirements on the vehicle itself, derived from such safe driving principles, is no trivial task.

The overall goal should not be to provide some arbitrary statistic, though. Instead, a structured and convincing argument must be made that safe driving principles are met under all foreseeable circumstances within the scope of operation, based on what is reasonably practicable. As we shall see in the following blogs, the tensions between the qualitative and quantitative approaches become easier to resolve the more we apply systematic analysis to better understand the scenarios in which the system operates as well as the root causes of risk within the system.

You can download a free introductory guide to assuring the safety of highly automated driving: essential reading for anyone working in the automotive field.

 

Dr Simon Burton
Director Vehicle Systems Safety
Robert Bosch GmbH

Simon is also a Programme Fellow on the Assuring Autonomy International Programme.

Read the blog on Medium