How can explainability support the overall assurance of AVs?

It is critical that AVs are safe, accountable, and trustworthy when they are deployed. To be safe, AVs must identify, assess, and mitigate risks. To be accountable, they must do this in ways that allow users, developers, and regulators to understand what the vehicles have seen, what they have done, what they are planning to do, and why.

Let us consider the recent fatal crash of a self-driving car that did not recognise a pedestrian (https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg). Imagine what the car could have explained to the driver before requesting assistance prior to the crash, what information developers would need when debugging the causes, or what evidence regulators would require when investigating the crash. Post-hoc explanations containing the vehicle's observations of other road users, the traffic signs it detected, and the road rules it acted on can serve as evidence of the causes of an accident and inform the investigation.
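To make this concrete, a minimal sketch of what such a post-hoc explanation record might contain is given below. The field names and example values are illustrative assumptions for this article, not the data format used in any deployed system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ExplanationRecord:
    """Hypothetical post-hoc record of a single driving decision (illustrative only)."""
    timestamp: float              # seconds since the start of the drive
    detected_agents: List[str]    # other road users the vehicle observed
    observed_signs: List[str]     # traffic signs and road markings it detected
    applicable_rules: List[str]   # road rules it acted on
    action: str                   # what the vehicle did
    reason: str                   # causal link between observations and action

# Example record that an investigator or developer could inspect after an incident.
record = ExplanationRecord(
    timestamp=132.4,
    detected_agents=["pedestrian crossing from the left"],
    observed_signs=["zebra crossing ahead"],
    applicable_rules=["give way to pedestrians on crossings"],
    action="emergency brake",
    reason="pedestrian on the crossing ahead",
)
```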

Regulators require some form of interpretability and explainability. For example, the European Union’s General Data Protection Regulation (GDPR) and the European Parliament’s resolution on “Civil Law Rules on Robotics” guarantee meaningful information about the logic involved in certain automated decisions [1]. Moreover, the GDPR advocates the “right to explanation” as a potential accountability mechanism, requiring certain automated decisions (of AI and robotic systems) to be explained to individuals.

In the SAX project we have designed, developed, and evaluated technologies that allow AVs to understand their environment, assess risks, and provide causal explanations for their own decisions. We conducted a field study in which we deployed a research vehicle in an urban environment. While collecting sensor data of the vehicle's surroundings, we also recorded an expert driver verbalising their thoughts using a think-aloud methodology. We analysed the collected data to uncover the requirements for effective explainability in intelligent vehicles, and we show how intelligible natural language explanations that fulfil some of the key elicited requirements can be automatically generated from observed driving data using an interpretable approach (a simple sketch of the idea is given below). These transparent and interpretable representations will enable developers to analyse an AV's behaviour and assure its safe autonomous operation. Users will also benefit from explanations by developing trust in autonomous vehicles.
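The sketch below illustrates the general idea of generating an intelligible natural language explanation from observed driving data using an interpretable, rule-based representation. It is a simplified assumption for illustration, not the SAX project's actual implementation; the observation labels, rules, and phrasing are hypothetical.

```python
# Human-readable descriptions of observations extracted from driving data (assumed labels).
OBSERVATIONS = {
    "pedestrian_on_crossing": "a pedestrian is on the crossing ahead",
    "red_traffic_light": "the traffic light ahead is red",
    "cyclist_overtaking": "a cyclist is overtaking on the left",
}

# Hand-written, inspectable rules: (required observations, action, cause phrase).
RULES = [
    ({"pedestrian_on_crossing"}, "stop", "I must give way to pedestrians on crossings"),
    ({"red_traffic_light"}, "stop", "I must stop at red traffic lights"),
    ({"cyclist_overtaking"}, "hold my lane position", "overtaking the cyclist would be unsafe"),
]

def explain(observed: set, action: str) -> str:
    """Return a natural-language explanation for `action` given the observations."""
    for required, rule_action, cause in RULES:
        if rule_action == action and required <= observed:
            seen = " and ".join(OBSERVATIONS[o] for o in sorted(required & observed))
            return f"I chose to {action} because {seen}, and {cause}."
    return f"I chose to {action}, but no matching rule explains why."

print(explain({"pedestrian_on_crossing"}, "stop"))
# -> "I chose to stop because a pedestrian is on the crossing ahead, and I must give way
#     to pedestrians on crossings."
```

Because the rules are explicit rather than learned black-box mappings, a developer or investigator can trace every generated explanation back to the observations and rules that produced it.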

Dr Lars Kunze
Departmental Lecturer in Robotics
Oxford Robotics Institute

Contact us

Assuring Autonomy International Programme

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH
