Lars Kunze ORI case study

If we could better understand the behaviours of autonomous vehicles, we may be more likely to accept their introduction. Dr Lars Kunze is a Departmental Lecturer in robotics at the Oxford Robotics Institute (ORI) where he leads the Cognitive Robotics Group (CRG), and is also an AAIP Fellow.

Lars studied Cognitive Science and Computer Science. His interest in the workings of the mind and human decision-making led him to robotics, and to the question of how robots could communicate, explain, and rationalise their actions.

Lars led the AAIP demonstrator project Sense-Assess-Explain (SAX). The project was a crucial step for both the Institute and Lars personally, enabling a far greater focus on safety than had come before.


It was a good fit for us. Explainability is critical if we are to gain the trust needed for autonomous vehicles to be used.

Sense-Assess-Explain

For the SAX project, ORI led a diverse team of software and hardware engineers, researchers, and specialist drivers. The aim of the project was to build robots, or autonomous vehicles, that can:

  • sense and fully understand their environment
  • assess their own capabilities
  • provide causal explanations for their own decisions

The SAX project extended methods for interpreting and representing an autonomous vehicle's observations of the environment in human-understandable terms. To this end, the team improved the way that traditional sensors (e.g. cameras, lasers) are used in complex and rare traffic situations.

The team used a technique called ‘commentary driving’, in which the driver describes what they see, what they anticipate happening, and how they react to those situations. The expert driver provided extensive commentary that the team was able to analyse and combine with data gathered by the vehicle’s systems, helping Lars and the team to investigate explainability during on-road driving.

The resulting ten hours of commentary (part of a wider dataset of 140 hours covering over 3,700 miles of on and off-road driving) was a valuable output from the project. It has the potential to be used to validate design and assurance capabilities.

The dataset could be used for validating and demonstrating the system’s capabilities, for example localisation across different areas. We wanted to look at a rich variety of environments, from the city centre of London to the highlands of Scotland. The dataset, including annotations and external sensor data, will help to validate and assure performance.

Benefits of collaboration

As a result of the project, Lars was contacted by the European Commission and is now part of the expert group on explainability for automated and autonomous driving at the Commission’s Joint Research Centre.

Working together has also led to further collaboration. AAIP's Richard Hawkins and Lars are collaborating on a project for UKRI’s Trustworthy Autonomous Systems Hub. This follow-on project, led by Lars, explores ‘Responsible AI for Long-term Trustworthy Autonomous Systems’ (RAILS) and extends their investigations to the maritime domain and the use of drones.

This connection with AAIP has clearly resulted in more funding and more collaborative work. It is now truly collaborative... Without the AAIP, being in the network and the visibility that created would have been missed. The Programme has been great at facilitating and disseminating our papers, and enabling opportunities to present at conferences.


Contact us

Assuring Autonomy International Programme

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH