Case study

Seeking safety in autonomous systems

We’re developing methods to assure the safety of autonomous systems, giving confidence that they will operate safely in even the most challenging environments.

The issue

Autonomous systems and artificial intelligence have the potential to revolutionise our lives and bring great benefits to society. But they also pose a challenge: we have to be confident that advances such as driverless cars and autonomous systems in hospitals and the maritime sector will operate safely and reliably.

Self-driving cars, for instance, must distinguish between different types of object (cars, bikes, pedestrians) while avoiding false alerts. The challenge for our computer scientists is to develop methodologies and processes that developers and regulators can use to demonstrate that an autonomous system is safe and to regulate its safe use.
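
To make that trade-off concrete, here is a minimal Python sketch of a confidence-gated perception check. It is illustrative only: the Detection type, the confidence scores, and the 0.8 threshold are hypothetical examples, not part of AMLAS or any real vehicle stack.

    # Illustrative toy example of the detection / false-alert trade-off.
    # A real system would use a trained ML model; arguing that such a
    # model is safe is what AMLAS supports.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str         # predicted object type, e.g. "car", "bike", "pedestrian"
        confidence: float  # model confidence in [0, 1]

    def should_alert(det: Detection, threshold: float = 0.8) -> bool:
        """Alert only for vulnerable road users detected with sufficient
        confidence. A lower threshold misses fewer pedestrians but
        produces more false alerts; a higher one does the opposite."""
        return det.label in ("bike", "pedestrian") and det.confidence >= threshold

    for det in [Detection("pedestrian", 0.95),   # confident: alert
                Detection("pedestrian", 0.55),   # uncertain: suppressed
                Detection("car", 0.99)]:         # not a vulnerable road user
        print(det.label, det.confidence, "alert" if should_alert(det) else "no alert")

Choosing and justifying a threshold like this is precisely the kind of development decision that a safety case must be able to defend.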

The research

York is spearheading research in this area, working with industry partners and across disciplines on research with real-world impact. Our researchers have developed AMLAS (Assurance of Machine Learning for use in Autonomous Systems), the first methodology that enables developers to explicitly and systematically establish justified confidence in the machine learning (ML) components of an autonomous system.

Created by our Assuring Autonomy International Programme (AAIP), a £12M programme funded by the Lloyd's Register Foundation, the framework helps teams build a compelling safety argument about the ML in an autonomous system. It incorporates a process for systematically integrating safety assurance into the development of ML components, allowing developers to embed safety from the beginning rather than adding it at the end of development.

The outcome

"...AMLAS has made an important contribution to safety case patterns and provided valuable guidance on how to proceed to justify the safety of machine learning components." - Lydia Gauerhof, Research Engineer, Robert Bosch GmbH

The AMLAS guidance has been used across the globe in numerous sectors and settings. It has been applied by a partnership project led by NHS Digital, which found that AMLAS aligns closely with real-world clinical practice. The team is developing supplementary guidance to support the use of AMLAS in the development and introduction of ML-based systems in healthcare.

A team of engineers developing small autonomous satellites to help detect and predict wildfires has also benefited from the AMLAS methodology. The guidance has enabled them to demonstrate the safety of their ML models, instilling confidence in both the internal team and the company’s customers.

Additional research undertaken by the AAIP addresses the need for an overall framework for the safety assurance of autonomous systems. This collaborative, evidence-based work focuses on four key areas beyond the AMLAS work already published:

  • considering how a system will work, not in isolation, but in a complex, real-world environment
  • assuring how the system understands the world around it
  • giving confidence in how an autonomous system makes decisions
  • understanding the ethical acceptability of autonomous systems

The team is also starting to consider how we govern the development and use of autonomous systems. The multidisciplinary Assuring Responsibility for Trusted Autonomous Systems project is establishing who is responsible for the decisions and outcomes of autonomous systems. This is a crucial element of the trustworthy governance of such systems.

View the AMLAS guidance

Featured researcher

John McDermid

Professor McDermid’s research interests are in high-integrity computer systems, especially safety and security. His work has influenced industrial practice both directly and via standards.

View profile