New system-level guidance on assuring the safety of autonomous systems published

News | Posted on Thursday 11 August 2022

Safety experts from the Assuring Autonomy International Programme have published their latest guidance to support system developers, safety engineers, and assessors in designing and introducing safe autonomous systems.

Overview of the SACE process

The Safety Assurance of autonomous systems in Complex Environments (SACE) guidance is the first methodology to define a detailed process for creating a safety case for autonomous systems. Starting from the autonomous system and the environment in which it operates, it defines a safety process that culminates in a safety case for the system.

An autonomous system, such as a self-driving car or a clinical support tool, does not operate in isolation: it is part of a complex environment. An autonomous vehicle will interact with other cars, traffic lights, and street infrastructure, as well as with humans. An AI tool in healthcare will become part of a complex healthcare pathway that could involve numerous clinical staff and care options.

“The autonomous system, its environment, and the interactions that take place between actors in the environment must all be part of the assurance process, in order to demonstrate that the system is safe,” said Dr Richard Hawkins, Senior Research Fellow at the Assuring Autonomy International Programme and one of the authors of the guidance.

“Our Safety Assurance of autonomous systems in Complex Environments (SACE) guidance enables engineers and assessors to take a holistic view of the system within its environment. They can follow the activities in the eight stages that make up the SACE process. This will lead to the creation of artefacts that can be combined to create a compelling safety case for the system.”

The team of safety experts behind the SACE guidance also published AMLAS, the leading methodology for assuring the safety of machine learning components within an autonomous system.

“Our new system-level guidance sits above the AMLAS methodology,” continued Dr Hawkins. “The ML safety case created by following AMLAS fits into the system safety case that you build by following our SACE guidance. The two parts work together to help create a coherent and compelling safety case for an autonomous system.”

The SACE guidance has been reviewed by a range of experts from industry and academia. It is available to download for free from the Assuring Autonomy International Programme website.

Download the SACE guidance