
Assurance
Overcoming barriers to assurance and regulation of robotics and connected autonomous systems.
Robotics and connected autonomous systems (RCAS) increasingly employ methods such as AI and machine learning to adapt to their environment autonomously, but there are no universally accepted means of assuring their safety. Major advances are therefore needed to accommodate such technologies and to address their potential failure modes. This involves both focused research and the ability to validate approaches on real-world systems.
An additional challenge is that safety engineering is traditionally analytic and carried out off-line; this is not a viable approach for RCAS that learn in operation, especially where there is human-robot interaction. Safety assessments instead have to be updated dynamically, by monitoring and interpreting operational data; a minimal sketch of such a runtime monitor follows.
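To make the idea concrete, the sketch below (in Python) shows one possible shape of such a dynamic check: a monitor that tracks the rate of hazardous observations over a sliding window and flags when an assumed safe-operating threshold is exceeded. All names and thresholds here are illustrative assumptions, not part of any published methodology.

```python
from collections import deque

class RuntimeSafetyMonitor:
    """Minimal sketch of a runtime safety monitor: tracks the rate of
    hazardous observations over a sliding window and flags when an assumed
    safe-operating threshold is breached. Names and thresholds are
    illustrative assumptions only."""

    def __init__(self, threshold: float, window: int = 100):
        self.threshold = threshold          # tolerable hazard rate (assumed)
        self.events = deque(maxlen=window)  # most recent observations

    def observe(self, hazardous: bool) -> None:
        """Record one operational observation (e.g. a near-miss report)."""
        self.events.append(1 if hazardous else 0)

    def assessment(self) -> str:
        """Re-evaluate the safety claim against current operational data."""
        if not self.events:
            return "insufficient data"
        rate = sum(self.events) / len(self.events)
        return "claim holds" if rate <= self.threshold else "claim under review"

# Example: 100 operational observations, 5 of which were hazardous.
monitor = RuntimeSafetyMonitor(threshold=0.02)
for hazardous in [False] * 95 + [True] * 5:
    monitor.observe(hazardous)
print(monitor.assessment())  # "claim under review" (observed 5% > assumed 2%)
```

In practice the monitored indicators, window sizes, and responses would be derived from the system's safety case rather than fixed constants.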
Research challenges
- Developing assurance methods for AI and ML, addressing both their unique capabilities and their unique failure modes, eg in scene analysis for automated vehicles (a hypothetical guard of this kind is sketched after this list).
- Developing new approaches for the design of dynamic safety assurance techniques, eg dynamic safety cases.
- Establishing effective means of evaluating RCAS in support of assurance, including dealing with human-RCAS interaction and the safety impact of cyber security weaknesses.
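As a concrete flavour of the first challenge, the hypothetical sketch below gates the output of a scene-analysis model: if a safety-relevant object close to the vehicle is detected with low confidence, it demands a conservative fallback rather than trusting the ML output. The `Detection` type, thresholds, and fallback behaviour are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "pedestrian", "vehicle"
    confidence: float   # model's confidence score, 0..1
    distance_m: float   # estimated range from the vehicle

def gate_scene_analysis(detections: list[Detection],
                        min_confidence: float = 0.7,
                        critical_range_m: float = 30.0) -> str:
    """Hypothetical runtime guard around a scene-analysis model: if any
    object inside the critical range is detected with low confidence,
    demand a conservative fallback rather than trusting the ML output.
    Thresholds are illustrative assumptions only."""
    for d in detections:
        if d.distance_m <= critical_range_m and d.confidence < min_confidence:
            return "fallback: reduce speed, request clarification"
    return "proceed: ML output within assumed operating envelope"

# A pedestrian detected at 12 m with only 0.55 confidence triggers the fallback.
scene = [Detection("vehicle", 0.93, 45.0), Detection("pedestrian", 0.55, 12.0)]
print(gate_scene_analysis(scene))
```

A real assurance method would need evidence that such a guard itself behaves correctly; the sketch only illustrates why ML components demand checks beyond conventional testing.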
Research focus: the Assuring Autonomy International Programme
Research outputs
- Guidance on the Safety Assurance of Autonomous Systems in Complex Environments (SACE) - the first methodology to define a detailed process for creating a safety case for an autonomous system, taking the system and its environment as inputs and leading to a safety case for the system.
- Methodology for the Assurance of Machine Learning for use in Autonomous Systems (AMLAS) - a clear and detailed assurance process that complements machine learning (ML) development and generates the evidence needed to justify the safety of ML components.
- Safety of highly automated driving - a report exploring the challenges involved in assuring the safety of highly automated driving systems. It presents a framework for structuring key elements of the argumentation strategy and reviews the state of the art aligned to each element of the framework.
Research Lead: Professor John McDermid OBE FREng
John McDermid became Professor of Software Engineering at the University of York in 1987. His research covers a broad range of issues in systems, software and safety engineering. He became Director of the Lloyd’s Register Foundation-funded Assuring Autonomy International Programme in January 2018, focusing on the safety of robotics and autonomous systems.
He acts as an advisor to government and industry and is actively involved in standards development, including work on safety and software standards for civilian and defence applications.
