Overcoming barriers to assurance and regulation of robotics and connected autonomous systems

Robotics and connected autonomous systems (RCAS) increasingly employ techniques such as artificial intelligence (AI) and machine learning (ML) to adapt to their environment autonomously, but there are no universally accepted means of assuring their safety. Major advances are therefore needed to accommodate such technologies and to mitigate their potential failure modes. This requires both focused research and the ability to validate approaches on real-world systems.

An additional challenge is that safety engineering is traditionally analytic and carried out off-line; this is not viable for RCAS that learn in operation, especially where there is human-robot interaction. Safety assessments must instead be updated dynamically, by monitoring and interpreting operational data: in other words, safety assurance itself must become dynamic.
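To make this concrete, the sketch below illustrates one way such a dynamic update could work. It is purely illustrative, assuming a simple beta-binomial model of failure rate with invented names, targets and data; it is not a method published by the programme. Each batch of operational evidence revises the estimated failure rate, and the safety claim is re-checked against its target:

```python
# Illustrative sketch: dynamically revising a safety claim from operational data.
# A beta-binomial model tracks the estimated failure rate of a monitored function;
# the claim is re-checked as each batch of operational evidence arrives.

class DynamicSafetyClaim:
    def __init__(self, target_failure_rate: float, alpha: float = 1.0, beta: float = 1.0):
        self.target = target_failure_rate   # e.g. failures per demand
        self.alpha = alpha                  # prior pseudo-count of failures
        self.beta = beta                    # prior pseudo-count of successes

    def update(self, failures: int, demands: int) -> None:
        """Fold a batch of operational data into the posterior."""
        self.alpha += failures
        self.beta += demands - failures

    def estimated_rate(self) -> float:
        """Posterior mean failure rate."""
        return self.alpha / (self.alpha + self.beta)

    def claim_holds(self) -> bool:
        """Does the accumulated evidence still support the safety target?"""
        return self.estimated_rate() <= self.target


# Invented target and data, for illustration only.
claim = DynamicSafetyClaim(target_failure_rate=1e-4)
claim.update(failures=0, demands=5000)   # first batch of field data
if not claim.claim_holds():
    print(f"Safety claim no longer supported; estimated rate {claim.estimated_rate():.1e}")
```

In a real system the statistical model, targets and monitored quantities would come from the safety case itself, and a failed check would trigger re-assessment rather than a simple message.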

Research challenges

  • Developing assurance methods for AI and ML that address both their unique capabilities and their unique failure modes, eg in scene analysis for automated vehicles (a minimal runtime-guard sketch follows this list).
  • Developing new approaches to the design of dynamic safety assurance techniques, eg dynamic safety cases.
  • Establishing effective means of evaluating RCAS in support of assurance, including dealing with human-RCAS interaction and the safety impact of cyber security weaknesses.
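As a minimal illustration of the first challenge above, the sketch below wraps the output of a hypothetical ML scene-analysis component in a runtime guard: if the model's confidence falls below a threshold assumed to have been set during off-line validation, the system falls back to a conservative manoeuvre rather than acting on an unreliable estimate. All names and values are invented for illustration:

```python
# Illustrative sketch: a runtime assurance guard around an ML perception component.
# If the model's confidence drops below a validated threshold, control falls back
# to a conservative behaviour rather than acting on an unreliable scene analysis.

from dataclasses import dataclass

@dataclass
class SceneEstimate:
    label: str          # e.g. "clear_road", "pedestrian_ahead"
    confidence: float   # model's self-reported confidence in [0, 1]

CONFIDENCE_FLOOR = 0.9  # assumed threshold, set during off-line validation

def select_action(estimate: SceneEstimate) -> str:
    """Act on the perception output only when the guard condition holds."""
    if estimate.confidence < CONFIDENCE_FLOOR:
        return "minimal_risk_manoeuvre"   # conservative fallback
    return "proceed" if estimate.label == "clear_road" else "brake"

print(select_action(SceneEstimate(label="clear_road", confidence=0.95)))  # proceed
print(select_action(SceneEstimate(label="clear_road", confidence=0.42)))  # fallback
```

A guard of this kind is only one ingredient of assurance: the threshold itself must be justified, and self-reported confidence is known to be an imperfect proxy for correctness, which is precisely why new assurance methods are needed.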

Research focus: the Assuring Autonomy International Programme


Research Lead: Professor John McDermid OBE FREng

John McDermid became Professor of Software Engineering at the University of York in 1987. His research covers a broad range of issues in systems, software and safety engineering. In January 2018 he became Director of the Assuring Autonomy International Programme, funded by the Lloyd's Register Foundation, which focuses on the safety of robotics and autonomous systems.

He acts as an advisor to government and industry and is actively involved in standards development, including work on safety and software standards for civilian and defence applications.
