The research we are doing in York is focused on core technical issues arising from the use of robotics and autonomous systems in critical applications.
Our foundational research in 2019 centred on demonstrating, with sufficient confidence, that machine learning (ML) components can perform their tasks safely (i.e. with the risk of human harm as low as is reasonably practicable).
To do this we have developed a safety assurance process: AMLAS (Assurance of Machine Learning for use in Autonomous Systems). The process guides the engineering of ML components so that assurance evidence is generated at each stage of the ML lifecycle. It provides the first systematic, documented approach to safety assurance of ML components, with the aim of giving others the confidence they need to use, certify or regulate the component or the system it is part of.
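To illustrate the idea of stage-by-stage evidence generation, the sketch below models an assurance case as a collection of evidence artefacts, one set per lifecycle stage. The stage names follow the six stages described in the published AMLAS guidance; the `AssuranceCase` class and its artefact handling are invented for this illustration only and are not part of AMLAS itself.

```python
from dataclasses import dataclass, field

# Stage names taken from the published AMLAS guidance.
AMLAS_STAGES = [
    "ML Safety Assurance Scoping",
    "ML Safety Requirements Assurance",
    "Data Management",
    "Model Learning",
    "Model Verification",
    "Model Deployment",
]

@dataclass
class AssuranceCase:
    """Hypothetical container for evidence artefacts produced per stage."""
    evidence: dict = field(default_factory=dict)

    def record(self, stage: str, artefact: str) -> None:
        # Reject evidence that is not tied to a recognised lifecycle stage.
        if stage not in AMLAS_STAGES:
            raise ValueError(f"unknown AMLAS stage: {stage}")
        self.evidence.setdefault(stage, []).append(artefact)

    def complete(self) -> bool:
        # A safety argument needs evidence from every stage, not just some.
        return all(stage in self.evidence for stage in AMLAS_STAGES)

case = AssuranceCase()
for stage in AMLAS_STAGES:
    case.record(stage, f"{stage} evidence log")
print(case.complete())  # → True
```

The point of the sketch is the completeness check: confidence in the overall safety argument rests on evidence being present for every stage of the lifecycle, not only the model-training steps.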
Click the link below to open a PDF showing the AMLAS process and how it is used to create safety arguments.