Case study

Seeking safety in autonomous systems

We’re using machine learning to make sure autonomous systems work safely in even the most challenging environments.

The issue

Autonomous systems and artificial intelligence have the potential to revolutionise our lives and bring great benefits to society. But they also pose a challenge: we have to be confident that advances such as driverless cars and autonomous systems in hospitals and the maritime sector will operate safely and reliably at the right time, every time.

Self-driving cars, for instance, have to differentiate between different types of objects (cars, bikes and pedestrians) while avoiding false alerts. The challenge for computer scientists is to develop safety systems that can monitor and predict the behaviour of these objects to keep drivers and pedestrians safe.
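
To picture the trade-off involved, the sketch below applies per-class confidence thresholds to a set of detections, so that safety-critical classes are less likely to be missed at the cost of more false alerts. The class names, scores and thresholds are hypothetical, not taken from the research described here.

```python
# Illustrative sketch only: a hypothetical per-class confidence filter,
# not the detection stack used in this research.

# Detections from an assumed upstream object detector:
# (class label, confidence score in [0, 1]).
detections = [
    ("car", 0.97),
    ("bike", 0.55),
    ("pedestrian", 0.62),
    ("pedestrian", 0.18),  # low confidence: likely a false alert
]

# Hypothetical per-class thresholds: a safety-critical class such as
# "pedestrian" gets a lower bar (fewer missed detections), at the cost
# of accepting more false alerts for that class.
THRESHOLDS = {"car": 0.5, "bike": 0.5, "pedestrian": 0.3}

alerts = [(label, score) for label, score in detections
          if score >= THRESHOLDS.get(label, 0.5)]

print(alerts)
# [('car', 0.97), ('bike', 0.55), ('pedestrian', 0.62)]
```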

The research

York is spearheading research in this area, working with industry to develop a framework that uses machine learning to ensure autonomous systems work safely in even the most complex environments. The research also aims to ensure that the system's safety case can be updated automatically as the system develops and evolves, in contrast to traditional approaches, in which safety cases are developed in advance and remain largely unresponsive to changing circumstances.
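
As a rough illustration of the idea of a safety case that responds to change, the sketch below re-checks a set of safety claims against fresh operational evidence instead of fixing them at design time. The claims, metric names and numbers are invented for illustration and do not represent the York framework itself.

```python
# Illustrative sketch of a "dynamic" safety case: claims are re-checked
# against operational evidence rather than fixed at design time.
# Hypothetical claims and metrics, not the framework developed at York.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SafetyClaim:
    description: str
    # Returns True if current operational evidence still supports the claim.
    check: Callable[[dict], bool]

claims = [
    SafetyClaim(
        "Pedestrian miss rate stays below 0.1%",
        lambda evidence: evidence["pedestrian_miss_rate"] < 0.001,
    ),
    SafetyClaim(
        "Sensor availability stays above 99.9%",
        lambda evidence: evidence["sensor_availability"] > 0.999,
    ),
]

def reassess(evidence: dict) -> list[str]:
    """Return the claims no longer supported by operational data."""
    return [c.description for c in claims if not c.check(evidence)]

# Example: fresh operational metrics undermine one claim, so that part
# of the safety argument is flagged for re-evaluation.
broken = reassess({"pedestrian_miss_rate": 0.003,
                   "sensor_availability": 0.9995})
print(broken)  # ['Pedestrian miss rate stays below 0.1%']
```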

Developed by our Assuring Autonomy International Programme, the framework encourages continual assessment and analysis of operational data so that systems remain safe as they ‘learn’. It allows developers to model data from the real world alongside data from imagined and observed conditions. The aim of the research is to close the gap between the imagined, lab-based conditions and the actual conditions in which the system will operate.
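
One simple way to picture ‘closing the gap’ is to monitor whether operational data still looks like the data assumed during development. The sketch below runs a two-sample Kolmogorov-Smirnov test on a single hypothetical feature; it illustrates the general idea of detecting such a gap, and is not the Programme's framework.

```python
# Illustrative sketch: flagging a gap between assumed (lab) conditions
# and observed operational data with a two-sample Kolmogorov-Smirnov
# test. One simple drift check, not the Programme's framework.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical monitored feature, e.g. scene brightness:
# development data assumed daytime conditions...
lab_data = rng.normal(loc=0.8, scale=0.1, size=1000)
# ...but in operation the system also encounters low-light scenes.
operational_data = rng.normal(loc=0.5, scale=0.2, size=1000)

statistic, p_value = ks_2samp(lab_data, operational_data)
if p_value < 0.01:
    # The conditions the safety analysis assumed no longer match
    # operation: trigger reassessment instead of trusting the
    # original, lab-based analysis.
    print(f"Distribution shift detected (KS={statistic:.2f}); "
          "re-evaluate the safety case.")
```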

The outcome

The research is still evolving. The framework has been applied to a Health IT (HIT) system, where it uncovered unexpected new patterns of working among nurses involved in post-operative care. The next step is to extend the framework from this healthcare system to an autonomous system.

Additional research undertaken by the Assuring Autonomy International Programme also feeds into this overall framework for the safety assurance of autonomous systems. This includes work on ‘confidence arguments’ for the machine learning algorithms used in autonomous systems. There is currently no consensus on which verification measures are needed to demonstrate confidence in the safety of such algorithms; this question is being explored using the automotive domain as a case study.

The team has also carried out further work on the gaps that arise in the development of autonomous systems. This multidisciplinary work examines where such gaps occur and considers emerging ways of reducing them.

Featured researcher
John McDermid

Professor McDermid’s research interests are in high-integrity computer systems, especially safety and security. His work has influenced industrial practice both directly and via standards.

View profile

Case studies

Read more examples of York research making a difference.

Explore case studies

Computer Science

Explore more research from the Department of Computer Science.

Find out more