This project explores the strategies and processes that enable reporting and learning from AS safety incidents, focusing on healthcare, where many of these challenges are particularly pronounced. The project team is engaging with key national bodies and regulators to identify and define a set of principles, methods and functions that support healthcare organisations and regulators in rigorously analysing and learning from AS incidents.

The challenge

A critical part of assuring the safety and social acceptability of autonomous systems (AS) is ensuring that robust and trusted processes are in place to identify, understand and learn from safety incidents and other adverse events. One particularly urgent challenge is how such incidents should be identified, reported, analysed and investigated so that they are effectively learnt from and safety is continuously improved. However, analysing and learning from AS safety incidents faces a range of challenges, including the inherent opacity of some technologies underpinning AS, the potential for novel and unforeseen failure modes, the breadth of specialist knowledge and expertise needed to understand AS incidents, and the wide variety of stakeholders and agencies responsible for the safety and regulation of emerging autonomous technologies.

The research

The project will develop three sets of deliverables across four interrelated stages of work.

The deliverables are: 

  1. Mapping out key incident types, data requirements and methods relevant for the analysis and monitoring of AS incidents in health and care. 
  2. Identifying practical models and strategies for investigating and responding to AS incidents.
  3. Defining the core organisational components, functions and interfaces of an infrastructure for learning from AS failures. 

The four stages are centred on the collaborative creation and simulated exploration of a hypothetical AS failure scenario. They are: 

Stage 1. Develop AS failure scenario

Identify and develop a hypothetical major AS failure scenario (e.g. missed diagnoses in medical image screening across multiple patients). Map key contributory factors and risk controls, spanning technical, organisational and regulatory domains.

Stage 2. Review incident types, data and methods

Identify precursor events associated with the failure scenario that illustrate reportable AS incident types. Determine the types of safety data needed by different participants (e.g. developers, providers, regulators, investigators).

Stage 3. Explore AS incident investigation models

Explore the methods and approaches to investigating AS safety incidents, simulating end-to-end processes across all levels of the health system. Define data required, participants to be engaged, and skills needed throughout response processes.

Stage 4. Define organisational functions for learning from AS incidents 

Consolidate findings and define the organisational infrastructure and functions required to support learning from AS incidents (e.g. professional roles, policy priorities and just culture characteristics).

The failure scenario is being developed in collaboration with the UK’s Care Quality Commission (CQC), drawing on its recent experience of regulating AI-based care providers, and is expected to focus on the application of AS in radiology and medical imaging. A range of stakeholders, including the CQC and its regulatory partners as well as healthcare organisations and technology developers, will then be engaged to collaboratively explore and collectively analyse how the illustrative AS failure scenario should be responded to.

The results

The early phases of this project explored the types of safety incidents and risk events that might be associated with different forms of autonomous systems across a range of future health and care settings, together with an initial exploration of the sociotechnical precursor events that might contribute to them. The most recent work has deepened this analysis by focusing particularly on imaging and diagnostic pathways and the risks and safety factors associated with autonomous systems in those contexts. These sources of risk and safety are being analysed by regulators, clinicians and other stakeholders to collaboratively develop a detailed hypothetical AS failure scenario, and to map the precursor events, consequences and risk controls that may be associated with it. This hypothetical failure scenario and its associated materials will be further explored and refined to act as the basis for a simulated incident response and investigation process, exploring the activities and functions that can enable learning from AS safety incidents in healthcare settings.

Contact us

Assuring Autonomy International Programme

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH