Out of isolation: developing guidance for regulators and technology developers that considers how an AI healthcare product will be used as part of the clinical system.

The challenge

AI healthcare tools are often developed without considering how they will be used as part of a complex clinical system. To ensure such tools fulfil their potential, a human factors (HF) approach places emphasis on the interactions between people and the AI tool. Rather than treating the introduction of AI into clinical systems as a trade-off between human autonomy and machine autonomy, HF aspires to inform the design of AI that promotes human autonomy through high levels of automation, i.e. human autonomy and machine autonomy together, rather than one at the expense of the other.

The research

This project developed guidance, in the form of a white paper, for regulatory bodies and technology developers on HF in the design and use of AI applications in healthcare settings.

The project consisted of two parallel and interacting workstreams:

  • Stakeholder engagement – key stakeholders from regulatory bodies supported three stages of work:
    • identification of user needs with respect to HF guidance
    • feedback on the guidance/white paper
    • dissemination of the white paper
  • Guidance development – the white paper was shaped and developed through participation in the Chartered Institute of Ergonomics and Human Factors (CIEHF) special interest group on Digital Health and AI and through engagement with the existing literature on human factors.

The outcomes

Through collaboration with key stakeholders, including the Chartered Institute of Ergonomics and Human Factors (CIEHF), the Australian Alliance for AI in Healthcare (AAAiH), and the Society for Health Care Innovation (SHCI), the project team developed a new white paper intended to promote systems thinking among those who develop, regulate, procure, and use AI applications in healthcare, and to raise awareness of the role of people using or affected by AI.

The primary vehicle for developing the white paper was the CIEHF Digital Health and AI Special Interest Group (SIG). The white paper was published by CIEHF in September 2021. Its structure is as follows:

  1. The scope of, and case for, an HF white paper: describing the purpose, the target audience, the need, and the vision.
  2. The use of AI in health and social care – current landscape: describing the current situation. The project team undertook a mapping exercise in which they (a) developed a taxonomy and then (b) mapped all of the projects funded under the second AI for Health and Care Award against this taxonomy.
  3. HF basics and the systems perspective: describing the basic definition of HF, its focus on interactions, and its dual aim of improving system performance and wellbeing. The systems perspective is described using the Systems Engineering Initiative for Patient Safety (SEIPS) model.
  4. Levels of automation: taxonomies of automation levels and models of human information processing, offered as a simple framework for thinking about different ways of embedding AI in clinical systems (see the sketch after this list).
  5. HF topics to look out for: situation awareness, workload, automation bias, training, and relationships.
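To illustrate how such a framework can be used, the sketch below characterises a hypothetical AI triage-support tool by the level of automation it applies at each stage of human information processing. The stage names follow the widely cited Parasuraman, Sheridan and Wickens (2000) model; the level names, the example tool, and its profile are illustrative assumptions, not content taken from the white paper.

```python
from dataclasses import dataclass
from enum import IntEnum


class AutomationLevel(IntEnum):
    """Coarse, illustrative levels of automation (low to high); not taken from the white paper."""
    MANUAL = 1       # the clinician performs the function unaided
    ADVISORY = 2     # the AI suggests options; the clinician decides and acts
    SUPERVISED = 3   # the AI acts, with the clinician able to monitor and veto
    AUTONOMOUS = 4   # the AI acts without routine clinician involvement


@dataclass
class AutomationProfile:
    """Level of automation at each stage of human information processing
    (stages after Parasuraman, Sheridan & Wickens, 2000)."""
    information_acquisition: AutomationLevel
    information_analysis: AutomationLevel
    decision_selection: AutomationLevel
    action_implementation: AutomationLevel

    def summary(self) -> str:
        return ", ".join(f"{stage}: {level.name}" for stage, level in vars(self).items())


# Hypothetical profile for an AI triage-support tool: data gathering and analysis are
# largely automated, while deciding and acting remain with the clinician.
triage_support_tool = AutomationProfile(
    information_acquisition=AutomationLevel.SUPERVISED,
    information_analysis=AutomationLevel.SUPERVISED,
    decision_selection=AutomationLevel.ADVISORY,
    action_implementation=AutomationLevel.MANUAL,
)

print(triage_support_tool.summary())
```

Viewed this way, "more automation" is not a single dial: a tool can be highly automated at acquisition and analysis while leaving decisions and actions with the clinician, which is consistent with the white paper's emphasis on supporting human autonomy alongside machine autonomy.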

Contact us

Assuring Autonomy International Programme
assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Department of Computer Science, Deramore Lane, University of York, York YO10 5GH