Out of isolation: a white paper for regulators and technology developers that considers how an AI healthcare product will be used as part of the clinical system.

The HF/AI project collaborated with key stakeholders to research and publish a new white paper setting out a human factors perspective on the use of artificial intelligence (AI) applications in healthcare.

Contact us

Assuring Autonomy International Programme

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH


White paper

Published by the CIEHF, this white paper represents the outcomes of the HF/AI project.

HF/AI white paper

The challenge

AI healthcare tools are often developed without considering how they will be used as part of a complex clinical system. The human factors (HF) approach places emphasis on the interactions between people and the AI tool, informing the design of AI that supports both human autonomy and machine autonomy, even at high levels of automation.

The research

This project developed guidance in the form of a white paper for regulatory bodies and technology developers on HF in the design and use of AI applications in healthcare settings. The project had two parallel and interacting workstreams:

  • Stakeholder engagement – key stakeholders from regulatory bodies helped to identify user needs with respect to HF guidance and provided feedback as the white paper was developed
  • Guidance development – the white paper was shaped and developed through participation in the Chartered Institute of Ergonomics and Human Factors (CIEHF) special interest group on Digital Health and AI, and through engagement with the existing literature on human factors

The results

Through collaboration with key stakeholders, including the Chartered Institute of Ergonomics and Human Factors (CIEHF), the Australian Alliance for AI in Healthcare (AAAiH) and the Society for Health Care Innovation (SHCI), the project team developed a new white paper that promotes systems thinking among those who develop, regulate, procure, and use AI applications in healthcare, and raises awareness of the role of the people using or affected by AI.

The primary vehicle for developing the white paper was the CIEHF Digital Health and AI Special Interest Group (SIG). The white paper was published by CIEHF in September 2021. It outlines eight key HF principles that need to be taken into consideration for the successful design and use of AI in healthcare:

  1. situation awareness
  2. workload
  3. automation bias
  4. explanation and trust
  5. human-AI teaming
  6. training
  7. relationships between staff and patients
  8. ethical issues

How does addressing human factors from a systems perspective help us assure the safety of AI in healthcare?

Find out more from project PI, Dr Mark Sujan

Project partners
