How does addressing human factors from a systems perspective help us assure the safety of AI in healthcare?

AI in healthcare is big news. Every day we see another story about a new app or system, and we've seen some very encouraging results, particularly in diagnostics such as breast cancer screening. This is really positive, and the research could make a real difference.

We need to be aware, though, that most examples of healthcare AI to date have been evaluated retrospectively. What we're really seeing, then, are AI technologies that perform well in isolation and on high-quality data.

The issue arises when AI technology is used in a complex context such as a hospital. Focusing only on the technology during development could create an unsafe situation when it is introduced into a complex clinical setting. A human factors and ergonomics (HF/E) approach brings a systems perspective to technology development, helping to ensure the AI works as expected not just in isolation but in the real world.

To support developers and other healthcare stakeholders, my colleagues and I wrote a white paper published by the Chartered Institute of Ergonomics and Human Factors. The paper outlines eight HF/E principles to consider when designing an AI healthcare application, to help assure its safety.

Some of these principles, such as workload, are well understood from experience with highly automated systems introduced from the 1970s onwards, and already have established methods and frameworks associated with them. Others also apply to automated systems but are becoming more complex with the introduction of AI. Situation awareness is one example: with AI, both the human and the system need awareness, so technology developers must consider how the AI develops that awareness and communicates it to others.

There are also principles that are entirely new, or that become more relevant, because of AI. A key one is the relationship between staff and patients. When a nurse checks an infusion pump, it's about much more than a technical adjustment: it's about checking in and finding out how the patient is really feeling, and both patients and staff see this as really important.

At its heart, healthcare is a relationship between the patient and the clinical team: it's about humans. AI can support this, but the technology must be right, not just in isolation but also in the messy, complex system that is a hospital. Technology is one part of the story. To assure the safety of AI in healthcare, we must remember the human, and the systems perspective introduced through HF/E is the way to do this.

Dr Mark Sujan
Director
Human Factors Everywhere

PI of the HF/AI project

Contact us

Assuring Autonomy International Programme

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH