What if we could improve patient outcomes by using automated systems to administer just the right amount of medication at the right time to patients in intensive care units?
Intravenous (IV) medication preparation, administration and management are complex, and errors are routinely made. Highly automated technology, including technology that makes decisions independently of healthcare professionals, could help to provide personalised treatment while reducing errors.
Considering IV medication management within an intensive care unit, the challenge focused on three questions:
- What kinds of safety assurance do patients, healthcare professionals and regulators require?
- How well are current safety assurance methods able to evaluate highly automated intravenous infusion technology?
- Can these methods form the basis for assurance strategies that are able to satisfy the assurance needs of the different stakeholders?
“It is only natural that patients and their relatives have concerns where autonomous devices are used in a medical setting. I hope that patient involvement in the project helped to allay these concerns, because there is a real and vital need for this technology in the treatment of patients in ICU.” Howard Grundy, patient’s relative
The focus of the study was the clinical system rather than the technology as such, and it looked at safety assurance challenges at the intersection of engineering and human factors.
Four use scenarios at different levels of automation and autonomy were identified. These were used to explore stakeholder perceptions about risk, handover, and the investigation of adverse events involving autonomous infusion devices.
Three complementary analysis approaches (the Functional Resonance Analysis Method (FRAM), the Systematic Human Error Reduction and Prediction Approach (SHERPA), and the NHS Digital SMART approach) were used to explore the safety issues around the use of autonomous infusion technology in intensive care.
The project has made six recommendations aimed at technology developers, healthcare providers and regulators:
- Developers should consider patient experience and the impact on the patient-clinician relationship.
- Adoption of robotic and autonomous systems (RAS) should be accompanied by training to enable clinicians to maintain core clinical skills, and to educate clinicians about the limitations of AI.
- Healthcare providers should consider the introduction of new AI specialist roles.
- Hazard analysis should be performed at the level of the clinical pathway or clinical system.
- Developers should design for situation awareness, handover between clinicians and RAS, and human performance variability.
- Regulators should promote existing best practices and establish an integrated safety governance framework for AI regulation in healthcare.
During the course of the project, the team established collaborations and partnerships with a number of bodies, including the Chartered Institute of Ergonomics and Human Factors (CIEHF), NHSX and BSI, where these recommendations will be considered further.
“The work on human factors in the design and use of autonomous infusion pumps provided a concrete, real world use case to focus The Chartered Institute of Ergonomics and Human Factors (CIEHF) contribution in the area of digital health and artificial intelligence. We have now established an active community of ergonomists, clinicians, technology developers and regulators. They are continuing to build national and international partnerships to provide guidance on current issues such as the role of AI in addressing global health inequalities.” Dr Noorzaman Rashid, CEO Chartered Institute of Ergonomics and Human Factors
Body of Knowledge guidance
- 1.1 Identifying hazards
- 1.1.2 Defining the operating environment
- 1.1.3 Defining operating scenarios
- 1.2 Identifying hazardous system behaviour
- 1.2.1 Considering human-machine interaction
- 1.3.1 Validation of safety requirements
- 2.6 Handling change during operation
- 14 April 2019 FT article "Autonomous machines: industry grapples with Boeing lessons" (interview with Mark Sujan; content requires paid-for access)
- April 2019 The Ergonomist "Using AI in patient care" Mark Sujan and Dominic Furniss
Presentations, papers and books
- Sujan, M. "Muddling through in the intensive care unit – A FRAM analysis of intravenous infusion management" in Braithwaite, Hollnagel and Hunte (eds), Resilient Health Care, Volume 6, pp. 101–106. CRC Press, 2021
- Furniss, D., Nelson, D., Habli, I., White, S., Elliott, M., Reynolds, N., and Sujan, M. "Using FRAM to explore sources of performance variability in intravenous infusion administration in ICU: A non-normative approach to systems contradictions" Applied Ergonomics (Elsevier), July 2020
- Sujan, M., Furniss, D., Grundy, K., Grundy, H., Nelson, D., Elliott, M., White, S., Habli, I., and Reynolds, N. “Human factors challenges for the safe use of AI in patient care.” BMJ Health and Care Informatics, 2019
- Safety and Security Culture: human factors of using AI in patient care - presentation at safety-meets-security, Stuttgart, November 2019
- Critical Barriers to Safety Assurance and Regulation of Autonomous Medical Systems, 29th European Safety and Reliability Conference (ESREL), September 2019
- Podcast - What role does design play in the mistakes we make? Dominic Furniss discusses his work to reduce human error through better design and procedures. 24 June 2019
- Using AI in Patient Care. Presentation at Risk and Safety Society (SaRS), London, 22 May 2019
- Safety of AI in healthcare. Presentation at EWICS meeting, Newcastle, 24 April 2019