What if we could improve patient outcomes by using automated systems to administer just the right amount of medication at the right time to patients in intensive care units?
The SAM project focused on how autonomy could safely be introduced into intravenous (IV) medication preparation, administration and management, and on the types of safety assurance needed by patients, healthcare professionals and regulators.
IV medication preparation, administration and management are complex, and errors are routinely made. Highly automated technology, and technology that makes decisions independently of healthcare professionals, could help to provide personalised treatments while reducing errors.
Considering IV medication management within an ICU setting, this project focused on three areas:
- What kinds of safety assurance do patients, healthcare professionals and regulators require?
- How well are current safety assurance methods able to evaluate highly automated intravenous infusion technology?
- Can these methods form the basis for assurance strategies that are able to satisfy the assurance needs of the different stakeholders?
It is only natural that patients and their relatives have concerns where autonomous devices are used in a medical setting. I hope that patient involvement in the project helped to allay these concerns, because there is a real and vital need for this technology in the treatment of patients in ICU.
Howard Grundy, patient’s relative
The focus of the study was the clinical system rather than the technology as such, and it looked at safety assurance challenges at the intersection of engineering and human factors.
Four use scenarios at different levels of automation and autonomy were identified. These were used to explore stakeholder perceptions about risk, handover, and the investigation of adverse events involving autonomous infusion devices.
Three complementary analysis approaches (the Functional Resonance Analysis Method (FRAM), the Systematic Human Error Reduction and Prediction Approach (SHERPA), and the NHS Digital SMART approach) were used to explore the safety issues around the use of autonomous infusion technology in intensive care.
The project has made six recommendations aimed at technology developers, healthcare providers and regulators:
- Developers should consider the patient experience and the impact on the patient-clinician relationship.
- Adoption of robotic and autonomous systems (RAS) should be accompanied by training that enables clinicians to maintain core clinical skills and educates them about the limitations of AI.
- Healthcare providers should consider the introduction of new AI specialist roles.
- Hazard analysis should be performed at the level of the clinical pathway or clinical system.
- Developers should design for situation awareness, handover between clinicians and RAS, and human performance variability.
- Regulators should promote existing best practices and establish an integrated safety governance framework for AI regulation in healthcare.
The team established collaborations and partnerships with a number of bodies, including the Chartered Institute of Ergonomics and Human Factors (CIEHF), NHSX and BSI, where these recommendations are being considered further. This includes the publication of Human Factors in Healthcare AI, a CIEHF white paper representing the outcomes of work by Dr Mark Sujan and colleagues as part of the HF/AI demonstrator project.
- Sujan, M., White, S., Habli, I., and Reynolds, N. "Stakeholder perceptions of the safety and assurance of artificial intelligence in healthcare" in Safety Science, November 2022
- Sujan, M. "Muddling through in the intensive care unit – A FRAM analysis of intravenous infusion management" in Braithwaite, Hollnagel and Hunte (eds.), Resilient Health Care, Volume 6, pp. 101–106. CRC Press, 2021
- Furniss, D., Nelson, D., Habli, I., White, S., Elliott, M., Reynolds, N., and Sujan, M. "Using FRAM to explore sources of performance variability in intravenous infusion administration in ICU: A non-normative approach to systems contradictions" in Applied Ergonomics, July 2020
- Sujan, M., Furniss, D., Grundy, K., Grundy, H., Nelson, D., Elliott, M., White, S., Habli, I., and Reynolds, N. "Human factors challenges for the safe use of AI in patient care" in BMJ Health and Care Informatics, 2019
- Safety and Security Culture: human factors of using AI in patient care - presentation at safety-meets-security, Stuttgart, November 2019
- Critical Barriers to Safety Assurance and Regulation of Autonomous Medical Systems, 29th European Safety and Reliability Conference (ESREL), September 2019
- Podcast - What role does design play in the mistakes we make? Dominic Furniss discusses his work to reduce human error through better design and procedures. 24 June 2019
- Using AI in Patient Care. Presentation at the Safety and Reliability Society (SaRS), London, 22 May 2019
- Safety of AI in healthcare. Presentation at EWICS meeting, Newcastle, 24 April 2019
- 14 April 2019 FT article "Autonomous machines: industry grapples with Boeing lessons" (interview with Mark Sujan; content requires paid access)
- April 2019 The Ergonomist "Using AI in patient care" Mark Sujan and Dominic Furniss