What if we could improve patient outcomes by using automated systems to administer just the right amount of medication at the right time to patients in intensive care units?
Intravenous medication preparation, administration and management are complex, and errors are routinely made. Highly automated technology, and technology that takes decisions independently of healthcare professionals, could help to provide personalised treatments while reducing errors. This project will answer the following research questions:
- what kinds of safety assurance do patients, healthcare professionals and regulators require?
- how well are current safety assurance methods able to evaluate highly automated intravenous infusion technology?
- can these methods form the basis for assurance strategies that are able to satisfy the assurance needs of the different stakeholders?
This is a mixed-methods study comprising evidence synthesis, a qualitative interview study, a case study to develop hazard and safety assurance arguments, and stakeholder engagement events.
Findings will be presented to patients, healthcare professionals, and manufacturers, and lessons learned will be shared with regulatory bodies.
The team have focused on the specific case of a patient who is in intensive care following sepsis secondary to pneumonia, and who requires blood sugar level control through intravenous insulin administration. Four use scenarios at different levels of automation and autonomy have been identified, ranging from the current use scenario to the scenario of an autonomous infusion device, which is able to dynamically adjust the insulin delivery based on an analysis of the patient’s physiological parameters.
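To make the most autonomous scenario concrete, the core control behaviour of such a device can be sketched as a simple feedback loop: read the patient's glucose, compare it with a target, and adjust the insulin infusion rate accordingly. The sketch below is purely illustrative; the function name, target, gain and rate limit are invented for this example and are not clinical values or part of the project's design.

```python
def insulin_rate(glucose_mmol_l: float,
                 target: float = 7.0,
                 gain: float = 0.5,
                 max_rate: float = 10.0) -> float:
    """Return a toy insulin infusion rate (units/hour) from a glucose reading.

    Proportional control: infuse more insulin the further the measured
    glucose is above target; infuse nothing at or below target. All
    parameter values here are hypothetical, for illustration only.
    """
    error = glucose_mmol_l - target
    # Clamp to a non-negative rate below a hard maximum.
    return max(0.0, min(max_rate, gain * error))

print(insulin_rate(7.0))   # at target -> 0.0
print(insulin_rate(12.0))  # above target -> 2.5
```

Even this toy loop makes the safety assurance questions visible: a real device would also need sensor validation, fault detection, and a safe handover path to the clinician when readings are implausible, which is exactly where the project's risk analyses focus.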
These use scenarios are being used to explore stakeholder perceptions of risk, handover between the autonomous system and the clinician, and the investigation of adverse events involving autonomous infusion devices. Risk analyses of the four use scenarios will be undertaken and a safety assurance strategy developed.
The literature review for the project has found that there is little published evidence on attitudes and perceptions about the safety impact of using autonomous intravenous infusion devices in intensive care. However, this is a specific case of the broader theme of the impact of autonomous systems and artificial intelligence in patient care, and the literature review now focuses on this broader topic, in particular:
- situational awareness
- impact on human performance
- the role of the patient
The project team has applied traditional methods for the analysis of the existing infusion process using the specific case of insulin (bow-tie analysis, human reliability analysis). They have also applied a more recent systems-based analysis technique to understand performance variability (Functional Resonance Analysis Method – FRAM).
The team will study how the introduction of autonomous infusion devices might affect performance variability.
Interviews have taken place with 22 stakeholders (patients, clinicians, IT managers, technology developers, regulators) to discuss these issues in depth. The team has engaged with the professional body for human factors in the UK, the Chartered Institute of Ergonomics and Human Factors (CIEHF), and is working with CIEHF to produce guidance on the consideration of human factors in the development and use of AI in healthcare.
- 14 April 2019 FT article "Autonomous machines: industry grapples with Boeing lessons" (interview with Mark Sujan - content requires paid access)
- April 2019 The Ergonomist "Using AI in patient care" Mark Sujan and Dominic Furniss
Presentations and papers
- Furniss, D., Nelson, D., Habli, I., White, S., Elliott, M., Reynolds, N., and Sujan, M. "Using FRAM to explore sources of performance variability in intravenous infusion administration in ICU: A non-normative approach to systems contradictions" Applied Ergonomics, July 2020
- Sujan, M., Furniss, D., Grundy, K., Grundy, H., Nelson, D., Elliott, M., White, S., Habli, I., and Reynolds, N. "Human factors challenges for the safe use of AI in patient care" BMJ Health and Care Informatics, 2019
- Safety and Security Culture: human factors of using AI in patient care - presentation at safety-meets-security, Stuttgart, November 2019
- Critical Barriers to Safety Assurance and Regulation of Autonomous Medical Systems, 29th European Safety and Reliability Conference (ESREL), September 2019
- Podcast - What role does design play in the mistakes we make? Dominic Furniss discusses his work to reduce human error through better design and procedures. 24 June 2019
- Using AI in Patient Care. Presentation at Risk and Safety Society (SaRS), London, 22 May 2019
- Safety of AI in healthcare. Presentation at EWICS meeting, Newcastle, 24 April 2019