Moving on from prediction: the safe use of AI in making medical decisions about sepsis treatment.

This demonstrator project investigates the safety assurance of an AI-based decision support system for sepsis treatment in intensive care, helping establish general regulatory requirements for these systems and using human expert knowledge to define safety rules.

Project report

This report summarises an observational human-AI interaction study conducted in a high-fidelity ICU simulation suite at the Clinical Skills laboratory of Imperial College London.

Final Project Report

The challenge

Many AI-based healthcare systems have already been approved for clinical use (e.g. by the FDA). But these mainly focus on replicating predictive tasks usually performed by humans, such as classifying skin lesions or predicting renal failure.

The challenge is in developing an AI-based decision support system (DSS) that can suggest medication doses, supporting a clinician to make a decision about medical care.

The research

The team at Imperial College London was the first to develop an algorithm (the AI Clinician) that provides suggested doses of intravenous fluids and vasopressors in sepsis. This demonstrator project is investigating how to assure the safety of an AI-based DSS for sepsis treatment in intensive care. Through this, it will help to establish general regulatory requirements for AI-based DSS.

The project is structured around three key objectives:

  1. Review regulatory requirements in the UK and the USA
  2. Define the required behaviour of the AI-based DSS for sepsis treatment
  3. Deploy and test the DSS in pre-clinical safe settings

The progress

The team defined five scenarios that correspond to likely unsafe decisions and compared the performance of the AI and human clinicians in these situations. They also mapped the AMLAS (Assurance of Machine Learning in Autonomous Systems) framework onto the AI Clinician application.
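To make the comparison concrete, here is a minimal, hypothetical sketch of how decisions from clinicians and from an AI agent could be checked against an expert-defined safety rule. The rule, thresholds, column names, and data are illustrative assumptions only, not the project's actual scenarios or datasets.

```python
# Hypothetical sketch: counting how often the AI's suggested doses and the
# clinicians' recorded doses breach an expert-defined safety rule.
# The rule, thresholds, and data are illustrative only.
import pandas as pd

# Each row is one patient state: the clinician's recorded vasopressor dose
# and the AI's suggested dose for the same state (mcg/kg/min).
decisions = pd.DataFrame({
    "map_mmHg":       [55, 72, 48, 90, 60],          # mean arterial pressure
    "clinician_vaso": [0.0, 0.1, 0.0, 0.5, 0.2],
    "ai_vaso":        [0.3, 0.1, 0.4, 0.0, 0.2],
})

def breaches_rule(map_mmHg: float, vaso_dose: float) -> bool:
    """Illustrative rule: hypotension (MAP < 65 mmHg) left untreated,
    i.e. no vasopressor given despite low blood pressure."""
    return map_mmHg < 65 and vaso_dose == 0.0

clinician_breaches = sum(
    breaches_rule(r.map_mmHg, r.clinician_vaso) for r in decisions.itertuples()
)
ai_breaches = sum(
    breaches_rule(r.map_mmHg, r.ai_vaso) for r in decisions.itertuples()
)
print(f"Clinician rule breaches: {clinician_breaches}, AI rule breaches: {ai_breaches}")
```

In practice the same kind of check would be run over many patient trajectories and several expert-defined rules, giving breach rates for the AI and for clinicians under comparable circumstances.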

They then demonstrated how human expert knowledge could be leveraged to define safety rules, and how those rules could help assess and improve the safety of AI-based clinical DSS. They compared how often, and under what circumstances, human clinicians and the AI Clinician would have broken a number of ICU expert-defined safety rules. They also improved the AI agent by modifying the reward signal during the training phase, adding intermediate negative rewards each time those safety rules were breached. The team's work demonstrated that the newly trained model was even safer than the initial AI Clinician with respect to the considered safety scenarios.
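The reward-shaping idea can be sketched as follows, assuming a simple tabular Q-learning setup. The state and action encoding, penalty size, and safety check are hypothetical assumptions for illustration, not the project's actual training pipeline.

```python
# Illustrative sketch of reward shaping: an intermediate negative reward is
# added whenever a proposed action breaches an expert-defined safety rule.
# The environment, penalty size, and rule check are all hypothetical.
import numpy as np

N_STATES, N_ACTIONS = 10, 5
SAFETY_PENALTY = -10.0        # extra negative reward for unsafe actions
GAMMA, ALPHA = 0.99, 0.1      # discount factor and learning rate

Q = np.zeros((N_STATES, N_ACTIONS))

def violates_safety_rule(state: int, action: int) -> bool:
    """Hypothetical rule: the highest dose bin is unsafe in low-severity states."""
    return action == N_ACTIONS - 1 and state < 3

def update(state: int, action: int, base_reward: float, next_state: int) -> None:
    """One Q-learning update using the shaped (penalised) reward."""
    reward = base_reward
    if violates_safety_rule(state, action):
        reward += SAFETY_PENALTY          # intermediate negative reward
    td_target = reward + GAMMA * Q[next_state].max()
    Q[state, action] += ALPHA * (td_target - Q[state, action])

# Example transition, e.g. drawn from an offline dataset of ICU trajectories.
update(state=1, action=4, base_reward=0.0, next_state=2)
```

Penalising rule breaches during training steers the learned policy away from actions that experts consider unsafe, without having to redesign the underlying learning algorithm.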

The team ran a simulation study in a live high-fidelity ICU simulation suite, testing the behaviour of 40 human doctors of various levels of seniority when presented with an AI-based clinical decision support system. The study examined which factors influence doctors to trust or question the suggestions made by an AI in such an environment. Human factors experts Professor Peter Buckle and Dr Massimo Micocci from Imperial College London were involved in the design of the protocol.

The ethical implications of using AI in healthcare are being considered in collaboration with ethics specialist Michael McAuley, with a particular focus on the link between ethical implications and levels of autonomy of a system.


Rewards and reprimands: teaching a medical AI to be safer

How multidisciplinary collaboration is improving the safety of an AI-based clinical decision support system

Blog post by Paul Festor and Matthieu Komorowski, Imperial College London

Papers and presentations

Project partners

Contact us

Assuring Autonomy International Programme

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH