Moving on from prediction: the safe use of AI in making medical decisions about sepsis treatment.
Many AI-based healthcare systems have already been approved for clinical use (e.g. by the FDA), but these mainly replicate predictive tasks usually performed by humans, such as classifying skin lesions or predicting renal failure.
The challenge is in developing an AI-based decision support system (DSS) that can suggest medication doses, supporting a clinician in making decisions about medical care.
The team at Imperial College London was the first to develop an algorithm (the AI Clinician) that provides suggested doses of intravenous fluids and vasopressors in sepsis. This demonstrator project is investigating how to assure the safety of an AI-based DSS for sepsis treatment in intensive care. Through this, it will help to establish general regulatory requirements for AI-based DSS.
The project is structured around three key objectives:
- Review regulatory requirements in the UK and the USA
- Define the required behaviour of the AI-based DSS for sepsis treatment
- Deploy and test the DSS in pre-clinical safe settings
The team has completed a review of the regulatory background for AI-based medical devices and submitted this for publication. They have mapped the AMLAS (Assurance of Machine Learning in Autonomous Systems) framework onto the AI Clinician application, both to highlight the most important safety components of such a system and to direct the generation of safety assurance evidence that defines the desired behaviour of the AI Clinician.
They have defined five scenarios that correspond to likely unsafe decisions and compared the performance of the AI and human clinicians in these situations. The output of this analysis was fed back into the model design in the form of hard-coded rules, to further improve its safety.
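One way to picture how such hard-coded rules sit on top of a learned dose policy is as a final checking layer that vetoes or clamps unsafe suggestions before they reach the clinician. The sketch below is purely illustrative: the function name, rule set, and all clinical thresholds (dose caps, the 65 mmHg MAP cut-off, the fallback fluid rate) are assumptions for demonstration, not the project's actual rules or values.

```python
# Illustrative sketch: hard-coded safety rules applied to AI-suggested doses
# before they are shown to a clinician. All thresholds are hypothetical.

def apply_safety_rules(fluid_ml_h: float,
                       vaso_mcg_kg_min: float,
                       mean_arterial_pressure: float) -> tuple[float, float]:
    """Clamp or override model-suggested doses using hard-coded rules.

    fluid_ml_h: suggested IV fluid rate (mL/h)
    vaso_mcg_kg_min: suggested vasopressor dose (mcg/kg/min)
    mean_arterial_pressure: current MAP (mmHg)
    """
    # Rule 1: doses can never be negative.
    fluid = max(fluid_ml_h, 0.0)
    vaso = max(vaso_mcg_kg_min, 0.0)

    # Rule 2: cap the vasopressor dose at an illustrative upper limit.
    VASO_CAP = 1.0  # hypothetical maximum (mcg/kg/min)
    vaso = min(vaso, VASO_CAP)

    # Rule 3: never suggest "no treatment" for a hypotensive patient
    # (MAP below 65 mmHg); fall back to a minimum fluid rate instead.
    if mean_arterial_pressure < 65 and fluid == 0.0 and vaso == 0.0:
        fluid = 30.0  # hypothetical minimum maintenance rate (mL/h)

    return fluid, vaso
```

The design choice this illustrates is that the learned policy remains unchanged; safety is enforced by a small, auditable rule layer whose behaviour can be inspected and argued over in a safety case, independently of the machine learning model.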
Papers and presentations
- Jia, Y., Lawton, T., Burden, J., McDermid, J., and Habli, I. "Safety-driven design of machine learning for sepsis treatment" in Journal of Biomedical Informatics, March 2021
- McDermid, J., Jia, Y., Porter, Z., and Habli, I. "AI explainability: the technical and ethical dimensions" in Philosophical Transactions.
- Jia, Y., McDermid, J., and Habli, I. "Enhancing the value of counterfactual explanations for deep learning" in AIME 2021: Artificial Intelligence in Medicine in Europe
- Panel discussion in an AI Med (Artificial Intelligence in Medicine) “Clinician Series” webinar, 31 March 2021.