Creating resources to help manufacturers and others meet the regulatory requirements for their machine learning healthcare tools.

While healthcare standards and regulations are in place, their suitability for providing effective oversight of systems and tools that make use of artificial intelligence (AI) and machine learning (ML) remains a challenge.

Draft standard consultation

The development of the BSI British standard BS 30440, a validation framework for the use of AI in healthcare, was supported by the project team and features AMLAS. The standard is out for consultation until 5 December 2022.

BSI British standard: BS 30440

The challenge

To assure the safety of AI and ML tools in healthcare, a whole-system approach is needed that takes into account the roles of developers and manufacturers, regulators, hospital trusts, patients, and others. The challenge lies in developing a safety assurance framework that is informed by needs on the ground, works in practice, and leads to the necessary changes in policy and practice.

The research

This project will help to establish a safety assurance framework that supports healthcare manufacturers and deploying organisations to:

  • assure their ML-based healthcare technology
  • meet their regulatory requirements

The work is underpinned by the AAIP's Assurance of Machine Learning for use in Autonomous Systems (AMLAS) process. It will answer three key research questions:

  1. What published literature exists that aligns with the requirements of AMLAS to support the successful regulation of ML technologies in healthcare?
  2. Does a specific instance of AMLAS need to be established for healthcare?
  3. Can AMLAS be applied to practical ML-enabled healthcare systems and support compliance with the associated regulations?

The project aims to publish outputs as guidance for regulatory bodies, manufacturers, and user communities, to support their implementation and regulation of ML systems, and to influence the development of future regulations.

The progress

An initial literature review is well advanced, with early conclusions indicating that:

  1. there is significant overlap in the key risk management standards that govern the healthcare domain, providing an opportunity to develop assurance artefacts that can sit across regulatory requirements
  2. there is little published literature supporting effective assurance of AI-enabled healthcare products
  3. whilst there are many standards related to digital health technology, there is no published standard addressing these specific assurance considerations

Work was conducted to introduce the project, and the concept of AI and its assurance challenges, into the Clinical Safety Community of Interest (CSCIO) training syllabus run by NHS Digital.

Project partners have appraised the AMLAS methodology, evaluating its suitability for assuring AI-enabled healthcare products. The principal conclusions are that it aligns closely with actual practice and that, whilst some healthcare-specific considerations need further elaboration, these can be addressed through supplementary guidance rather than a bespoke, standalone methodology.

The current focus of the work is to develop "deployment" patterns to supplement AMLAS. These patterns will be evaluated through a series of workshops conducted in the context of the national breast screening programme, where AI is used as a second mammogram reader.

The team are also developing and delivering training and conference events that have been well received across the digital clinical safety domain.

As part of the project, BSI has published a new report giving a comprehensive analysis of current standards relevant to health software, with a particular focus on AI/ML standards that may be used in safety assurance processes. The report also provides an overview of AI-specific standards in development that will be essential tools for compliance with upcoming regulations and patient safety obligations.

Read the Standards Landscape Report

Papers

  • Laher, S., Brackstone, C., Reis, S., Nguyen, A., White, S., and Habli, I. "Review of the AMLAS methodology for application in healthcare". arXiv, September 2022.

Contact us

Assuring Autonomy International Programme

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH