Creating resources to help manufacturers and others meet the regulatory requirements for their machine learning healthcare tools.

The challenge

Although healthcare standards and regulations are in place, it is unclear how well suited they are to providing effective oversight of systems and tools that use artificial intelligence (AI) and machine learning (ML).

Assuring the safety of AI and ML tools in healthcare requires a whole-system approach, one that takes into account the roles of developers and manufacturers, regulators, hospital trusts, patients, and others. The challenge is to develop a safety assurance framework that is informed by needs on the ground, works in practice, and leads to the necessary changes in policy and practice.

The research

This project will help to establish a safety assurance framework to support healthcare manufacturers and deploying organisations to:

  • assure their ML-based healthcare technology
  • meet their regulatory requirements

The work is underpinned by the AAIP's Assurance of Machine Learning for use in Autonomous Systems (AMLAS) process. It will answer three key research questions:

  1. What published literature exists that aligns with the requirements of AMLAS to support the successful regulation of ML technologies in healthcare?
  2. Does a specific instance of AMLAS need to be established for healthcare?
  3. Can AMLAS be applied to practical ML-enabled healthcare systems and support compliance with the associated regulation?

The project aims to publish its outputs as guidance for regulatory bodies, manufacturers, and user communities, supporting their implementation and regulation of ML systems and informing the development of future regulations.

The progress

An initial literature review is well advanced, and its early conclusions indicate that:

  1. there is significant overlap in the key risk management standards that govern the healthcare domain, providing an opportunity to develop assurance artefacts that can sit across regulatory requirements
  2. there is little published literature supporting effective assurance of AI-enabled healthcare products
  3. whilst there are many standards related to digital health technology, there is no published standard that addresses these specific assurance considerations

The project, along with the concept of AI and its assurance challenges, has been introduced into the Clinical Safety Community of Interest (CSCIO) training syllabus run by NHS Digital. The first session was delivered in March 2021 and received very positive feedback.

Project partners

Contact us

Assuring Autonomy International Programme
assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Department of Computer Science, Deramore Lane, University of York, York YO10 5GH
