NHS Digital Health Safety Conference 2019

News | Posted on Friday 5 April 2019

The 2019 NHS Digital Health Safety Conference at the University of York: a beautiful city, 100 great minds and a whole day dedicated to discussing the safe introduction of new AI-based digital health technologies.

Digital health safety and AI: now and the future

Dr Ibrahim Habli


Artificial intelligence (AI) will bring us numerous health benefits: it could help us address the shortage of clinicians and increase access to healthcare in poorer parts of the world. But what does it mean for AI to be safe, and what is safe enough?

That was just part of the discussion we had at the NHS Digital Health Safety Conference, held at the University of York on 27 March 2019.

Farah Magrabi (Macquarie University) started by urging us to ask the right questions when evaluating AI safety. She framed safety as a system property: an AI system must be evaluated on its own, in the hands of its users, and in the context of its use. In healthcare, AI systems may:

  • work alone (potential risks associated with knowledge deficiencies and a mismatch between training and operational data)
  • work with a doctor or clinician (potential risks of automation-induced complacency, where the clinician assumes the AI will do the right thing)
  • work in complex socio-technical clinical environments

Matthew Cooke (NHS Improvement) gave the clinician’s perspective on the challenges of AI-based technologies. He raised the question of whether, when regulating AI, we should view it as a medical device or as a doctor. Should it be regulated in a way consistent with clinical practitioners?

Harold Thimbleby (Swansea University) outlined three steps to achieving digital health safety:

  1. better regulation
  2. digital safety ratings (making safety visible)
  3. digital qualifications (clinical teams need digital literacy and qualifications to understand technology before it’s used)

Two technology firms introduced us to new AI technologies to support healthcare: Babylon Health and Medopad. David Grainger from the Medicines and Healthcare products Regulatory Agency (MHRA) highlighted how the new software regulations will increase rigour in the approval process of software as a medical device.

Richard Hawkins, from the Assuring Autonomy International Programme (AAIP), outlined a method for assuring machine learning. Drawing on existing research studies, he and others in the AAIP team have developed a pattern for arguing the assurance of machine learning in medical diagnosis systems.

Niels Peek (University of Manchester) highlighted the learning health system paradigm and its ability to blend data science, improvement science, and technology. The final presentation was from Yan Jia (University of York), who outlined how to assure the safety of medication management using Bayesian networks as part of an overall safety case.
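To make the Bayesian-network idea concrete, here is a minimal sketch of the kind of probabilistic reasoning such a safety case might rest on. The node names (dose error, decision-support alert, patient harm) and all probabilities are invented for illustration and are not taken from the talk; a real safety case would use evidence-based estimates and a richer network.

```python
from itertools import product

# Illustrative (invented) probabilities for a tiny medication-safety network:
# DoseError -> Alert, and (DoseError, Alert) -> Harm.
p_dose_error = 0.01                      # P(a dose error occurs)
p_alert = {True: 0.95, False: 0.02}      # P(alert fires | dose error?)
p_harm = {                               # P(harm | dose error?, alert?)
    (True, True): 0.05,                  # error caught by an alert
    (True, False): 0.60,                 # error goes unalerted
    (False, True): 0.0,
    (False, False): 0.0,
}

def prob_harm():
    """Marginalise over dose error and alert states to get P(harm)."""
    total = 0.0
    for err, alert in product([True, False], repeat=2):
        p = p_dose_error if err else 1 - p_dose_error
        p *= p_alert[err] if alert else 1 - p_alert[err]
        p *= p_harm[(err, alert)]
        total += p
    return total

print(f"P(harm) = {prob_harm():.6f}")
```

The network makes the safety argument quantitative: changing the alert's sensitivity (here, `p_alert[True]`) directly changes the estimated probability of harm, which can then be compared against an acceptable-risk target in the overall safety case.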

The day finished with a panel discussion with some of the day’s presenters. When they were asked to provide the audience with one single step that must be taken to assure the safety of AI in healthcare, the panel offered complementary views, with two main suggestions that gathered interest from the audience:

  • development of an international expert panel with representatives from industry, NHS, academia and patient groups
  • agile regulation - a suggestion put forward by Matthew Cooke. Technology develops rapidly, but regulation can be slow to appear, restricting how quickly technologies can be introduced and their benefits realised. So-called agile regulation, by contrast, would develop in step with the technology, enabling its safe introduction and adoption. How do we credibly develop this? Do we need an initial set of regulations in place, with a mechanism for updating them quickly?

This year's NHS Digital Health Safety Conference was a great opportunity for clinicians, pharmacists, medical directors, industry, researchers and others to continue the dialogue about how we could safely introduce AI to our healthcare environment. We are looking forward to a similar event next year, with patients as a core part of the panel, and talks to help us identify their questions, fears, and enthusiasm.

Download resources and presentations

(Resource links are on an external website.)