Sean White NHS Digital case study

AI could transform healthcare, complementing the work of skilled nurses and clinicians, but we need to build trust in the technology being developed. That starts by increasing the skills and knowledge of the teams that the autonomous systems will work alongside.

Sean White is a Safety Engineering Manager within the NHS Digital Clinical Safety Team. He has worked with the Assuring Autonomy team for many years, aiming to develop and grow his team's knowledge and experience with AI and autonomy.


In the last four to five years, AI as a concept has exploded across society. Its benefits are known, but there is also a great deal of misunderstanding about it – a belief that the technology is running wild, changing its decisions and behaving unpredictably. We need to correct this perception and assure society that it is safe to exploit this technology. I also realised that, as a team, we needed to develop and grow our knowledge and experience with this technology to support our national safety assurance capability. Working with AAIP would help address some of these opportunities and challenges.

Developing bespoke CPD

We started the development of our joint training with a pilot one-day course delivered by the AAIP team to 100 NHS Digital delegates. This explored what AI is and how it can be used and assured from a healthcare perspective. It was targeted at clinicians, safety engineers, developers, project managers, and people allied to the role of ‘clinical safety officer’. Feedback was strong, so we developed a deeper course.

This [new CPD] used AMLAS as a specific assurance model, drew on healthcare scenarios, and included an in-depth case study. It was purposely delivered to a small community of 25 external delegates, including clinical safety officers, clinicians, and developers. The course was fully subscribed, and participants gained a greater understanding of how to effectively assure AI for use in a care pathway. The delegates were ‘close to the use of the technology or product’ and would be able to influence and impact others around them. We ran this course twice. Feedback from learners was that they had a stronger understanding of the state of the art, that it corrected misperceptions about AI, and that it encouraged them to think differently about how to assure its safe deployment. Using AMLAS in the training helped learners appreciate the need to be systematic in their approach to assurance.

The importance of collaboration

As well as the CPD, we have collaborated with Sean and his team on other areas of work, including demonstrator projects and developing and delivering ‘AI in the NHS’ national conferences in 2019, 2020 and 2022. These brought together manufacturers, human factors specialists, health organisations, and regulators, all looking from different perspectives at how to use AI safely. Each area of work takes a multidisciplinary approach, which helps Sean bring other aspects of assurance into focus.

They [AAIP] provided the opportunity and funding that enabled me to ringfence resource to focus on our research and education objectives. But more than that, they have given us access to experts – including Ibrahim and Richard, whose infectious enthusiasm and championing of healthcare and safety encourage success.

A look to the future

Our partnership with Sean and the NHS Digital team continues, and we will run more training in 2023 and beyond. We are also discussing the development of additional training, such as an introduction-to-AI e-learning package that could be made accessible to everyone working in the NHS.

I would like to see understanding of AI grow and skills in effective assurance develop; the AAIP gives us the resources and knowledge to achieve this. The opportunities for AI in healthcare are immense. It could help with early screening and triaging, bedside monitoring and care – indeed any labour-intensive task – with the benefit that staff are released to conduct higher-skilled work. AI is never going to replace nurses and clinicians; rather, it will supplement or complement what they’re doing to deliver care to patients. But we need to build trust in AI, and there’s still a long way to go.

Find out more

Contact us

Assuring Autonomy International Programme

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH