
Professor Tom Lawton is a Critical Care Consultant and Head of Clinical AI at Bradford Teaching Hospitals NHS Foundation Trust. His role bridging academia and industry with real-world clinical perspectives means he can both benefit from and contribute to the AAIP as a Fellow and research collaborator.

His partnership with the AAIP was forged in 2017, after he was initially introduced to researchers working on complex systems at the University of York. He was then introduced to one of the AAIP team members, Professor Ibrahim Habli, with whom he shared many interests.

The timing was fortuitous. I had started working on modelling of Intensive Care Units and hospitals and together we started talking about the data coming out of the electronic patient record system in Bradford, and how we could use that data in safety assurance. This was the start of a genuine and continual collaboration.

Why collaboration is important

Tom’s work means he has very good connections in his spheres of influence, including clinicians at the hospital and academics at the Bradford Institute for Health Research. Over time, the AAIP has helped Tom widen his network of expertise and his access to people not just across the healthcare and academic sectors, but also across multiple disciplines.

Collaboration is essential. That doesn’t mean we have to work in the same way, though; in fact, a vital role of the AAIP has been to create a space where people with different opinions can gather together and discuss their approaches. This mitigates silos and groupthink, which is where the mistakes happen.

How is collaboration with the AAIP different?

Tom says that the AAIP’s approach to collaboration sets it apart from his experience elsewhere, where analysts might engage clinicians at the start of a project and conduct requirements analysis together, but then go away for six months only to return with something that won’t work in real-life practice.

“Regular contact is the key; for one part of a team to drift away from the other means simple course corrections aren’t made that could have saved time and improved the outcome," he said. "AAIP has felt like a continual collaboration. For example, if one of my colleagues comes across some data they aren’t quite sure about they text me, I respond, and they feel assured rather than stuck. And vice versa. The work progresses without delay.”

Working with the AAIP has been an amazing opportunity to work with experts in other fields. The AAIP provides a link to everyone and everything you might need – the AI person, the safety engineer, the lawyer, the philosopher – all the right expertise at the right moments. This means you can get answers to questions quickly rather than being stuck.

How has the collaboration made an impact?

Tom has collaborated with the team on a portfolio of projects and papers that look at ways AI and humans can work together in healthcare so that each does what it is best at. This work includes an AI in Medicine paper, a project exploring how AI can help predict the optimal time for ventilator extubation, and a project looking at data poverty in Bradford to ensure any systems are built on data inputs that guard against recommendations that might inadvertently reinforce health inequalities.

There have been some site-specific benefits at the hospital too, where Tom reports that they receive many approaches from companies wanting to sell them AI systems. “I can ask the right questions before any procurement is made," he says. "For example, ‘where’s the safety, the human factors, the legal elements?’ And if I think they don’t have the right ingredients quite yet, I can encourage them to contact the AAIP. We want to ensure that when we do bring systems in, we do it in a safe manner."

As a result of this work, we are getting closer to the point where some of these technologies will be seen in use. Hopefully they will be tested in the real world within the next two years, with some of the approaches espoused by the AAIP coming in even earlier. There have been important changes in the meantime too.

Contact us

Assuring Autonomy International Programme

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH