Shakir Laher case study

Shakir Laher was a safety engineer at NHS Digital before recently moving to the Alan Turing Institute as a research application manager. He has experience of software development and of research projects focused on safety assurance for AI/ML.

Shakir’s link with the University of York began when he was a Master's student working with Professor Ibrahim Habli. He was aware of the University’s quality credentials and of the expertise of key personnel who had driven safety thinking over decades.

Contact us

Assuring Autonomy International Programme

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH

Some of the practices developed in York are the de facto standard, e.g. Goal Structuring Notation (GSN), which is used in industry to construct safety arguments.

Working with AAIP

NHS Digital has a remit to ensure technology is integrated, deployed and used safely to treat patients. Shakir was part of a demonstrator project called Safety Assurance FRamework for Machine Learning in the Healthcare Domain (SAFR), working with the British Standards Institution and Human Factors Everywhere to establish a safety assurance framework that supports healthcare manufacturers and deploying organisations in assuring their ML-based healthcare technology and meeting their regulatory requirements.

We engaged in a process of reviewing the draft AMLAS guidance through a series of workshops with two manufacturers who brought invaluable insights based on their hands-on use of the technology... Consequently, we published a review paper with a recommendation that AMLAS was fit for purpose as a safety assurance methodology when applied to healthcare ML technologies, although the development of healthcare specific supplementary guidance would benefit those implementing the methodology.

Adapting AMLAS

Through the project, Shakir is extending the AMLAS guidance for ‘adopters’ (e.g. clinicians) of ML so that they can assure the safety of the technologies in their clinical pathways. The supplementary guidance will be hosted on the national NHS Digital website.

We might have got to a similar process to AMLAS based on existing literature relating to life cycle safety assurance, but we never had the dedicated resource to achieve it in the same timeframe. Where the credit is due to the AAIP is that they tell you ‘how to do’ not just ‘what you need to do.’ The expertise in York, the level of detail, and depth is incomparable to anywhere else. If the AAIP wasn’t there you would immediately miss the intelligence – the high-level interactions and intellectual conversations that can help you with your chain of thought and put you straight.
