
The strain on healthcare is more pronounced than ever. Every year there are more than 24 million attendances at English emergency departments and over 350 million GP appointments.

The use of artificial intelligence (AI) tools in healthcare could help ease the pressure, but those tools must be reliable to support a coherent and safe healthcare environment.

How can we assure the safety of AI in such a complex and unpredictable setting? How can we embed reliable systems that are compatible with a fast-paced healthcare environment? 

The human factors (HF) approach is one route to unlocking a safer world of AI in healthcare. It focuses on the significant interactions between clinicians, AI tools, and the environment, considering the system as a whole rather than the technology in isolation.


"The crucial point for any developer is that a human factors approach optimises the machine and the human."

Dr Mark Sujan, Managing Director - Human Factors Everywhere, and AAIP Fellow and collaborator

Guiding a systems approach

An AAIP demonstrator project led by Dr Mark Sujan involved collaboration with key stakeholders, including the Chartered Institute of Ergonomics and Human Factors (CIEHF), the Australian Alliance for AI in Healthcare (AAAiH) and the Society for Health Care Innovation (SHCI).

The team developed a white paper that outlines eight principles to guide the design, development, regulation, and use of AI in healthcare: 

  1. Situation awareness 
  2. Workload 
  3. Automation bias 
  4. Explanation and trust 
  5. Human-AI teaming 
  6. Training 
  7. Relationships between staff and patients 
  8. Ethical issues 

The white paper highlights issues that developers and regulators of AI may face and offers guidance on how these challenges can be overcome.

One such challenge is how to harness the strengths of both the AI tool and the human clinicians when they work together as a team. The white paper emphasises the importance of shared goals and the benefit of drawing on established teamwork models, such as the Big Five model of teamwork, when considering the use of AI within the team.

Successful human-AI teaming also requires three supporting mechanisms: shared mental models, mutual trust, and closed-loop communication. It is the combination of the key teamwork behaviours and these supporting mechanisms that leads to effective human-AI teams.
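To make the structure of the model concrete, the sketch below represents the Big Five behaviours and the three supporting mechanisms as a simple design-review checklist. The behaviour and mechanism names come from the Big Five model of teamwork (Salas et al.); the checklist structure, the function name, and the example gaps are illustrative assumptions, not something the white paper prescribes.

```python
# Illustrative sketch only: a simple review checklist based on the
# Big Five model of teamwork. The data structure and the example
# below are assumptions for demonstration, not taken from the
# CIEHF white paper.

BIG_FIVE_BEHAVIOURS = [
    "team leadership",
    "mutual performance monitoring",
    "backup behaviour",
    "adaptability",
    "team orientation",
]

SUPPORTING_MECHANISMS = [
    "shared mental models",
    "mutual trust",
    "closed-loop communication",
]

def review_teaming_design(addressed: set[str]) -> dict[str, list[str]]:
    """Return the Big Five items a human-AI teaming design has not yet addressed."""
    return {
        "missing_behaviours": [b for b in BIG_FIVE_BEHAVIOURS if b not in addressed],
        "missing_mechanisms": [m for m in SUPPORTING_MECHANISMS if m not in addressed],
    }

# Hypothetical design that covers communication and trust but little else.
gaps = review_teaming_design({"closed-loop communication", "mutual trust"})
print(gaps["missing_behaviours"])  # all five behaviours still unaddressed
print(gaps["missing_mechanisms"])  # ['shared mental models']
```

Treating the model as an explicit checklist makes it easy to see, for a given human-AI teaming concept, which behaviours and mechanisms have no design answer yet.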

Another challenge considered by the white paper is how AI tools may affect workload. It offers guidance on how to assess and measure workload in different situations, allowing developers to evaluate the impact of different design options when embedding AI in clinical systems.
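To illustrate what measuring workload can look like in practice, the sketch below computes a raw NASA Task Load Index (NASA-TLX) score, one widely used subjective workload instrument in human factors. The white paper is the authority on which methods it recommends; the choice of NASA-TLX here, and all of the ratings below, are illustrative assumptions.

```python
# Illustrative sketch: computing a raw (unweighted) NASA-TLX workload score.
# NASA-TLX is a standard human factors instrument; whether and how it
# applies to a given clinical AI deployment is a design decision.
# The ratings below are invented for demonstration.

NASA_TLX_DIMENSIONS = [
    "mental demand",
    "physical demand",
    "temporal demand",
    "performance",
    "effort",
    "frustration",
]

def raw_tlx(ratings: dict[str, float]) -> float:
    """Raw NASA-TLX: the mean of the six 0-100 subscale ratings (lower is better)."""
    missing = [d for d in NASA_TLX_DIMENSIONS if d not in ratings]
    if missing:
        raise ValueError(f"missing ratings for: {missing}")
    return sum(ratings[d] for d in NASA_TLX_DIMENSIONS) / len(NASA_TLX_DIMENSIONS)

# Hypothetical comparison of two design options for a clinical task.
with_ai = {"mental demand": 55, "physical demand": 10, "temporal demand": 60,
           "performance": 25, "effort": 50, "frustration": 30}
without_ai = {"mental demand": 75, "physical demand": 10, "temporal demand": 80,
              "performance": 40, "effort": 70, "frustration": 55}

print(f"raw TLX with AI:    {raw_tlx(with_ai):.1f}")     # 38.3
print(f"raw TLX without AI: {raw_tlx(without_ai):.1f}")  # 55.0
```

Comparing scores across design options in this way is one simple means by which a developer could quantify the workload impact the white paper asks them to consider.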

Influence

The publication of the white paper influenced a forthcoming standard from the British Standards Institution: BS 30440, Validation framework for the use of AI within healthcare.

"BS 30440 aims to define a validation framework for the use of AI within healthcare. Through working with the AAIP, we have ensured that human factors and ergonomics have been considered and that appropriate clauses have been put in place. Specific references to usability, consistency, training, explanation, and control, as well as to automation bias and over-reliance, were deemed critical components of the standard in ensuring that safe and effective AI models are developed and deployed."

Haider Husain, Chief Operating Officer - Healthinnova Limited, and Panel Chair for BS 30440 (Validation framework for the use of AI within healthcare, British Standards Institution)

The CIEHF reports that the white paper received around 2,500 page visits and downloads in its first year of publication. This reach could shape how AI is designed for healthcare, helping to ensure that AI tools operate safely and effectively within clinical environments.

Contact us

Assuring Autonomy International Programme

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH