1.1 Identifying hazards

Practical guidance - healthcare

Author: SAM demonstrator project

Identifying hazards is an important function in the design and safety assurance of robotics and autonomous systems (RAS). The success of this function will vary. Some of the drivers for this performance variability are intrinsic to the function (e.g. the experience of the analyst performing it and the method they choose to use), some are extrinsic to the function (e.g. the time and resources available for activities to identify hazards), and some are functionally coupled to wider functions upstream and downstream in the system. For example, an analyst who has done a similar project before brings a sharper choice of method and a richer sense of the hazards they “see”; this sparks more grounded debate and ideas with the subject matter experts and engineers, which in turn leads to design improvements.

Here, we expand the notion of “identifying hazards” beyond a technical issue focused on the mechanical application of methods to a socio-technical issue that includes the skills, knowledge and experience of the analyst; who the rest of the team are and how they are involved; the processes that are followed; the time and resources allowed; and the concepts and theory that guide thinking. As we see below, these drivers can be mapped so we have a better idea of what makes the performance of “identifying hazards” flourish rather than stall.

Scope of analysis
Identifying hazards for RAS in real-world settings can be complex. In such cases simplifying assumptions might be made about working practices and the scope of analysis. However, a study focused on the technology and the primary task would give a quite different perspective compared to a study focused on the context (e.g. clinical pathway), primary and secondary tasks, and broader related activities.

Granularity of analysis
Time, resources and perspective can also affect the granularity of the analysis. There is a trade-off between the effort one expends and the value one gets back, presumably with diminishing returns. However, some subtle interactions and unintended consequences might only reveal themselves at a fine-grained level of detail.

Experience of analyst
The experience of the analyst leading the hazard identification exercise will have a significant effect on how it is organised, who is involved and what processes are followed. The analyst might also have specific skills and knowledge to enlighten the hazard analysis.

Engagement with subject matter experts (SMEs) and stakeholders
The analyst will only be able to “see” so much. SMEs and stakeholders need to be engaged effectively to bring their knowledge, experience and insight to enlighten the hazard analysis. Who is involved and how they are engaged will influence success.

Representations
Communicating how the task is currently done, and how the task might be reconfigured with a RAS, can be complex. Different representations can be used (e.g. process maps, task analyses and functional diagrams). Pictures and diagrams might also convey issues to do with the context, layout and interface design. All of these representations have strengths and limitations, and they will shape the sort of dialogue and feedback that can be achieved with SMEs and stakeholders.

Concepts, theory and guidewords
Different approaches and methods will have different concepts, theory and guidewords that will shape thought and dialogue. For example, more traditional engineering-based approaches might focus on technical issues, whereas human factors approaches might more readily draw attention to issues of situation awareness and attention. Methods focusing on a single task might miss important goal conflicts and trade-offs between activities. Methods focused on failure might miss important resilience mechanisms that help to create safety.
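To make the role of guidewords concrete, the sketch below shows how a guideword-based method (in the style of HAZOP) mechanically pairs task steps with deviation guidewords to generate prompts for discussion with SMEs. This is a minimal illustration only: the guidewords and task steps are hypothetical examples, not drawn from any particular standard or clinical pathway, and real analyses rely on the ensuing expert dialogue rather than the prompts themselves.

```python
# Illustrative sketch of a HAZOP-style prompt generator.
# The guidewords and task steps are hypothetical examples chosen for
# illustration; they are not from any specific standard or pathway.

GUIDEWORDS = ["no/not", "more", "less", "as well as", "other than", "too late"]

task_steps = [
    "robot dispenses medication dose",
    "clinician confirms patient identity",
]

def hazard_prompts(steps, guidewords):
    """Pair each task step with each guideword to prompt discussion."""
    for step in steps:
        for gw in guidewords:
            yield f"What if '{step}' deviates as '{gw}'?"

prompts = list(hazard_prompts(task_steps, GUIDEWORDS))
print(len(prompts))  # 2 steps x 6 guidewords = 12 prompts
```

Note how the prompts are exhaustive but shallow: they show why the choice of guideword set shapes what the team is led to “see”, and why a set focused on failure deviations alone would not surface resilience mechanisms.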


Indeed, there is some suggestion from recent literature that to ensure system safety we must not only attend to identifying hazards and reducing risks following the ALARP principle (Safety-I), but also understand the (sometimes hidden and implicit) positive behaviours that create safety (Safety-II). We must have a good understanding of how safety is normally created in everyday work, otherwise the introduction of RAS might inadvertently erode resilience behaviours. For example, the official view of the system might hold that verbal medication orders should never be taken and that medication prescriptions should always be complete; however, enforcing these rules could lead to delayed medication, workarounds, non-compliance and disuse. Sometimes seemingly erroneous behaviour is practised to keep the system safe.

Identifying hazards will not be perfect and factors driving its performance need to be understood.

Contact us

Assuring Autonomy International Programme

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Department of Computer Science, Deramore Lane, University of York, York YO10 5GH

Related links

Download this guidance as a PDF:
