1.2.1 Considering human/machine interactions

Practical guidance - cobots (collaborative robots)

Author: Dr Benjamin Lesage, University of York

Collaborative robots (cobots) [1, 2] are designed to operate alongside human operators in manufacturing environments, where they contribute to processing components or assemblies. Cobots offer the opportunity to reduce the exposure of operators to potentially harmful conditions, such as high noise levels, sustained vibrations, high temperature gradients, or volatile materials. Human operators, in comparison, are better suited to complex and dynamic assembly processes.

The assurance objectives require the identification of situations where interactions between cobots and human operators may cause hazards. Current practice for operator safety includes the use of cages to segregate the robot from human operators [3], thus reducing the range of possible interactions. Requirements on the cobot environment, in particular more complex sensing and control operations, can help relax these constraints and remove the barriers.

The identification of hazardous situations should account for the specifics of cobots. Cobots are characterised by their proximity to, and frequent interaction with, human operators in order to accomplish their work. The resulting workspace is a shared, dynamic environment to which the cobot should adapt to ensure the safety of the operator. Flexibility is also required to allow the cobot to adapt to task changes or variations in its setup. Passive safety features (e.g. a restricted maximum payload or velocity) can support safe operation but should not hinder the capability of the cobot to perform highly sensitive operations.

Summary of approach

  • The analysis process for a cobot environment is inherently iterative, and it needs to be refined as the system matures with the inclusion of new components, assumptions, or safety measures.
  • The identification of hazardous behaviours should cover external safety requirements, such as the operational requirements of the implemented manufacturing process or the cobot's passive safety parameters.
  • Human operators should be considered as part of the control structure, both to account for training requirements and to identify the feedback to and from the cobot required for safe operation.

Applying the STPA process

STPA (System-Theoretic Process Analysis) [4], introduced in Section 1.2 of the Body of Knowledge, is a technique for the identification of hazards and unsafe scenarios in a system. STPA regards the system as a whole and acknowledges that properties emerge from the interactions of its different components, including human operators. Controlling these emergent properties requires the consideration and control not only of individual components but also of their interactions. There is thus a good match between the assurance objectives and the outcomes of the STPA technique. Below we describe the results of applying the STPA process to the analysis of a generic cobot setup, in order to define a safe cooperative environment. The main steps of STPA are:

  1. Define the fundamentals of the analysis and the boundaries of the system. Cobot-specific hazards relate to injury to human operators, damage to equipment and assemblies, and prolonged interruption of activity. The definition of hazards further supports the identification of the boundaries of the system. The control structure captures the interactions between the different components within the system boundaries, notably the operator, the cobot, and the controlled manufacturing process. An example control structure diagram for a cobot environment (Figure 1) considers a generic system where an operator hands over a component for processing [5]. A control structure of this form could be used to support the analysis of any cobot system.
  2. Identify the potentially unsafe control actions. Hazards in STPA stem from the execution of Unsafe Control Actions (UCAs): control actions which, in a particular context or system state, might lead to a hazard. Control actions represent purposeful decisions and responsibility in the control structure. Example UCAs for a cobot system are provided in the example below.
  3. Determine the causal factors for each unsafe control action. The technique defines a number of hints which might explain the cause of a UCA [6], similar to the guidewords used to identify the UCAs themselves. The specific causal factors are highly dependent on the technology adopted and the particular context. As an example, the cobot moving to position when an obstruction exists might stem from inadequate or missing feedback in the absence of sensors, or from incorrect feedback if the sensors are faulty. The operator providing damaged or inappropriate components might stem from a miscommunication of the required work or from a lack of training for a new operator. A sketch of how these steps might be captured as data follows this list.
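
To make steps 2 and 3 more concrete, the sketch below shows one possible way of capturing control actions and candidate UCAs as data, using STPA's four guidewords for deriving candidate UCAs from a control action. It is a minimal illustration under our own assumptions: the class and field names are ours, not part of the STPA handbook [4], and a real tool would capture far more structure.

    # A minimal sketch of recording control actions and candidate UCAs.
    # All class and field names are illustrative assumptions.
    from dataclasses import dataclass, field

    # STPA's four guidewords for deriving candidate UCAs from an action.
    GUIDEWORDS = (
        "not providing causes hazard",
        "providing causes hazard",
        "too early / too late / out of order",
        "stopped too soon / applied too long",
    )

    @dataclass
    class ControlAction:
        name: str        # e.g. "grab component"
        controller: str  # who issues the action, e.g. "cobot controller"
        process: str     # the controlled process, e.g. "cobot arm"

    @dataclass
    class UCA:
        action: ControlAction
        guideword: str
        context: str                 # system state making the action unsafe
        hazards: list = field(default_factory=list)  # hazard IDs, e.g. ["H1"]

    def candidate_ucas(action: ControlAction) -> list:
        """One blank candidate per guideword; the analyst fills in the
        contexts under which the action actually becomes unsafe."""
        return [UCA(action, gw, context="") for gw in GUIDEWORDS]

    grab = ControlAction("grab component", "cobot controller", "cobot arm")
    for c in candidate_ucas(grab):
        print(f"{c.action.name}: {c.guideword}")

The enumeration deliberately produces a candidate for every guideword; as discussed under "Summary of findings", not every category will yield a meaningful UCA, and some will yield several.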

 

Figure 1: Example control structure for a cobot environment

Example application of STPA to a cobot

The results of applying STPA to an example cobot system can be seen here: Results of applying STPA to an example cobot system (PDF, 468KB).

As an example, we consider the control action “grab component”, issued by the cobot controller to the cobot in Figure 1. It is responsible for ensuring that the cobot effector picks up and secures a component for processing. It is subject to safety constraint SC9 in our analysis (Results of applying STPA to an example cobot system (PDF, 468KB)): “Components should be secured during transport, processing, and handover”. Considering the action in different contexts can lead to the following UCAs:

  • UCA7-N-1 (Not Providing) “The cobot does not grab the component provided by the operator when it is in handover position and available” might lead to a stall in the production line (Hazard H7);
  • UCA7-P-1 (Providing) “The cobot grabs the component while it has a high velocity” might lead to a violation of minimum separation requirements (Hazard H1);
  • UCA7-T-1 (Scheduling) “The cobot grabs a component before it has been released by the operator” places the operator in a dangerous area (Hazard H2);
  • UCA8-D-1 (Duration) “The cobot releases a component too early during handover, before it is secured” implies the component is not secured during operation (Hazard H6).

The identification of causal factors for UCA8-D-1 yields a number of possible scenarios, as opposed to simple system states, which might explain the occurrence of the UCA. The cobot may be unable to secure a component (HCF7-D-1-1) if it is too heavy, has the wrong shape, or is slippery. The configuration of the cobot might not include checking whether a component is secure (HCF7-D-1-2), or might assume a different type of component (HCF7-D-1-5).

Control actions originating from human operators in the control structure can be analysed following the same approach. Not providing training to a specialised operator (UCA1-N-1) can lead to a variety of hazards, notably through the operator failing to recognise the cues provided by the cobot or the safe areas (H2). A possible cause, HCF1-N-1-2, is “The need for training has not been identified”, as can occur for a new starter, following a new assignment, or in the absence of a training record.
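
The fragment below sketches how the UCAs and causal factors above might be recorded so that the links from causal factor to UCA to hazard stay machine-checkable. The identifiers and wording are taken from the example analysis; the record layout itself, and the paraphrased guideword strings, are assumptions made for illustration.

    # IDs come from the worked example above; the tuple layout
    # (action, guideword, context, hazards) is an illustrative assumption.
    ucas = {
        "UCA7-N-1": ("grab component", "not providing",
                     "component in handover position and available", ["H7"]),
        "UCA7-P-1": ("grab component", "providing",
                     "cobot has a high velocity", ["H1"]),
        "UCA7-T-1": ("grab component", "too early",
                     "component not yet released by the operator", ["H2"]),
        "UCA8-D-1": ("grab component", "stopped too soon",
                     "handover, component not yet secured", ["H6"]),
    }

    causal_factors = {
        "HCF7-D-1-1": ("UCA8-D-1",
                       "component too heavy, wrong shape, or slippery"),
        "HCF7-D-1-2": ("UCA8-D-1",
                       "configuration does not check the component is secure"),
        "HCF7-D-1-5": ("UCA8-D-1",
                       "configuration assumes a different type of component"),
    }

    # Simple traceability query: the causal factors behind hazard H6.
    for cf_id, (uca_id, scenario) in causal_factors.items():
        if "H6" in ucas[uca_id][3]:
            print(f"{cf_id} -> {uca_id}: {scenario}")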

Summary of findings

Our initial experience of applying STPA to a generic cobot control structure suggests that the STPA process is well suited to the analysis of cobot systems. STPA highlighted a number of unsafe actions beyond direct contact between a cobot and an operator. The iterative nature of the analysis provides for refinements alongside the system definition (e.g. faults resulting from sensors and actuators added into the system). We raise a number of considerations for the application of STPA:

  • The technique recommends a limited number of hazards, defined as system states which could lead to losses. We observed that those recommendations tend to lead to abstract, high-level hazards which are difficult to relate to actual safety constraints and need additional refinement. The major losses in the system might also be under-represented in the captured hazards.
  • There is no criterion for the completeness of the identified UCAs or causal factors. The taxonomies proposed by STPA support considering each UCA under different aspects to identify its occurrences and causes. Not all categories might suit a given UCA, and the same category might lead to varied scenarios; deriving and then reviewing UCAs for each combination leads to an explosion in the number of actions and causes [7].
  • The distinction between the context defined for a UCA and its cause is subtle. The context of a UCA does not include beliefs about, or the perceived state of, the system, nor does it consider the action's future outcomes on other processes beyond the occurrence of hazards.
  • STPA captures the system definition from the safety engineer's point of view. The increase in complexity within the analysis is managed through traceability between its components, from causal factor to control action, hazard, and loss scenario [8]. Proper traceability should be in place so that the analysis relates to design, implementation, testing, and certification aspects. This also supports continuous refinement of the analysis by capturing the elements affected by a change.
  • The use of an adequate tool to support the analysis should be considered. Adequate tooling could, in particular, ease the maintenance of traceability requirements and help gauge the completeness of the analysis with regard to coverage gaps (a sketch of such checks is given after this list). Due to the high-level definition of STPA, the different tools supporting the process [9, 10, 11] provide slight variations on its implementation and on the presentation of its outcomes.
  • The overarching nature of a system-based analysis might lead to a detailed control structure which is hard to maintain and analyse. A division into interconnected subsystems provides for a more tractable analysis but might omit emergent properties.
  • The analysis does support the definition of generic systems and components. However, the identified causal factors will lack precision or grounding in the absence of refinements, in particular without a definition of the controlled processes.
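
As a hypothetical illustration of the traceability and coverage checks discussed above, the sketch below follows the chain from causal factor to UCA to hazard to loss, and flags any UCA with no recorded causal factor. Only the UCA, hazard, and causal factor identifiers come from the example analysis; the loss identifiers and the link tables are assumed for the example.

    # Link tables are illustrative assumptions; a tool would maintain them
    # as the analysis is refined.
    hazard_to_losses = {"H6": ["L1"], "H7": ["L2"]}
    uca_to_hazards = {"UCA8-D-1": ["H6"], "UCA7-N-1": ["H7"]}
    cf_to_uca = {"HCF7-D-1-1": "UCA8-D-1", "HCF7-D-1-2": "UCA8-D-1"}

    # Coverage gap: every UCA should have at least one causal factor.
    covered = set(cf_to_uca.values())
    for uca in uca_to_hazards:
        if uca not in covered:
            print(f"coverage gap: no causal factor recorded for {uca}")

    # Change impact: which causal factors trace back to loss L1?
    for cf, uca in cf_to_uca.items():
        if any("L1" in hazard_to_losses[h] for h in uca_to_hazards[uca]):
            print(f"{cf} traces to loss L1 via {uca}")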

References

[1] Bauer, Andrea, Dirk Wollherr, and Martin Buss. "Human–robot collaboration: a survey." International Journal of Humanoid Robotics 5.01 (2008): 47-66.
[2] Yang, Guang-Zhong, et al. "The grand challenges of Science Robotics." Science Robotics 3.14 (2018).
[3] Villani, Valeria, et al. "Survey on human–robot collaboration in industrial settings: Safety, intuitive interfaces and applications." Mechatronics 55 (2018): 248-266.
[4] Leveson, Nancy G., and John P. Thomas. "STPA handbook." Cambridge, MA, USA (2018).
[5] El Zaatari, Shirine, et al. "Cobot programming for collaborative industrial tasks: an overview." Robotics and Autonomous Systems 116 (2019): 162-180.
[6] Ishimatsu, Takuto, et al. "Modeling and hazard analysis using STPA." (2010).
[7] Thomas IV, John P. Extending and automating a systems-theoretic hazard analysis for requirements generation and analysis. Diss. Massachusetts Institute of Technology, 2013.
[8] Krauss, Sven Stefan, Martin Rejzek, and Christian Hilbes. "Tool qualification considerations for tools supporting STPA." Procedia Engineering 128 (2015): 15-24.
[9] Suo, Dajing, and John Thomas. "An STPA tool." STAMP 2014 Conference at MIT. 2014.
[10] Abdulkhaleq, Asim, and Stefan Wagner. "XSTAMPP: an eXtensible STAMP platform as tool support for safety engineering." (2015).
[11] Rejzek, Martin, and Sven Stefan Krauss. "STPA based hazard and risk analysis tool SAHRA." 6th MIT STAMP Workshop, Boston, USA, 27-30 March 2017. 2017.

 

Contact us

Assuring Autonomy International Programme

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH
