Removing the cage and curtains: how can we assure the safety of cobots to support increased productivity in manufacturing?

The collaborative robotics market is expected to exceed $9.0B by 2025. However, safety and trust issues are hindering deployment in collaborative manufacturing processes: international standards for safe human-robot collaboration are in their infancy (e.g. ISO/TS 15066) and difficult to enforce in practice, so manufacturers are falling back on non-collaborative, segregation-based safety methods such as cages and light curtains. This substantially reduces the benefit of collaborative robotic systems.

This project will demonstrate how novel safety techniques can be applied to build confidence in the deployment of uncaged collaborative robot systems operating in spaces shared with users. Existing collaborative processes provided by the project's industrial partners will act as case studies and demonstrators. These vary in complexity but are suitably constrained: they present a tractable safety problem while remaining representative of current industry applications and needs. To keep the problem tractable, the research will address specific safety elements related to four key partner-identified issues:

  • volumetric sensing
  • security
  • system testing
  • safety operation-mode switching

This will produce evidence to support the assurance of collaborative robot systems in general, enabling their wider deployment in manufacturing. Regulators will be involved to advise on compliance with evolving standards.

Project progress

In the first phase of the project, the team has been engaging with stakeholders (industrial engineers and regulators) to better understand the safety requirements, concerns, desires, and barriers to adoption of collaborative robots.

They have conducted a series of interviews to shape participatory design workshops, which will enable research staff to work with a variety of stakeholders in industry (including operators and health and safety personnel) to explore not only how to make uncaged robots safe, but also how to inspire confidence in their safety.

In parallel, work has progressed on creating a digital twin of an industrial robot cell to use as a testbed for the planned research and facilitate deployment on the real robot system. This uses the Robot Operating System (ROS) and the Unity game engine to provide communication with, and visualisation of, the various sensors, robots, and (potentially) people in the environment.
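
To illustrate the kind of data flow involved, the sketch below shows a minimal ROS 1 (rospy) node publishing joint states; a Unity client (for example, via the ROS-TCP-Connector package) could subscribe to this topic to animate a robot model in the twin. The topic name and joint names are illustrative assumptions, not the project's actual configuration.

```python
# Minimal sketch of the ROS side of a digital twin data feed (illustrative
# setup, not the project's actual configuration): publish joint states that
# a Unity client subscribed to /joint_states could use to animate the robot.
import math
import rospy
from sensor_msgs.msg import JointState

def publish_joint_states():
    rospy.init_node("cobot_state_publisher")
    pub = rospy.Publisher("/joint_states", JointState, queue_size=10)
    rate = rospy.Rate(30)  # 30 Hz, roughly a visualisation frame rate

    # Illustrative joint names for a generic six-axis arm.
    joint_names = ["joint_1", "joint_2", "joint_3",
                   "joint_4", "joint_5", "joint_6"]

    while not rospy.is_shutdown():
        msg = JointState()
        msg.header.stamp = rospy.Time.now()
        msg.name = joint_names
        # Placeholder motion; a real cell would publish encoder readings.
        t = rospy.get_time()
        msg.position = [0.5 * math.sin(t + i) for i in range(len(joint_names))]
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    try:
        publish_joint_states()
    except rospy.ROSInterruptException:
        pass
```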

Work has also begun on specifying sensing systems and approaches, and on devising an end-to-end methodology for the synthesis of risk-aware controllers for dynamic operating-mode switching, which will enable the system to change its safety settings in response to detected events.
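
As a simplified illustration of operating-mode switching, the sketch below maps the sensed situation to a safety mode; the mode names, thresholds, and inputs are invented for the example and do not represent the project's synthesised controllers.

```python
# Simplified illustration of dynamic safety-mode switching driven by sensed
# events. Mode names, thresholds, and inputs are hypothetical.
from enum import Enum

class SafetyMode(Enum):
    FULL_SPEED = "full_speed"        # no person near the robot
    REDUCED_SPEED = "reduced_speed"  # person detected in the outer zone
    SAFETY_STOP = "safety_stop"      # person too close or a security alert raised

def select_mode(human_distance_m: float, security_alert: bool) -> SafetyMode:
    """Map the current sensed situation to a safety operating mode."""
    if security_alert or human_distance_m < 0.5:
        return SafetyMode.SAFETY_STOP
    if human_distance_m < 2.0:
        return SafetyMode.REDUCED_SPEED
    return SafetyMode.FULL_SPEED

# Example: a person detected 1.2 m from the robot, no security alert.
print(select_mode(1.2, security_alert=False))  # SafetyMode.REDUCED_SPEED
```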

In the second quarter of the project, the team has constructed a lab-based replica of a collaborative robotic manufacturing cell, enabling them to deploy and test sensors, and train machine learning algorithms to detect and track people and robots working in the cell.
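
As an indication of a possible starting point for detecting people in camera frames from such a cell, the sketch below uses an off-the-shelf pretrained detector from torchvision; it is illustrative only and not the project's model or training pipeline, and it assumes a recent torchvision release that accepts the weights argument.

```python
# Sketch of person detection on a single camera frame using a COCO-pretrained
# torchvision Faster R-CNN. Illustrative starting point only.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

PERSON_CLASS_ID = 1  # COCO class index for "person"

def detect_people(image_path: str, score_threshold: float = 0.8):
    """Return bounding boxes of people detected in the image."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        predictions = model([image])[0]
    boxes = []
    for box, label, score in zip(predictions["boxes"],
                                 predictions["labels"],
                                 predictions["scores"]):
        if label.item() == PERSON_CLASS_ID and score.item() >= score_threshold:
            boxes.append(box.tolist())
    return boxes

# Example usage with a hypothetical frame captured from the cell's camera:
# print(detect_people("cell_frame.png"))
```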

The team is developing a threat model for the security of collaborative robots, alongside methods for synthesising safety controllers that respond to detected safety and security events.
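
To give a flavour of what such a threat model might record, the sketch below lists example assets and STRIDE-style threat categories; the assets, threats, and mitigations shown are illustrative and are not drawn from the project's model.

```python
# Illustrative sketch of a STRIDE-style threat model for a collaborative cell.
# The assets, threats, and mitigations are examples, not the project's model.
from dataclasses import dataclass
from enum import Enum

class Stride(Enum):
    SPOOFING = "spoofing"
    TAMPERING = "tampering"
    REPUDIATION = "repudiation"
    INFORMATION_DISCLOSURE = "information_disclosure"
    DENIAL_OF_SERVICE = "denial_of_service"
    ELEVATION_OF_PRIVILEGE = "elevation_of_privilege"

@dataclass
class Threat:
    asset: str
    category: Stride
    description: str
    mitigation: str

example_threats = [
    Threat("safety sensor feed", Stride.TAMPERING,
           "Injected sensor data hides a person inside the cell",
           "Authenticate and integrity-check sensor messages"),
    Threat("robot controller network", Stride.DENIAL_OF_SERVICE,
           "Traffic flooding delays safety-mode switching commands",
           "Segment the safety network; monitor latency and fail safe"),
]

for threat in example_threats:
    print(f"[{threat.category.value}] {threat.asset}: {threat.mitigation}")
```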

An early outcome from the project is the development of a methodology for supporting the co-design of safe collaborative robot processes using a card-based activity that prompts participants to consider the implications of changes to safety and security systems. The team has tested the methodology in one-on-one sessions with stakeholders and will be rolling this out as part of wider co-design workshops with partners in the coming months. This methodology will help to identify what safety and security techniques are important to stakeholders, but will also serve to engage them in the design process and build confidence in the systems developed.

Whilst COVID-19 has created barriers to working in labs and factories on physical robotic systems, work has continued to deliver digital and theoretical methods for collaborative robot safety. This has included:

  • continued development of the digital twinning environment, including identification of a full list of required functionality to support the ongoing research activities. A beta version is currently being used and extended by the project team, with a full release expected in the near future
  • training of deep learning methods to visually track humans, robots, and other dynamic objects, and identify potential collisions
  • definition of safety monitors, and their interaction with the digital twin, to capture situations which may lead to the occurrence of a hazard (as identified in earlier safety analysis work); a simplified illustration of such a monitor follows this list
  • cyber security threat models and policies for collaborative robots
  • a preliminary method for deriving safety controllers for collaborative robots that appropriately switch the safety mode of the robot in response to identified hazards
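
To give a concrete sense of what a safety monitor might check, the sketch below raises a hazard event when the measured human-robot separation falls below a simplified protective distance. It is a toy illustration in the spirit of speed-and-separation monitoring, not the project's monitor definitions and not the full ISO/TS 15066 calculation; all parameter values are invented.

```python
# Toy safety monitor: raise a hazard event when the measured human-robot
# separation falls below a simplified protective distance. Parameter values
# are invented; this is not the full ISO/TS 15066 calculation.
from typing import Optional

def protective_distance(robot_speed_mps: float,
                        human_speed_mps: float = 1.6,
                        stop_time_s: float = 0.5,
                        margin_m: float = 0.2) -> float:
    """Distance human and robot could jointly cover before the robot stops, plus a margin."""
    return (robot_speed_mps + human_speed_mps) * stop_time_s + margin_m

def check_separation(measured_distance_m: float,
                     robot_speed_mps: float) -> Optional[str]:
    """Return a hazard event name if the separation is too small, else None."""
    if measured_distance_m < protective_distance(robot_speed_mps):
        return "HAZARD_SEPARATION_VIOLATED"
    return None

# Example: robot moving at 1.0 m/s, person measured 1.0 m away.
print(check_separation(1.0, robot_speed_mps=1.0))  # HAZARD_SEPARATION_VIOLATED
```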

With laboratories reopening, the team is looking forward to implementing these ideas on physical robot platforms in the near future, and to collecting the data needed to train and test their approaches.

Project team

  • The University of Sheffield
  • The University of York