Removing the cage and curtains: how can we assure the safety of cobots to support increased productivity in manufacturing?
The collaborative robotics market is expected to exceed $9.0B by 2025. However, safety and trust issues are hindering deployment in collaborative manufacturing processes: international standards for safe human-robot collaboration are in their infancy (ISO/TS 15066) and difficult to enforce in practice, so manufacturers are falling back on non-collaborative, segregation-based safety measures such as cages and light curtains. This substantially reduces the benefit of collaborative robotic systems.
This project will demonstrate how novel safety techniques can be applied to build confidence in the deployment of uncaged collaborative robot systems operating in spaces shared with users. Existing collaborative processes provided by the project's industrial partners will act as case studies and demonstrators. These vary in complexity, but are suitably constrained: they present a tractable safety problem whilst remaining representative of current industry applications and needs. To keep the problem tractable, research will address specific safety elements related to the key partner-identified issues of:
- volumetric sensing
- system testing
- safety operation-mode switching
This will result in evidence to support the assurance of general collaborative robot systems to support further deployment of collaborative robots in manufacturing. Regulators will be involved to advise on compliance with evolving standards.
In the first phase of the project, the team has been engaging with stakeholders (industrial engineers and regulators) to better understand the safety requirements, concerns, desires, and barriers to adoption of collaborative robots.
They have conducted a series of interviews to shape participatory design workshops, which will enable research staff to work with a variety of industry stakeholders (including operators and health and safety personnel) to explore not only how to make uncaged robots safe, but also how to inspire confidence in their safety.
In parallel, work has progressed on creating a digital twin of an industrial robot cell to use as a testbed for the planned research and facilitate deployment on the real robot system. This uses the Robot Operating System and Unity gaming engine to provide communications with, and visualisation of, the various sensors, robots, and potentially people, in the environment.
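To illustrate the kind of bridge such a twin relies on, the sketch below serialises a robot joint state (loosely mirroring the ROS sensor_msgs/JointState layout) to JSON, as one might stream ROS data over a socket into Unity for visualisation. The function names and message schema are illustrative assumptions, not the project's actual implementation.

```python
import json

def encode_joint_state(names, positions, stamp):
    """Pack a robot joint state into a JSON message, loosely mirroring
    the ROS sensor_msgs/JointState fields, for transport to Unity."""
    return json.dumps({
        "stamp": stamp,              # timestamp in seconds (illustrative)
        "name": list(names),
        "position": list(positions),
    })

def decode_joint_state(payload):
    """Unpack the JSON message on the Unity side of the bridge,
    returning a joint-name -> joint-angle mapping."""
    msg = json.loads(payload)
    return dict(zip(msg["name"], msg["position"]))

# Example: one message for a hypothetical two-joint arm
wire = encode_joint_state(["shoulder", "elbow"], [0.5, -1.2], stamp=0.0)
joints = decode_joint_state(wire)  # {"shoulder": 0.5, "elbow": -1.2}
```

In practice a middleware such as ROS#'s rosbridge would handle this transport; the point here is only that the twin needs an agreed message schema between the robot-side and visualisation-side components.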
Work has also begun on specifying sensing systems and approaches, and on devising an end-to-end methodology for the synthesis of risk-aware controllers for dynamic operating mode switching, which will enable the system to dynamically change its safety settings in response to detected events.
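A dynamic mode-switching controller of this kind can be pictured as a small state machine over operating modes and detected events. The modes, event names, and transitions below are hypothetical placeholders for whatever the synthesised controllers actually use; they simply show the mechanism of switching safety settings in response to events.

```python
# Hypothetical operating modes for a collaborative robot cell
MODES = ("full_speed", "reduced_speed", "safe_stop")

# Transition table: (current_mode, detected_event) -> next_mode
TRANSITIONS = {
    ("full_speed", "human_enters_cell"): "reduced_speed",
    ("reduced_speed", "human_near_robot"): "safe_stop",
    ("safe_stop", "human_moves_away"): "reduced_speed",
    ("reduced_speed", "human_leaves_cell"): "full_speed",
}

def next_mode(mode, event):
    """Return the safety mode after observing an event; unmodelled
    (mode, event) pairs conservatively leave the mode unchanged."""
    return TRANSITIONS.get((mode, event), mode)

# A worked event sequence: enter cell, approach robot, back away
mode = "full_speed"
for event in ["human_enters_cell", "human_near_robot", "human_moves_away"]:
    mode = next_mode(mode, event)
# mode is now "reduced_speed"
```

The research challenge the project addresses is deriving such transition tables systematically from hazard analysis and risk assessment, rather than writing them by hand as above.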
In the second quarter of the project, the team has constructed a lab-based replica of a collaborative robotic manufacturing cell, enabling them to deploy and test sensors, and train machine learning algorithms to detect and track people and robots working in the cell.
The team are developing a threat model for the security of collaborative robots, together with methods for synthesising safety controllers that respond to detected safety and security events.
An early outcome from the project is the development of a methodology for supporting the co-design of safe collaborative robot processes using a card-based activity that prompts participants to consider the implications of changes to safety and security systems. The team has tested the methodology in one-on-one sessions with stakeholders and will be rolling this out as part of wider co-design workshops with partners in the coming months. This methodology will help to identify what safety and security techniques are important to stakeholders, but will also serve to engage them in the design process and build confidence in the systems developed.
Whilst COVID-19 has created barriers to working in labs and factories on physical robotic systems, work has continued to deliver digital and theoretical methods for collaborative robot safety. This has included:
- continued development of the digital twinning environment, including identification of a full list of required functionality to support the ongoing research activities. A beta version is currently being used and extended by the project team, with a full release expected in the near future
- training of deep learning methods to visually track humans, robots, and other dynamic objects, and identify potential collisions
- definition of safety monitors, and their interaction with the digital twin, to capture situations which may lead to the occurrence of a hazard (as identified in earlier safety analysis work)
- cyber security threat models and policies for collaborative robots
- a preliminary method for deriving safety controllers for collaborative robots that appropriately switch the safety mode of the robot in response to identified hazards
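As a minimal sketch of what a runtime safety monitor of the kind listed above might check, the code below classifies the current situation from the human-robot separation distance, in the spirit of the speed-and-separation monitoring described in ISO/TS 15066. The thresholds and status labels are illustrative assumptions, not values from the project's safety analysis.

```python
import math

def monitor(human_pos, robot_pos, warn_m=1.5, stop_m=0.5):
    """Hypothetical safety monitor: classify the situation from the
    Euclidean human-robot separation (metres).  Below stop_m a
    protective stop is demanded; below warn_m, reduced speed."""
    d = math.dist(human_pos, robot_pos)
    if d < stop_m:
        return "hazard"      # demand a protective stop
    if d < warn_m:
        return "warning"     # demand reduced speed
    return "safe"

# Human 1.0 m from the robot base: inside the warning zone
status = monitor((0.0, 1.0, 0.0), (0.0, 0.0, 0.0))
```

In the project's architecture the monitor would consume tracked positions from the digital twin and emit events to the safety controller, rather than being called directly like this.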
With laboratories reopening, the team are looking forward to implementing these ideas on physical robot platforms in the near future and collecting the necessary data to train and test their approaches.
Most recently the team has begun to bring together the research strands of the project and integrate the methods and techniques within their digital twinning environment. This has required further development of the communications infrastructure to support the requirements of the various research tasks. With this in place they are preparing to conduct testing of the hazard analysis framework and safety controller elements, and of the environment itself, in early 2021. In related research projects, the team is working on connecting the digital twinning environment to physical robot and visualisation systems to enable practical demonstration in an industrial environment.
They have continued to refine their vision-based sensing systems, improving their accuracy and identifying potential collisions between humans and the robot by implementing bounding boxes around detected entities. With limited networking and security risks identified in the existing case study, security work has focussed on surveying and categorising attacks and vulnerabilities across a wider range of robotic and networked systems, and on developing a more generalisable security policy.
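The bounding-box collision check reduces to an axis-aligned box intersection test, sketched below in 2D. The coordinates are made up for illustration; the real system works on boxes produced by the trained detectors, likely in three dimensions.

```python
def overlap(a, b):
    """Axis-aligned bounding boxes as (x_min, y_min, x_max, y_max).
    Returns True if the boxes intersect, flagging a potential
    human-robot collision."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    # Boxes intersect iff they overlap on both axes
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

# Illustrative detections: the human's box clips the robot's box
human = (0.0, 0.0, 1.0, 2.0)
robot = (0.8, 1.5, 2.0, 3.0)
collision_risk = overlap(human, robot)  # True
```

A deployed system would typically inflate the boxes by a safety margin (and reason about predicted motion) rather than waiting for the raw boxes to touch.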
The team's analysis of safety-security co-assurance challenges for collaborative robots has been accepted for publication in a book on Industrial Human-Robot Collaboration.
Presentations and papers
- Gleirscher, M. "Yap: Tool Support for Deriving Safety Controllers from Hazard Analysis and Risk Assessments" in Luckcuck, M. & Farrell, M. (Eds.), Formal Methods for Autonomous Systems (FMAS), 2nd Workshop, Electronic Proceedings in Theoretical Computer Science, 329, 31-47. Open Publishing Association, 2020.
- Gleirscher, M., Johnson, N., Karachristou, P., Calinescu, R., Law, J., and Clark, J. "Challenges in the Safety-Security Co-Assurance of Collaborative Industrial Robots". To appear in Industrial Human-Robot Collaboration, edited by S. Fletcher and I. Ferreira.
- Gleirscher, M. and Calinescu, R. "Safety controller synthesis for collaborative robots" in Engineering of Complex Computer Systems, 25th International Conference, Singapore, 28-31 October 2020.
- Foster, S., Gleirscher, M. and Calinescu, R. "Towards deductive verification of control algorithms for autonomous marine vehicles" in Engineering of Complex Computer Systems, 25th International Conference, Singapore, 28-31 October 2020.
- The University of Sheffield
- The University of York