
Increased collaboration between humans and robots is a major opportunity for industrial manufacturing. Yet it raises complex safety issues, which are a critical barrier to adoption.

Dr James Law is a roboticist and the Director of Innovation and Knowledge Exchange at Sheffield Robotics. He led the CSI:Cobot demonstrator project.

Before the project, industry was interested in collaborative robots but did not know how to get them working safely alongside humans. As a result, robots were placed in safety cages, and the potential collaborative benefits were lost. The CSI:Cobot project paved the way for the removal of some of these safety barriers.

James and the Sheffield team recognised that involving the relevant stakeholders at an early stage built the critical trust and confidence without which there would likely have been pushback against implementing these novel approaches. Securing the trust of the workforce will ultimately be critical, and this has been an integral part of the work.

Contact us

Assuring Autonomy International Programme

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH

It’s such an incredibly complex area. Industrial partners and regulators need confidence, which is really the big issue to overcome if cobots are to be deployed. Whilst Sheffield has expertise in robotics and manufacturing, we needed York’s expertise in safety assurance to tackle this challenge.

Safety assurance using digital twins

A particularly noteworthy, and commercially promising, part of the CSI:Cobot project is the use of digital twins to potentially automate safety case generation. Digital twins provided the team with a mechanism for integrating and testing their safety and security approaches. In addition, with their regulator partner (the Health and Safety Executive), the team developed an approach to help identify hazardous occurrences using a digital twin.

A novel opportunity identified by the team was the potential to embed safety information within a digital twin. This could lead to the automation of safety analysis, which is currently done slowly and painstakingly by hand. The involvement of the regulator in this project also highlighted the valuable role digital twins could play in communicating the technology to other stakeholders.

Having a digital twinning system was going to be the most useful way to collaborate and test our methods offline and reduce the amount of time required for integration later on. That led us to develop, and continue to develop, a sophisticated digital twinning framework. By building safety standards into the twin, stakeholders can check how a system meets existing regulation. We have developed a theory of how this could work, and are currently creating a partial implementation.
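The idea of building safety standards into a twin so that stakeholders can check a system against them can be illustrated with a small sketch. This is not the project's actual framework: all names here (`TwinState`, `separation_rule`, `speed_rule`, `check`) are hypothetical, and the two example rules are loosely modelled on the kind of speed and separation limits used in collaborative robotics, with made-up thresholds. The sketch replays a simulated trace of the cell against a set of machine-checkable safety rules and collects any violations as evidence.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple
import math

@dataclass
class TwinState:
    """One snapshot of the simulated cell: the cobot and the nearest operator."""
    robot_pos: Tuple[float, float]   # metres, cell coordinates
    robot_speed: float               # m/s, tool-centre-point speed
    human_pos: Tuple[float, float]   # metres

# A safety rule maps a twin state to (passed, diagnostic message).
SafetyRule = Callable[[TwinState], Tuple[bool, str]]

def separation_rule(min_distance: float) -> SafetyRule:
    """Flag states where the human is closer than the required separation."""
    def rule(s: TwinState) -> Tuple[bool, str]:
        d = math.dist(s.robot_pos, s.human_pos)
        return d >= min_distance, f"separation {d:.2f} m (min {min_distance} m)"
    return rule

def speed_rule(max_speed: float) -> SafetyRule:
    """Flag states where the cobot exceeds its collaborative speed limit."""
    def rule(s: TwinState) -> Tuple[bool, str]:
        return s.robot_speed <= max_speed, f"speed {s.robot_speed:.2f} m/s (max {max_speed} m/s)"
    return rule

def check(trace: List[TwinState], rules: List[SafetyRule]) -> List[str]:
    """Replay a simulated trace against every rule; collect violations as evidence."""
    violations = []
    for t, state in enumerate(trace):
        for rule in rules:
            ok, msg = rule(state)
            if not ok:
                violations.append(f"step {t}: {msg}")
    return violations

# Example: a two-step trace in which the operator approaches a fast-moving cobot.
trace = [
    TwinState(robot_pos=(0.0, 0.0), robot_speed=0.20, human_pos=(2.0, 0.0)),
    TwinState(robot_pos=(0.0, 0.0), robot_speed=0.60, human_pos=(0.3, 0.0)),
]
violations = check(trace, [separation_rule(0.5), speed_rule(0.25)])
# Both rules fail at step 1: the human is 0.30 m away and the cobot is at 0.60 m/s.
```

In a fuller implementation, the rule set would be derived from the applicable regulation rather than hard-coded, and the collected violations would feed the kind of automated safety analysis the passage describes.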

With technological development, timing is critical. James says that without the engagement with the AAIP, “we’d have missed out on much of what has been achieved.”

There's been a lot of interest in the safety of robotic systems and the approach that AAIP has taken, and the expertise that the York team specifically bring to the space is quite unique. The other work I am aware of is largely being done by experts in robotics, and experts in robotics for manufacturing. They understand the regulations and how to build robot processes. What they don't have is the expertise in safety assurance frameworks, which is completely different. York’s team are the international experts in safety assurance, with a very different skillset, methods and approaches.

A look to the future

The learning from the CSI:Cobot project has informed a number of other projects and the collaboration has taken the team in several new directions, including a potential spin-off company pioneering the digital twin technology.

In five years, with the AAIP's support, James hopes to be able to demonstrate the real-life potential of the approaches pioneered through the collaboration.

If we can commercialise the basic digital twinning framework, we can then look at working with other partners in the research domain, to bring in new services and tools, like some of the safety analysis tools that were developed in this project. Over time, I expect the digital twinning framework will provide a means of pulling through some of that other research and helping generate real-world impact too.
