£1.2 million invested in research to build trust and confidence in robotics and autonomous systems across the globe

News | Posted on Monday 1 July 2019

The Assuring Autonomy International Programme, a partnership between international charity Lloyd’s Register Foundation and the University of York, is investing more than £1.2 million into research that will advance the safety assurance of robotics and autonomous systems (RAS), while building public trust in such technology.

The funding has been awarded to five projects being undertaken by research teams across the world, working in domains such as autonomous driving, social care and manufacturing. The latest funding allocation brings the amount the Programme has invested in leading the safety assurance of RAS to almost £3 million.

“RAS technology continues to develop rapidly, but not always with safety at the forefront. We must bring regulators, policy-makers and members of the public on this journey with us, or we risk isolating people from the technology,” said Professor John McDermid, Director of the Assuring Autonomy International Programme. “These projects are moving us towards the safe introduction and adoption of RAS so that everyone has justified confidence that an autonomous system’s behaviours and properties are intrinsically safe.”

Body of Knowledge

The results from these demonstrator projects will be published in the Body of Knowledge being developed by the Programme. The Body of Knowledge sets out objectives for the assurance and regulation of RAS in a number of key areas, including understanding how a RAS must behave in order to be safe, developing a RAS that can be trusted to always behave safely, understanding and controlling unsafe behaviours that may arise, and gaining approval to operate a RAS. Each project addresses one or more of these assurance objectives, supporting the Programme’s work to enable the safe introduction and adoption of RAS across the world.

Collaboration

“The challenge of assuring the safety of autonomous systems is huge,” added Professor McDermid. “It’s cross-domain, cross-technology and cross-application. One organisation cannot do it alone. These projects are just one of the ways in which we are collaborating internationally.”

“We are also undertaking fundamental research into some of the core technical issues associated with assuring the safety of RAS, and creating a freely accessible Body of Knowledge for anyone developing autonomous technologies. The Programme is bringing partners together to solve the challenge of how to assure autonomy.”

Notes to editors:

Funded projects

Sense-Assess-Explain (SAX): Building trust in autonomous vehicles in challenging real-world driving scenarios

  • Partner: University of Oxford

  • Research focus: Explanations of the decisions taken by autonomous machines are key to building public trust in autonomous systems. The project will build robots, in the form of autonomous vehicles, that sense and fully understand their environment, can assess their own capabilities, and can provide causal explanations for their own decisions. The project will study the requirements of explanations for key stakeholders, and these requirements will inform the development of algorithms that generate causal explanations.

Automatic Testing Mechanism: Towards an NCAP-like rating for robotics and autonomous systems

  • Partner: Australian National University

  • Research focus: Developing an automatic rating system to assess the safety of RAS. The project aims to develop a user-friendly platform to test RAS, design test scenarios specific to RAS, and conduct feasibility studies on the applicability of this new technology to RAS. In the longer term, this work will form the core component of an automatic rating system for RAS – akin to the NCAP (New Car Assessment Programme) rating for cars.

CSI: Cobot (Confident safety integration for cobots)

  • Partners: University of Sheffield, University of York

  • Research focus: Human-robot interaction. Where collaborative robots work alongside humans in manufacturing tasks, how can operator safety be ensured and assured in a way that inspires confidence in stakeholders? This project will demonstrate how novel safety techniques can be applied to build confidence in the deployment of uncaged collaborative robot systems operating in spaces shared with humans.

Safe-SCAD (Safety of shared control in autonomous driving)

  • Partners: University of Virginia, Carnegie Mellon University, University of York

  • Research focus: How humans and machines can safely share autonomy in an autonomous vehicle. The project will look at how control can safely be handed back to a safety driver, considering challenges around situational awareness, machine learning and human monitoring of systems.

Assuring safety and social credibility (feasibility project)

  • Partner: University of Hertfordshire

  • Research focus: Social acceptability. Where user acceptance of a RAS depends on the effective performance of social functions (e.g. empathetic and socially interactive behaviour), how can potentially conflicting social and safety requirements be balanced, and how can we assure that the RAS is both safe and acceptable to end users? This project will address the hypothesis that there is a link between social credibility and the effective performance of safety-related behaviours.