Researchers to develop framework to determine legal and moral responsibilities for autonomous systems

Posted on 24 November 2021

University of York academics have been awarded £700,000 to develop a legal and moral framework for tracing and allocating responsibility behind autonomous systems, such as driverless cars or medical systems.

Illustration: autonomous driving. The project will seek to establish who is responsible for the decisions and outcomes of autonomous systems. Illustration credit: Doug James at Mode for the Assuring Autonomy International Programme.

The project brings together computer scientists, engineers, developers, lawyers, regulators and philosophers, as well as the general public, to establish who is responsible for the decisions behind Artificial Intelligence (AI).

The project will ask the question: when an autonomous system takes an action that affects you, how do we establish who is responsible for the action and its outcome? 

The project, funded by UK Research and Innovation, will look to address this complex problem. 

Trust

Dr Ibrahim Habli, Reader in the Department of Computer Science and Principal Investigator for the new project, said: "Currently, we have no clear or consistent answers to questions about who is responsible for the decisions taken by autonomous systems and for the impact those decisions have.

"The benefits of autonomous systems will only be harnessed if people have trust in the human processes around their design, development, and deployment. Clarity about who is responsible for the decisions and outcomes of autonomous systems, and when and why they are responsible, is critical to an ecosystem of trust in these new technologies."

Assurance

The project, which runs for 30 months, will have three levels: conceptual, assurance, and practical. 

The initial conceptual work will bring together philosophers and lawyers to clarify the fundamental concepts of responsibility, identify the agents involved, and establish where ‘responsibility gaps’ appear to arise and how they can be addressed. This is particularly important given the risk of ambiguity inherent in discussions about responsibility and the need to establish a common language for all stakeholders.

Research at the assurance level will adapt methods used in the technical assurance of high-risk systems to achieve confidence that responsibility for the systems can be traced and allocated.

Finally, this will culminate at a practical level with an implementable and systematic methodology that will enable stakeholders to show that the tracing and allocation of responsibility that has been achieved for a specific autonomous system is well-justified and complete.

Transparency

Zoë Porter, an ethicist in the Assuring Autonomy International Programme and co-investigator on the project, said: "Establishing who is responsible for the decisions and outcomes of autonomous systems is particularly difficult because when we replace a human with a decision-making machine, our traditional frameworks for attributing responsibility are disrupted.

"In addition, there are different kinds of responsibility - for example, causal, role, moral, and legal responsibility - and a complete answer to the question requires us to consider them all, and the relations between them."

The resulting methodology will cover the design and development phases as well as deployment and the investigation of accidents and incidents. In partnership with clinical, engineering and regulatory collaborators and the general public, it will be evaluated through a series of real-world case studies.

"Importantly, this is about trust and transparency not blame," says Dr Habli, "It’s inevitable that we will see occasional accidents and incidents and it’s essential that we learn from these. By using our methodology, we will be able to trace responsibility in the decisions taken that led to the incident. By investigating these incidents, we will continue to iteratively adapt our methodology, learning from experience and helping ensure that this isn't about blame but about transparency and trust for all stakeholders."

Further information:

The ‘Assuring Responsibility for Trustworthy Autonomous Systems’ project is part of the UK Research and Innovation (UKRI) Trustworthy Autonomous Systems (TAS) programme. It is funded through the UKRI Strategic Priorities Fund and delivered by the Engineering and Physical Sciences Research Council (EPSRC). The TAS programme brings together the research communities and key stakeholders to drive forward cross-disciplinary fundamental research to ensure that autonomous systems are safe, reliable, resilient, ethical and trusted.

Partner organisations for the project include the NHS, HORIBA MIRA, and self-driving car company Wayve. Experts from NASA Ames, Jaguar Land Rover and Lloyd's Register Foundation will also support the project.

The Assuring Autonomy International Programme advances the safety of robotics and autonomous systems (RAS) across the globe. It is a £12 million partnership between Lloyd’s Register Foundation and the University of York that is working with an international community of developers, regulators, researchers and others to ensure the public can benefit from the safe, assured and regulated introduction and adoption of RAS.

The project investigators include Dr Ibrahim Habli (Principal Investigator, Department of Computer Science) and Zoë Porter (co-investigator, Assuring Autonomy International Programme).

Media enquiries

Tom Creese
Press Officer (maternity cover)

About this research

Funded through the UK Research and Innovation (UKRI) Trustworthy Autonomous Systems (TAS) programme, the ‘Assuring Responsibility for Trustworthy Autonomous Systems’ project will work with clinical, engineering and regulatory collaborators and the general public to develop a legal and moral framework for tracing and allocating responsibility behind autonomous systems.