Safe autonomous driving moves a step closer thanks to new ARIA funding

News | Posted on Tuesday 8 April 2025

The CfAA has been awarded new funding to identify how Frontier AI could be safely deployed within fully autonomous driving systems.


The £460k grant was awarded as part of the Advanced Research and Invention Agency’s (ARIA) broader £59M Safeguarded AI programme and will be used to identify how frontier AI models could be safely deployed within a full Automated Driving System (ADS) context in simulated environments.

Despite significant investment in automated driving technologies from established companies and tech startups alike, the development of safe automated driving systems remains challenging. Numerous high-profile recalls and incidents, such as the collision in Tempe, Arizona, highlight the difficulties of predicting road conditions and addressing them in an autonomous vehicle’s AI models. The project, Towards Safety Assurance of Frontier AI for Automated Driving Systems (SAFER-ADS), aims to explore these difficulties by bridging the gap between theoretical solutions and current safety assurance practice in the automotive industry.

The team, led by Professor Simon Burton, Chair of Systems Safety and Business Lead, along with Professor Radu Calinescu, Dr Kester Clegg, Dr Jie Zou, Dr Ioannis Stefanakos, and Dr Sepeedeh Shahbeigi, will develop simulations of real-world scenarios to assess the impact of scaling safety assurance from narrow-use machine learning components (such as object classification) to broader-scope AI-based functionality. A case for assuring the safety of the ADS will be created alongside this work, to be compatible with current automotive safety standards and regulation. 

Speaking on the project, Professor Simon Burton said: “The increasing use of advanced AI in safety-critical applications, where human intervention is removed, demands robust safety arguments. Traditional software assurance methods are insufficient for complex AI systems operating in ambiguous real-world scenarios. Current AI verification approaches, relying on potentially flawed data, lack the rigor required by established safety standards.

The Safeguarded AI programme offers a unique interdisciplinary platform to enhance the rigor and confidence in safety assurance for high-risk AI systems. It addresses the critical challenge of defining and demonstrating "safe enough" for complex AI.”

ARIA’s Safeguarded AI Programme Director, David ‘davidad’ Dalrymple, said: 

“AI could unlock transformative improvements in our critical infrastructure, but right now adoption is limited because without ironclad safety assurances, we risk unintended and damaging consequences. Our goal is to prove that it’s possible to develop AI with quantitative safety guarantees, and that this could unlock significant economic value for the UK.”

SAFER-ADS is the second project within the CfAA to be awarded funding from ARIA's Safeguarded AI programme. Professor Radu Calinescu’s ongoing 'Universal Stochastic Modelling, Verification and Synthesis Framework (ULTIMATE)' project is already contributing to the programme's effort to achieve guaranteed safe AI through the modelling of probabilistic and nondeterministic uncertainty. SAFER-ADS and ULTIMATE will pursue complementary research, collaboratively leveraging their respective strengths to advance the objectives of the research programme.

Through close collaboration with regulatory bodies and key industry partners, the CfAA is in a unique position to ensure the results of the project directly influence future regulations and standards. The recently published AI Opportunities Action Plan highlights the need to accelerate AI development in key areas, such as automotive, and the importance of building trust in AI assurance, which CfAA Director, Professor John McDermid, discusses in his breakdown of the action plan.