
From Innovation to Assurance: Addressing Safety Challenges in AI and Autonomy

News

Posted on Wednesday 15 October 2025

Our first Centre for Assuring Autonomy Symposium brought together over 100 professionals and academics from across the engineering, safety assurance and regulatory landscape to ask: what does the future of autonomy look like?

Held in York, our Symposium programme comprised a mix of deep dives into how our safety assurance guidance and frameworks are being applied in industry and by regulators, engaging panel sessions on existing and emerging challenges, and explorations of advancing regulation and its intersection with innovation.

Different sectors, same challenges

Throughout the full day of CPD and two days of sessions, experts in the maritime, healthcare and automotive industries shared their insights, and one thread was clear: many organisations are facing the same challenges around safety assurance, regardless of domain.

This led to one of the overarching takeaways from the event: safety cases are critical, and they must build upon established safety engineering principles while also addressing the new challenges of AI and autonomous systems.

The sessions focusing on our CfAA methodologies (SACE and AMLAS) emphasised the necessity of viewing AI as an element within a wider complex system. The subsequent discussion on the BIG Argument further explored how to integrate broader ethical considerations into the AI safety case (based on PRAISE), while addressing the urgent need to consider the safety implications of advanced capabilities provided by foundation models such as LLMs.

Use of AI in healthcare growing rapidly

On the healthcare front, we heard from Dr Margaret Horton of Romion Health, Dr Ernest Lim of Ufonia, and Dr Kate Preston of the CfAA on the use of AI in different areas of healthcare. Attendees were shown an example of this in Dora, a conversational AI agent already deployed in the NHS that supports clinicians and patients by providing post-operative telephone consultations.

Of primary concern is the importance of meaningful oversight and the need for any AI or autonomous system to work safely alongside a human operator. However, this discussion raised the question: are claims about the benefits of AI outpacing the realities? This matters particularly because many of the healthcare settings in which AI, or autonomous robots for example, could be deployed are complex. This, along with calls for broader regulation, may shape what we see emerging from this sector over the next year.

Focus on functions for successful maritime autonomy

Across the two days of the Symposium we heard from maritime experts Kevin King of BAE Systems, Odd Ivar Haugen of DNV, and Dipali Kuchekar of Lloyd’s Register. The discussions centred on the need to consider the many functions that exist within maritime operations, rather than take a purely systems-focused approach. Regulation, too, played a large role in audience questions and panellists’ discussions. A key insight for us is the need to address the whole maritime autonomy infrastructure and systems of systems, not just to focus on the safety of individual vessels. This is likely to be a focus of our new partnership with DNV.

Energy sector tuned in to regulation

Both Jonathan Thurlwell of Ofgem and Andrew White from the Office for Nuclear Regulation shared similar views in their presentations on regulating autonomous systems and AI in the energy sector. Notably, the UK is well placed to regulate innovation in these areas because UK law, rather than being prescriptive, is goal-oriented. This is a sensible approach, one favoured by regulators because it encourages growth without sacrificing safety, and this position was echoed by fellow speakers and guests.

The Symposium showed how valuable ongoing engagement with regulators can be and why regulators also need support from experts like those at the CfAA to inform and shape guidelines, especially enabling them to be robust in the face of rapidly-evolving technology. 

AI continues to hold attention

Throughout all our sessions on safety assurance, AI, be it an LLM or agentic AI, was a central pillar of conversation. Topics ranged from whether AI can be used to develop safety arguments (the consensus was no, given the significant consequences of any failures) to asking: do we even need AI? A key insight is that systems engineering processes need to analyse the pros and cons of using AI before deciding to adopt it, and this must be a precursor to the use of AMLAS.

Watch our Symposium round-up video

10 takeaways from the CfAA Symposium

  1. Taking into account the context of a system and its operating environment is central to creating safety arguments. 
  2. Safety cases must build upon established safety engineering principles and practices, as well as addressing the specifics of autonomy and AI.
  3. There will be a need for trade-offs when it comes to safety and ethics, and these will likely have a ripple effect on system design and operation. 
  4. Autonomous systems and AI exist to help humans, but we must remember that humans aren’t a monolith and system design should take this diversity into account.
  5. Trust in systems is a delicate balance - too much one way or the other can and does cause problems as we can see from real-world examples. 
  6. As far as regulation goes, whilst clarity exists at the top level, many domains, like maritime, require guidance on more specific activities at more granular levels.
  7. Technology outpacing regulation continues to be a challenge affecting both industry and regulators - how do we get more specialists to prevent this growing gap from getting wider?
  8. It is important to work on bridging the gap between the AI world and safety world and bring these two existing cultures closer together.
  9. We should always remember the question “what does good look like?” when it comes to AI and autonomy, and especially safety assurance.
  10. Our guidance is there to be used; organisations that are adopting it are finding it valuable. We’d be delighted to support more organisations using the guidance, and we will use the experience to improve what we have developed.

Our Symposium reinforced the value of sharing best practice across organisations and industries, and reflected the growing importance of safety cases to demonstrate the safety of autonomous systems and AI. It showed there is a clear appetite for cross-collaboration with input from academia, industry and regulators to identify and implement workable solutions to these challenges. 

If you would like to work with us to tackle these existing or emergent issues around safety assurance, email: assuring-autonomy@york.ac.uk.