Delivering a safe and inclusive AI future for the UK

News | Posted on Thursday 19 October 2023

Ahead of the upcoming AI Safety Summit, we set out three key approaches that we believe will help regulators and policy makers work towards answering the big question: 'is it safe?'.

A photographic rendering of a succulent plant seen through a refractive glass grid, overlaid with a diagram of a neural network.
Image by Alan Warburton / © BBC / Better Images of AI / Nature / CC-BY 4.0

The proposed agenda for the AI Safety Summit has been met with a mixed response, with some claiming it does not go far enough on regulation. As global leaders in AI safety and assurance systems, it's our position that maximising the potential benefits of AI in society, and doing so safely, centres on three main themes: evidence, community, and skills.

We have outlined below what this could look like for policy makers. For our more detailed response, download our 'Delivering a safe and inclusive AI future for the UK' PDF.

1. Evidence - use AI safety cases to build public trust

By stipulating the need for and benefit of AI safety cases, policy makers can foster greater public acceptance of and trust in AI, whilst also providing organisations with a clear path to the safe deployment of AI-enabled systems.

2. Community - harness the expertise of existing safety experts

The UK has a strong track record in safety. This matters because it means the UK is in a position to build in safety from the start, taking a whole-systems approach that can shape and inform safety culture around AI.

3. Skills - prepare people to work alongside AI

Providing training for industry, regulators and policy makers, across different sectors, knowledge levels, and organisational levels, is central to the UK's decision making around the implementation of AI. It will enable us to ask the right questions at the right time, from a position of innovation and credibility.


The Assuring Autonomy International Programme at the University of York includes the UK's foremost experts in safety-critical systems and the development and deployment of safe AI. We have over 35 years of experience researching safety-critical systems, including safe machine learning and safe autonomous systems.

Safe AI guidance for policy makers (PDF, 2,151KB)


For more information on our research, guidance, collaborations or training, please email us.