How could the General Election affect AI regulation?
As campaigning ramps up ahead of the General Election, we look at what might happen to the future of safe and responsible AI.

The current AI outlook
In the last year AI has been a key focus of the incumbent government: it hosted the first global AI Safety Summit, launched an AI white paper consultation and a discussion paper to which our director, Professor John McDermid, contributed, and, most recently, released its follow-up statement in February this year, which continues to centre a pro-innovation approach.
The regulatory approach favoured is, for the time being, non-statutory, and whilst we may see more legislative action in the future should the current government remain, the onus for safety rests with the technology companies and developers of AI technologies.
This is all against the backdrop of developing regulation in the EU and USA. Just this week, new EU rules are set to come into force after member states backed a deal made at the end of last year. The European Union AI Act is one of the most rigorous legislative approaches we have seen.
Add to this the International Scientific Report on the Safety of Advanced AI, which notes that academic institutions are being priced out of working on AI by high computational costs and the more competitive salaries offered in industry, and important questions arise about the future of safe and responsible AI.
Differing regulatory approaches
The opposition has been vocal about taking a more stringent approach. Last summer the Labour leader, Sir Keir Starmer, said publicly that Labour would set up an AI regulator and a robust regulatory framework. If this were to happen, it would see a shift to mandatory participation and place responsibility for safe AI development with the new regulator. This approach could benefit the safety assurance of AI and AI-enabled systems. We know from our own work with regulators that safety is of key importance in the implementation of any framework and guidance, and that safety assurance would specifically need to be evidenced and verified by independent organisations like the Centre for Assuring Autonomy.
The view from the Liberal Democrats is now clearer since the launch of their manifesto. Lib Dem MPs have, in past interviews, said that regulation is a challenge for policymakers and that responsibility for the governance of AI systems should remain with the government. However, their manifesto states they will create a "clear, workable and well-resourced cross-sectoral regulatory framework for artificial intelligence".
Should the Conservative government remain in power, it is likely they will continue with their outlined plans. A technology-led approach could see the topic of safety fall down the agenda. Additionally, if industry alone continues to drive the development of AI and, more importantly, the discussions around safety, commercial pressures might lead to the deployment of AI-based systems with adverse effects on society, as industry will not necessarily focus on risks.
Keeping safety front and centre
Regardless of the outcome on 4th July, continued engagement between institutions like the Centre and policymakers is vital when it comes to safety assurance and verification of AI and AI-enabled autonomous systems.
Important safety research, with real-world impact, continues to take place in academic institutions. As technologies advance, policymakers will increasingly need evidence-based research and recommendations to make informed decisions about AI technologies, from general-purpose AI (GPAI) to AI-enabled autonomous systems deployed across different sectors.
Notes to editors:
For further comment or interview requests on the future of AI safety and regulation please contact Sara Thornhurst, Communication and Impact Manager, Centre for Assuring Autonomy.