AI Action Summit shows safe AI is a global problem that needs collaborative solutions
The AI Action Summit in Paris offered a far-reaching look at the breathtaking advance of AI in our world. Across the five days of events, all on the common theme of the benefits and risks of Frontier AI, it became clear that safety – in all its forms – is a global challenge with many different facets. Here our director, Professor John McDermid OBE FREng, shares his insights as both a guest and a participant at the summit.

Safety, trust and uncertainty
Privacy, as we know, is a key safety concern when it comes to Frontier AI. At the AI, Science and Society event held at the Institut Polytechnique de Paris on the 5th and 6th of February, a brilliant keynote from Michael Jordan explored stakeholder willingness to accept some loss of privacy for perceived benefits, and how this view may help shape and inform regulation. The talk also entertained the emergence of a new discipline for responsible AI, one which blends computer science, statistics and economics. However, there is arguably a need to add safety engineering, sociology and ethics into the mix because, as we are witnessing through AI-related incidents, the trade-offs are more complex than a two-party interaction in game theory.
The theme of trust continued in a half-day session on trustworthy AI, which offered a wide variety of perspectives. Perhaps the most interesting observation came from Dame Wendy Hall: trust arises from keeping commitments. This is particularly pertinent as many of the leading AI development companies published or updated their Frontier AI Safety Commitments (FAISC) at the summit, but we have yet to see how they keep those commitments. How we guard against the harms from Frontier AI is something I wrote about at the end of last year, when those commitments were first published.
Safety and sustainability
Sustainability featured prominently across the presentations and discussions at the summit. AI’s carbon footprint cannot be ignored, and neither can the amount of water needed for cooling data centres. Sasha Luccioni from Hugging Face presented some interesting data on the carbon footprint of LLMs, and this was the first time these issues had been raised at such a summit. However, with the recent release of the Tony Blair Institute for Global Change report on AI and innovation in the Global South, and the need for sustainable roll-out mentioned in the UK’s own AI Action Plan, this is a significant and prominent concern. It was also timely that the UK’s Royal Academy of Engineering published a report on the foundations of environmentally sustainable AI, which considers critical materials as well as water and energy.
In a more caustic talk, Yann LeCun spoke at length about large language models (LLMs) and how poorly they learn in comparison to human babies – not only in speed, but also in what babies learn, such as an understanding of the physical world, the ability to remember things, and the ability to plan. A standout statistic from his talk: a four-year-old has seen more data than is used to train an LLM! He also noted Moravec’s paradox – that some things are easier for humans than for machines, and vice versa – which perhaps helps explain his scepticism.
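For readers curious how that comparison can be made, here is a minimal back-of-envelope sketch. It assumes figures LeCun has used publicly in similar talks (roughly 16,000 waking hours by age four, an optic-nerve bandwidth of about 2 MB/s, and a training corpus of around 10^13 tokens at ~2 bytes per token); all of these numbers are illustrative assumptions rather than figures quoted at the summit.

```python
# Back-of-envelope comparison: visual data seen by a four-year-old
# versus the text data used to train a large LLM.
# All constants are rough assumptions (order-of-magnitude only).

WAKING_HOURS_BY_AGE_4 = 16_000      # ~11 hours/day over 4 years (assumed)
OPTIC_NERVE_BYTES_PER_SEC = 2e6     # ~2 MB/s visual bandwidth (assumed)
LLM_TRAINING_TOKENS = 1e13          # ~10 trillion tokens (assumed)
BYTES_PER_TOKEN = 2                 # rough average bytes per token (assumed)

child_bytes = WAKING_HOURS_BY_AGE_4 * 3600 * OPTIC_NERVE_BYTES_PER_SEC
llm_bytes = LLM_TRAINING_TOKENS * BYTES_PER_TOKEN

print(f"Child (vision, by age 4): {child_bytes:.1e} bytes")  # ~1.2e14
print(f"LLM training corpus:      {llm_bytes:.1e} bytes")    # ~2.0e13
print(f"Ratio: {child_bytes / llm_bytes:.1f}x")              # ~5.8x
```

On these assumptions the child has taken in several times more raw data than the LLM, which is the shape of the argument LeCun was making; the conclusion is robust to the exact constants, since the gap is roughly an order of magnitude.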
Safety and AI’s pace of change
Yoshua Bengio presented the International Report on the Safety of Advanced AI, which I was fortunate to have the opportunity to contribute to. He summarised what the report covers: capabilities, risks and risk mitigations. To him, the biggest concern was the pace of change – significant even since the report was finalised late in 2024. This led into a discussion on the report and the state of Frontier AI; concerns included agency, the lack of “self-awareness” of LLMs, the concentration of power in a few organisations, and the risks around cyber-security. It was generally agreed that risk mitigations need to cover policy as well as science and engineering, and that effective regulation can aid innovation.
This was further cemented during an informal and wide-ranging discussion event organised by the Centre for the Governance of AI at Les Salons de l'Hôtel des Arts et Métiers, demonstrating the significance of, and need for, international collaboration on AI safety.
Safety, governance and regulation
One of the highlights, for me, was a meeting and dinner at the Cercle de l’Union Interalliée on Governing in the Age of AI, organised by the Tony Blair Institute. I don’t think I have ever been in the presence of so many current and former ministers, from so many countries. This included Peter Kyle, the UK Secretary of State for Science, Innovation and Technology, who gave an enthusiastic talk on the potential benefits for the UK from investment in AI Growth Zones – including a compelling message on the importance of safety and assurance. Her Excellency Josephine Teo, Singapore’s Minister for Digital Development and Information, gave an interesting perspective on how a nation can benefit from innovation using AI, as opposed to developing AI, drawing comparisons with how Singapore prospers in the maritime sector despite not being a major shipbuilder.
At the “summit proper” in the Grand Palais, I contributed to a panel session on Trust, AI Governance and industry commitments. It was chaired by Audrey Plonk, Deputy Director for Digital Economy Policy at the OECD, and involved Takuo Imagawa, Vice-Minister at the Japanese Ministry of Internal Affairs and Communications, Lisa Soder, a Senior Policy Advisor at Interface, Peter Sarlin, the CEO of AMD Silo AI, and Sara Hooker, the VP of Research at Cohere – a fascinating combination of policy, industrial and research perspectives. While we didn’t reach any firm conclusions, it was heartening to note general acceptance of the benefits of inter-disciplinary collaboration, particularly drawing on expertise in the safety of software-intensive systems for those working on Frontier AI.
Safety and global collaboration
Increasingly, we are seeing recognition of the value of safety systems engineering, academic research and safety assurance at all stages and levels of AI development. The entire summit reinforced this idea, but it also emphasised the many potential benefits of AI, including for understanding the climate and the ocean. A key challenge for all of us is to understand better how to obtain the benefits of AI whilst managing the attendant risks.
The format of formal sessions and side discussions meant that I was able to exchange ideas with people from the AI Safety Institutes in the UK, Canada, France and Japan, to hold conversations with people from the UK’s Advanced Research and Invention Agency (ARIA), and to talk with Matt Clifford, the primary author of the UK’s AI Action Plan. The enthusiasm which he and Peter Kyle exude is very promising for the UK, especially for our ability to influence the responsible introduction of AI.
I was also able to make (and renew) contacts in academia, government and industry across four continents. Nowhere else could I have gathered such a diversity of contacts and views. Overall, the summit reconfirmed my view that we are dealing with a global problem, and that the community needs to find ways to work together across geographical, disciplinary and cultural divides to capitalise on the potential benefits of AI whilst controlling the attendant risks.