Centre for Assuring Autonomy announces world-first comprehensive safety argument to assure AI
The Centre for Assuring Autonomy (CfAA) has published a comprehensive approach to safety cases for the assurance of AI and autonomous systems.
The Balanced, Integrated and Grounded (BIG) argument addresses AI safety at both the technical and sociotechnical levels and takes a whole-system approach to AI safety cases, demonstrating how the entire safety argument can be brought together.
For decades, safety cases have been an accepted means of assuring the development, deployment, maintenance and decommissioning of safety-critical systems. As AI and autonomous systems become widespread across society, safety cases are becoming an increasingly important way for developers and regulators to address the emerging challenges these technologies present.
The BIG argument introduces, for the first time, a sustainable, scalable and practical solution to these challenges and, importantly, demonstrates that prioritising safety need not come at the expense of innovation.
The BIG argument builds on three leading safety assurance frameworks and methodologies developed by the CfAA:
- Principles-based Ethics Assurance (PRAISE)
- Safety Assurance of Autonomous Systems in Complex Environments (SACE)
- Assurance of Machine Learning for use in Autonomous Systems (AMLAS)
Significantly, as well as addressing concerns with autonomous systems and robotics, the BIG argument enables and supports the safe deployment of frontier AI models, addressing a critical gap in their development and deployment.
The BIG argument also refines the ethical claims about AI safety, considering them in more detail within the context of both the wider system (e.g. a fleet of autonomous mobile robots) and the social and organisational setting in which it is deployed (e.g. delivery of groceries in urban environments). It offers considerations for an autonomous system’s ability to operate safely within, and beyond, its defined operating context, and for human-machine interactions. It highlights the multidisciplinary, participatory and sociotechnical nature of safety assurance for complex AI-based systems, especially when they are granted greater autonomy and deployed in open environments.
CfAA Director and co-author John McDermid OBE FREng said: “The BIG Argument represents an important step in the integration and consolidation of different aspects of safety assurance, such as our SACE and AMLAS methodologies. It creates a cohesive approach that is applicable to many domains and sectors, such as maritime, automotive and healthcare, and is an exciting next step in the evolution of our work here at the CfAA.”
Dr Yan Jia, Lecturer in AI Safety and co-author of the paper, said: “We were able to develop The BIG Argument because of our extensive expertise and existing work in this area. It involved a multidisciplinary approach, bringing ethical, systems engineering and AI knowledge together to comprehensively address the safety assurance of AI. Our methodology was grounded in hands-on experience from real-world AI projects across diverse industries, including healthcare, aviation and automotive. These practical insights shaped The BIG Argument approach, guiding the safe design of AI components throughout the development lifecycle, the safe integration of AI components into the system, and the evaluation of the wider ethical impacts of the system.”
Professor Ibrahim Habli, CfAA Research Director, added: “We believe The BIG Argument is an actionable position paper that responds to the growing number of questions brought to us about the safety assurance of systems into which frontier AI has been integrated. We know general-purpose AI and frontier AI are here to stay, so it’s important that we engage with and develop ways to broaden traditional safety assurance to incorporate these technologies. We believe The BIG Argument is the first step in achieving this.”
Professor Sarah Thompson MBE, Acting Director of the Institute for Safe Autonomy (ISA), said: “The publication of The BIG Argument paper comes at a critical time in the UK’s AI future. It’s great to see the CfAA, the assurance pillar of the ISA, be the first to come forward with a practical and applicable approach to AI safety cases, one which takes into account the emerging challenges of frontier AI.”
The BIG Argument paper aims to improve transparency and accountability around the safety of AI, ultimately contributing to and shaping the development, deployment and maintenance of justifiably safe AI systems.