In this report, written by AAIP Programme Fellow Dr Roger Rivett, we present two new ontological models to support better identification of the risks associated with the introduction of autonomous vehicles.
This is our methodology for the Assurance of Machine Learning for use in Autonomous Systems (AMLAS). It comprises a set of safety case patterns and a process for systematically integrating safety assurance into the development of ML components.
This white paper, published by the Chartered Institute of Ergonomics and Human Factors, presents the outcomes of work led by Dr Mark Sujan as part of an AAIP demonstrator. It outlines eight key human factors principles that need to be taken into consideration for the successful design and use of AI in healthcare.
Exploring the challenges involved in assuring the safety of highly automated driving systems, we present a framework for structuring the key elements of the argumentation strategy and review the state of the art for each element of the framework.
A new report from the Royal Academy of Engineering Safer Complex Systems programme, providing an initial framework for understanding and improving the safety of complex, interconnected systems in a rapidly changing and uncertain world.
A report written in partnership with Egis for UK Research and Innovation. Part of this work applied an initial framework for the management of complex systems, developed by Professor McDermid and colleagues, to explore the considerations for using complex systems in future flight. This analysis supported the development of the Future Flight Aviation Safety Framework.
This briefing paper, written for the Institute of Manufacturing, provides insight into the safety assurance of autonomous systems in manufacturing. It is a supporting paper to OK Computer, a report commissioned by the Global Manufacturing and Industrialisation Summit (GMIS) exploring the safety and security implications of new technologies.
This report provides a single point of reference on the safety, regulatory and liability issues for operating inspection and maintenance robots in the European Union. It reviews legal frameworks for robotic infrastructure inspection and maintenance, along with relevant standards and best practices in development, verification and assurance. The report is based on work from the Robotics for Inspection and Maintenance (RIMA) project, which AAIP is part of. The RIMA project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 824990.
The paper was co-authored by AAIP’s Professor McDermid and Zoe Porter, and three leading researchers from other UK institutions. It looks at the regulation of future robotics and AI systems from an ethical perspective and summarises other work in this area.
This is a report from a workshop held in January 2021 to contribute to the maturing of, and critical reflection on, practical guidance for addressing the ethical implications of autonomous systems.
Professor John McDermid assisted in the writing of a section on software development, verification and validation in the GMG’s guideline for applying functional safety to autonomous systems in mining.
In March 2021 the BSI published PAS 1882:2021, Data collection and management for automated vehicle trials for the purpose of incident investigation – Specification. This is the first consensus standard to enable data collection and management for automated vehicle trials in support of incident investigation. AAIP’s Dr Mark Nicholson was part of the steering group that developed this standard.