Tutorial on safety assurance of autonomy and machine learning
This Safety-Critical Systems Club tutorial considers the assurance of systems that employ autonomy and machine learning (ML). Such assurance is critically important as these systems are introduced in many sectors, including air, sea and road vehicles: societal and regulatory acceptance will not be possible unless their safety is assured.
AAIP's Richard Hawkins, Nikita Johnson and Mark Nicholson are speakers, alongside James McCloskey from Frazer-Nash Consultancy.
The day comprises:
- Introduction to assuring autonomy - explores the contribution of software assurance to the overall assurance of safety-critical systems. The nature of ML and autonomy disrupts current approaches. An abstract structure to explain the elements of an autonomous system (AS) is used to frame subsequent discussions.
- Technology overview - the mathematical concepts underlying machine learning are outlined. Real-world applications of techniques such as neural networks are considered, including their limitations.
- Assuring autonomy (i) - the potential impact of autonomy on the 4+1 principles of software safety is explored. We look at each principle and how it is challenged by autonomy, within a formal framework. The aim is to minimise disruption to current best practice.
- Assuring autonomy (ii) - specific concerns for AS, including security and safety, are covered. Broader issues such as ethics, competence, regulation and liability are also considered. The approach of using a Body of Knowledge backed by demonstrator projects is elaborated.
- Industrial point of view - James McCloskey, Frazer-Nash Consultancy