Tutorial on safety assurance of autonomy and machine learning

Seminar
This event has now finished.
  • Date and time: Thursday 26 September 2019, 9am to 5pm
  • Location: DoubleTree by Hilton Hotel London - Victoria
  • Audience: Open to safety engineers and others working on the assurance of safety-critical and autonomous systems
  • Admission: Various, booking required

Event details

This Safety-Critical Systems Club tutorial considers the assurance of systems that employ autonomy and machine learning (ML). This is critically important as such systems are introduced in many sectors, including air, sea and road vehicles. Societal and regulatory acceptance will not be possible unless the safety of such systems is assured.

AAIP's Richard Hawkins, Nikita Johnson and Mark Nicholson are speakers, alongside James McCloskey from Frazer-Nash Consultancy.

The day comprises:

  1. Introduction to assuring autonomy - explores the contribution of software assurance to the overall assurance of safety-critical systems. The nature of ML and autonomy disrupts current approaches. An abstract structure to explain the elements of an autonomous system (AS) is used to frame subsequent discussions.
  2. Technology overview - the mathematical concepts underlying ML are outlined. Real-world applications of techniques such as neural networks are considered, including their limitations.
  3. Assuring autonomy (i) - the potential impact of autonomy on the 4+1 principles of software safety is explored. We look at each principle and how it is challenged by autonomy, within a formal framework. The aim is to minimise disruption to current best practice.
  4. Assuring autonomy (ii) - specific concerns for AS, including security and safety, are covered. Broader issues such as ethics, competence, regulation and liability are also considered. The approach of using a Body of Knowledge backed by demonstrator projects is elaborated.
  5. Industrial point of view – James McCloskey, Frazer-Nash Consultancy