3.1.6 Identifying machine learning deviations

Assurance objective: Identify potential sources of deviation from required behaviour for the elements of the system implemented using ML.

Contextual description: Even where sufficient effort is made to provide an ML implementation that satisfies all the safety requirements, assurance still requires explicit consideration of mechanisms that might cause the ML implementation to deviate from its required behaviour during operation. Such mechanisms include, for example, those resulting in false positive or false negative classifications as part of the understanding function. Identifying deviations is often more challenging in systems implemented using ML than in more traditional systems, due to issues of explainability.
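
As a minimal illustration of the false positive and false negative mechanisms mentioned above, the sketch below measures both rates for a binary classifier on a held-out validation set, treating each as a distinct potential source of deviation from required behaviour. This is an illustrative sketch only, not prescribed practical guidance (which the section leaves to be determined); the function name and example data are hypothetical.

import numpy as np

def deviation_rates(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Compute false-positive and false-negative rates for a binary
    classifier, treating each as a distinct source of deviation from
    the required behaviour (hypothetical helper for illustration)."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    return {
        # Deviation mechanism: reporting a condition that is absent.
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        # Deviation mechanism: missing a condition that is present.
        "false_negative_rate": fn / (fn + tp) if (fn + tp) else 0.0,
    }

# Example: ground-truth labels and model predictions on validation data.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
print(deviation_rates(y_true, y_pred))

Separating the two rates matters because their safety impact typically differs: in a hazard-detection role, a false negative (a missed hazard) is usually a more severe deviation than a false positive.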

Practical guidance: To be determined.

