2.3.3 Verification of the learned model

Assurance objective: Demonstrate that the learned model satisfies the defined safety requirements.

Contextual description: It is necessary to generate evidence that provides sufficient confidence that the learned model will satisfy the relevant safety requirements throughout operation. This requires evidence covering all defined operating scenarios in the defined operating environment. Evidence may be generated either through dynamic testing or through static analysis of the learned model.

Practical guidance

For all testing approaches, the focus is on the sufficiency of the test data with respect to coverage of the defined operating scenarios, and on the requirement that the test data be disjoint from the training data. The ML model may be tested through operation of the system itself, on a simulator prior to integration into the target RAS, or through a mix of the two. Both properties of the test data can be checked mechanically before test evidence is collected, as in the sketch below.
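
As an illustration only, the following Python sketch shows how the two properties above, disjointness from the training data and coverage of the defined operating scenarios, might be checked. The array layout, the exact-match disjointness semantics, and the scenario labels are assumptions made for the example, not part of any prescribed method.

    import numpy as np

    def is_disjoint(train_inputs: np.ndarray, test_inputs: np.ndarray) -> bool:
        """True if no test input also appears, byte-for-byte, in the training
        data. Assumes both arrays share a dtype, and that exact matching is an
        acceptable proxy for 'disjoint'."""
        train_rows = {row.tobytes() for row in train_inputs}
        return not any(row.tobytes() in train_rows for row in test_inputs)

    def scenario_coverage(test_scenarios, defined_scenarios: set) -> float:
        """Fraction of the defined operating scenarios exercised by at least
        one test case."""
        return len(defined_scenarios & set(test_scenarios)) / len(defined_scenarios)

    # Hypothetical usage: both checks should pass before testing begins.
    # assert is_disjoint(X_train, X_test)
    # assert scenario_coverage(test_labels, {"urban_day", "urban_night", "rural"}) == 1.0

For continuous sensor data, exact matching is a weak notion of disjointness; in practice a similarity threshold between test and training inputs may be more appropriate.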

For all verification approaches, there are challenges in specifying and verifying the assumptions about the environment and operation that must be made in order to create usable models. The limited explainability of learned models can also make them difficult to analyse. One pragmatic mitigation, sketched below, is to record the assumptions explicitly as an operating envelope and search for counterexamples to each safety requirement within it.
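
The sketch below assumes a callable model and a safety_property predicate, and uses illustrative bounds; none of these names or values come from the guidance itself. Note that failing to find a counterexample is supporting evidence, not proof, that the requirement holds.

    import numpy as np

    # Operating-envelope assumptions, stated explicitly so they can be
    # reviewed and verified alongside the model (illustrative values only).
    ASSUMED_BOUNDS = {
        "speed_mps": (0.0, 15.0),  # assumption: RAS speed never exceeds 15 m/s
        "range_m":   (0.5, 50.0),  # assumption: obstacle-sensor working range
    }

    def falsify(model, safety_property, n_samples: int = 10_000, seed: int = 0):
        """Search the assumed envelope for an input violating the property.
        Returns a violating input if one is found, otherwise None."""
        rng = np.random.default_rng(seed)
        lows = np.array([lo for lo, _ in ASSUMED_BOUNDS.values()])
        highs = np.array([hi for _, hi in ASSUMED_BOUNDS.values()])
        for x in rng.uniform(lows, highs, size=(n_samples, len(ASSUMED_BOUNDS))):
            if not safety_property(x, model(x)):
                return x  # input inside the envelope that violates the requirement
        return None

Making the bounds a first-class artefact means the environmental assumptions themselves can be reviewed, challenged, and traced to the defined operating environment, rather than remaining implicit in the test harness.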

 
