Explainability

Assurance objective: Be able to provide explanations, when required, for decisions taken by the system.

Contextual description: For assurance it is often important to be able to explain why the system took a particular decision in a particular set of circumstances. There are four main reasons why explainability is important:

  • Explain to justify – As part of the assurance or regulatory process, it may be necessary to justify why a particular decision was taken.
  • Explain to correct – During training, errors made by the algorithm (such as misclassification) must be corrected to improve its performance. Correcting an error successfully may require an explanation of why the algorithm made the incorrect decision (see the sketch after this list).
  • Explain to improve – Where the performance of an algorithm needs to be improved, an explanation of the decisions taken may help to identify how improvements can be achieved most effectively.
  • Explain to discover – To ensure that the learning process is effective, it may be necessary to understand which parameters or characteristics have a significant impact on what is learned.
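To make the "explain to correct" case concrete, the sketch below trains an inherently interpretable decision tree, finds a misclassified test example, and prints the decision rules that example traversed. The scikit-learn estimator, the iris dataset, and the shallow-tree setting are illustrative assumptions, not techniques prescribed by this guidance.

# A minimal sketch of "explain to correct": train an interpretable model,
# find a misclassified example, and print the decision rules that led to
# the wrong prediction. Dataset and model choices are illustrative only.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)

# Pick the first misclassified test example, if any.
wrong = np.flatnonzero(pred != y_test)
if wrong.size:
    i = wrong[0]
    x = X_test[i]
    print(f"Predicted {iris.target_names[pred[i]]}, "
          f"actual {iris.target_names[y_test[i]]}")

    # decision_path gives the tree nodes the example traversed; printing
    # the test at each internal node explains why this decision was taken.
    node_indicator = clf.decision_path(x.reshape(1, -1))
    leaf = clf.apply(x.reshape(1, -1))[0]
    tree = clf.tree_
    for node in node_indicator.indices:
        if node == leaf:
            continue
        feat, thresh = tree.feature[node], tree.threshold[node]
        op = "<=" if x[feat] <= thresh else ">"
        print(f"  {iris.feature_names[feat]} = {x[feat]:.2f} {op} {thresh:.2f}")

The same idea extends to black-box models via model-agnostic explainers; the point here is only that a per-decision explanation pinpoints which tests drove the incorrect decision, which is what correction needs.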

Practical guidance:

General guidance on how to ensure that decisions are explainable in a manner comprehensible to a human.

Specific guidance on the use of explainability for particular goals will be provided against the relevant assurance objective.

Interpretability techniques:
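One widely used model-agnostic technique is permutation feature importance: shuffle one feature at a time and measure how much the model's score on held-out data degrades. A minimal sketch, assuming a scikit-learn environment (the estimator and dataset are illustrative and could be any fitted model):

# A minimal sketch of one model-agnostic interpretability technique:
# permutation feature importance. The model and dataset are illustrative
# assumptions; any fitted estimator with a score method would do.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# a large drop means the model's decisions depend heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: "
          f"{result.importances_mean[idx]:.3f} ± {result.importances_std[idx]:.3f}")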

Interpretability requirements:

Interpretability evaluation:
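One simple, quantitative check on an explanation of a black-box model is surrogate fidelity: how often an interpretable surrogate, trained to mimic the black box, agrees with it on unseen data. A minimal sketch, with all models and data chosen purely for illustration:

# A minimal sketch of one way to evaluate an explanation: fidelity of an
# interpretable surrogate to the black-box model it explains. All models
# and data here are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Train a shallow, human-readable tree to mimic the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on unseen
# data. Low fidelity means the "explanation" does not reflect the model.
fidelity = accuracy_score(black_box.predict(X_test),
                          surrogate.predict(X_test))
print(f"Surrogate fidelity: {fidelity:.2%}")

A fidelity figure on its own is not sufficient evidence; it only bounds how far the human-readable explanation can be trusted to reflect the model it explains.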
