Body of Knowledge definitions

This page captures what is meant by these terms as used in the assurance objectives in the BoK. Where alternative definitions are required as part of guidance material in the BoK (for example, if domain-specific guidance uses the term ‘Hazard’ in a different way), these terms may be redefined for that purpose, but the standard definitions below should remain the stable default throughout the BoK. These definitions should be used consistently throughout the AAIP.

Accident

An unintended event or sequence of events leading to harm.

Argument

A series of claims intended to establish the truth of a conclusion.

Assurance (n)

Justified confidence in a property.

Assurance argument

An argument used to demonstrate assurance based upon the available evidence.

Assurance case

Arguments and evidence intended to demonstrate assurance.

Assurance case pattern

A means of documenting and reusing assurance argument structures.

Assurance deficit

A specific source of epistemic uncertainty caused by a lack of knowledge or information.

Attack scenario

An event or sequence of events through which a vulnerability may be exploited.

Autonomous

Having autonomy.

Autonomy

The capability to make decisions free from human control.

Automatic

Able to operate independently of human control.

Component

Element that forms part of a system.

Conformance

Fulfilment of requirements.

Failure mode

A specific way in which failure may occur.

Formal verification

Verification using mathematical methods.

Hazard

A condition of a system that can develop into an accident through a sequence of normal events and actions.

Hazardous behaviour

Behaviour that may result in a hazard.

Hazard risk

The product of the severity and probability of a hazard.

Incident

An event which significantly degrades safety margins, but does not lead to an accident.

Machine Learning (ML)

Enabling computers to learn from data, in the form of observations and real-world interactions, in order to create a model of the real world.
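
As a concrete (and deliberately simple) illustration of creating a model from data, the sketch below fits a straight line to a handful of hypothetical observations in Python; practical ML models are far richer, but the principle of learning a model from observed data is the same.

```python
# Minimal sketch (illustrative only): "learning" a model from observed data.
# Here the model is a straight line fitted by least squares; the observations
# are hypothetical values, not data from any real system.
import numpy as np

# Hypothetical observations of the real world: (input, measured output) pairs.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 2.1, 3.9, 6.2, 8.0])

# Learn the model: estimate slope a and intercept b from the data.
a, b = np.polyfit(x, y, deg=1)

# Use the learned model to predict an unseen case.
print(f"model: y = {a:.2f}*x + {b:.2f}; prediction at x=5: {a * 5 + b:.2f}")
```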

Random failure

Failure due to random events, most commonly resulting from physical causes, that can be characterised by statistical failure models.
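
As an illustration of what a statistical failure model can look like, the sketch below uses the common constant-failure-rate (exponential) model; the failure rate and mission time are hypothetical values chosen for illustration only, not figures from the BoK.

```python
# Minimal sketch (illustrative only): a statistical model of random failure.
# Assumes a constant failure rate (exponential model); values are hypothetical.
import math

failure_rate = 1e-5      # hypothetical failures per operating hour
mission_time = 1000.0    # operating hours

# Probability of at least one random failure within the mission time.
p_failure = 1.0 - math.exp(-failure_rate * mission_time)
print(f"P(failure within {mission_time:.0f} h) = {p_failure:.4f}")
```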

Regulation (n)

A set of rules or directives.

Regulatory authority

An organisation that can make, maintain or enforce regulations.

Reinforcement learning

A type of machine learning that allows computers to determine their required behaviour through exploration within a specific context, in order to maximise some notion of cumulative reward.
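
The sketch below illustrates the exploration and cumulative-reward idea in the simplest possible setting, a single-state ‘bandit’ problem with hypothetical reward probabilities; real reinforcement learning problems involve states and sequences of decisions, but the underlying trade-off between exploring and exploiting is the same.

```python
# Minimal sketch (illustrative only): exploration to maximise cumulative reward,
# shown as an epsilon-greedy "bandit" learner. Reward probabilities are hypothetical.
import random

true_reward_prob = [0.2, 0.5, 0.8]   # hidden payoff of each action
estimates = [0.0, 0.0, 0.0]          # learner's current value estimates
counts = [0, 0, 0]
epsilon = 0.1                        # how often to explore at random

total_reward = 0
for step in range(10_000):
    # Explore occasionally, otherwise exploit the best estimate so far.
    if random.random() < epsilon:
        action = random.randrange(3)
    else:
        action = max(range(3), key=lambda a: estimates[a])

    reward = 1 if random.random() < true_reward_prob[action] else 0
    total_reward += reward

    # Incrementally update the estimate for the chosen action.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print("value estimates:", [round(v, 2) for v in estimates])
print("cumulative reward:", total_reward)
```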

Residual risk

The risk that remains once all risk reduction measures have been taken.

Risk

The product of severity and probability.
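
For example, assuming a numeric severity scale and an estimated probability (both values below are hypothetical, not from the BoK), risk can be computed directly as that product; the same calculation applies to hazard risk as defined above.

```python
# Minimal sketch (illustrative only): risk as the product of severity and probability.
def risk(severity: float, probability: float) -> float:
    """Return risk as the product of severity and probability."""
    return severity * probability

# A hypothetical hazard: severity scored 4 on a 1-5 scale,
# probability of occurrence estimated at 0.01 per operating hour.
print(risk(severity=4, probability=0.01))   # -> 0.04
```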

Robot

A machine capable of carrying out a complex series of actions automatically.

Robotics

The design, construction, operation, and use of robots.

Safety

The degree of freedom from hazard risk.

Safety assurance

Justified confidence in safety.

Safety justification

An evidence-based justification of safety assurance.

Safety requirement

Description of a property or behaviour required to ensure safety.

Simulation

A model of a real-world situation on a computer.
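
To make the idea concrete, the sketch below is a minimal time-stepped simulation of a braking vehicle; the speed, deceleration and time step are hypothetical values, and real simulations of robotic and autonomous systems are vastly more detailed.

```python
# Minimal sketch (illustrative only): a time-stepped simulation of a braking vehicle.
speed = 20.0         # initial speed, m/s (hypothetical)
deceleration = 5.0   # constant braking, m/s^2 (hypothetical)
dt = 0.01            # simulation time step, s
distance = 0.0

# Step the model forward in time until the vehicle stops.
while speed > 0.0:
    distance += speed * dt
    speed -= deceleration * dt

print(f"simulated stopping distance: {distance:.1f} m")
```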

Static analysis

Evaluation without operation.

System

A group of interacting or interrelated elements that form a unified whole.

Systematic failure

Failure due to flaws in specification, design, manufacture, installation or maintenance.

Testing

Evaluation through operation.

Validation

The evaluation of the correctness of a specification.

Verification

The evaluation of compliance to a specification.

Vulnerability

A weakness which can be exploited to perform an attack against assets.

Further discussion of ‘autonomy’

The Programme takes the view that the key difference between manually controlled and autonomous systems is that the RAS has decision-making capability and authority. This is what is meant by ‘decisions free from human control’. All software implements decisions in a sense, e.g. taking an ‘else’ rather than a ‘then’ branch. However, the intent is that the decisions are those that might otherwise have been taken by humans and that require intelligence, situational understanding and freedom, in the sense of individual autonomy, e.g. stopping at a red light, or categorising an object as a person rather than a lamp-post.

The notion of “taken by humans” is not sharply defined, and we might regard some systems, e.g. a kettle which shuts off when the water is boiling, as automatic rather than autonomous. In general, we would expect the term autonomy, rather than automatic, to be used where:

  • there is an open environment, e.g. as in driving on the roads, as opposed to a closed environment which is well-defined and understood;
  • the range of options in decision-making is very large and may not even be bounded;
  • there is considerable uncertainty in assessing the situation and/or choosing a course of action (making a decision).

In practice, the BoK will provide guidance in a way which reflects the particular challenges, e.g. open vs closed environments, and will not be constrained by whether or not some RAS is viewed as automatic as opposed to autonomous.

In many domains, standards or other documents define levels of autonomy from full human control, via shared human-machine decision-making (or the possibility of handover from machine to human), up to “full autonomy”, consistent with the definition given above. The intent is that the definition is interpreted flexibly, and would include shared human-RAS decision-making, not just “full autonomy”.

Dictionary definitions of autonomy use phrases like “freedom from influence and control”. We have deliberately excluded “influence”, as we would expect RAS to be influenced by the operating environment, e.g. the behaviour of other cars or pedestrians in autonomous driving, and the behaviour of other ships in maritime autonomy.