Blog post: Human control of AI and autonomy: the art of the possible

Posted on Tuesday 17 September 2019

Human control of artificial intelligence (AI) and autonomous systems (AS) is always possible — but the questions are: what form of control is possible, and how is control assured?

AI and AS will bring huge benefits to society. But if they go wrong, they have the potential to harm people, society and the environment.

To mitigate this risk, many principles for the governance of AI and AS advocate ‘human control’ — but what does this mean, how can it be realised as a guiding principle for responsible design, and how can the resulting system be assured?

We can exercise human control to prevent harm in two primary ways: 

  • operationally, by intervening and overriding the decisions of the AI or AS
  • prior to operation, by assessing whether or not the technology is ethical, safe, legal, etc. and well enough understood to be deployed 

Operational control — an override function

Humans can only intervene and/or override a recommendation, decision or action where they have the time, knowledge and skills to do so.

For a human to decide whether an action should be implemented, or whether an alternative course (including no action) is more appropriate, there must be enough time for the human to understand the system’s recommendation or intended course of action.

The knowledge required to intervene or override an action is significant: it includes understanding of the limitations of the system, of the consequences of actions, and of the attendant ethical issues, e.g. bias or discrimination, breach of privacy, and unsafe behaviour. Knowledge is also needed about the environment in which the action is being taken, since context is critical to deciding whether to take an action, e.g. something that is safe or ethical in one environment may not be in another.

To be effective, this knowledge should be available independently of the AS/AI, e.g. through training or sources of information separate from the system. A system for recommending prison sentences has these sorts of characteristics — here the judge, case law, etc. are independent sources of knowledge. If the knowledge is only available from the system, then there is a real risk that human control might be illusory and the human will act like an automaton, simply clicking ‘OK’ when an action is proposed. In such cases, more assurance in the system is needed.

The key human skills are those needed to implement the intervention or overriding action effectively. The more familiar the action (the more often it is practised), the more likely it is to be carried out effectively. There are several ‘shaping factors’ here too. For example, if an action was once familiar but is now rarely carried out, as might become the case with highly automated driving, then skills ‘fade’ and what was once a well-rehearsed action may no longer be performed effectively when called upon. Further, the use of AI and AS may require the acquisition of new skills, as systems which were once familiar change with the introduction of new capabilities, and the previous skills may no longer be relevant. This raises the question of how the new skills are acquired, which may mean that specific new training is needed when AI and AS are introduced into systems.

The time, knowledge and skills requirements can be interpreted as criteria for responsible design, applied to different AI/AS in their context of use. This is quite classical, and well understood, although the details of a given case may be highly complex. However, the more the knowledge depends on the system itself, the greater the burden of assuring the system, rather than relying on the ability of the human to exercise control.
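To make these criteria concrete, here is a minimal sketch, written in Python purely for illustration (the class, field names and example values are invented for this post, not part of any standard or tool), which frames time, knowledge and skills as a simple pre-deployment checklist: operational control is only credible if all three are satisfied.

```python
# Illustrative only: invented names and values, sketching the time/knowledge/skills criteria.
from dataclasses import dataclass

@dataclass
class OverrideAssessment:
    time_available_s: float        # time between a recommendation and its effect
    time_needed_s: float           # time a trained human needs to understand and act
    knowledge_independent: bool    # is the knowledge available independently of the AI/AS?
    skills_practised: bool         # is the overriding action regularly practised?

    def operational_control_credible(self) -> bool:
        """Operational control is credible only if all three criteria are met."""
        return (self.time_available_s >= self.time_needed_s
                and self.knowledge_independent
                and self.skills_practised)

# e.g. a sentencing-recommendation system: ample time, independent case law, trained judges
sentencing = OverrideAssessment(time_available_s=3600, time_needed_s=1800,
                                knowledge_independent=True, skills_practised=True)
print(sentencing.operational_control_credible())  # True -> human override is a credible control
```

In practice, each of these judgements would itself need evidence, e.g. studies of how long operators actually take to understand a recommendation, or how often the overriding action is exercised.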

Pre-operation — to deploy or not to deploy

Sufficient confidence is required, prior to operation, that the system can be used safely, ethically, and legally.

If it is not physically possible to exercise operational control — there is not enough time, knowledge or skill for humans to override the system — then enough confidence has to be established prior to operation that the system is ethical, safe, legal, etc. In this case human control is exercised through the decision to allow the system to operate, or to prevent it from operating. There are two cases to consider — where the system can be ‘turned off’ without undesirable consequences, and where it has to continue operating for an extended period of time to be ethical, safe, legal, etc.

Pre-operation — an ‘off switch’

Where a system can be made ethical, safe, legal, etc. by turning it off automatically, then sufficient confidence is required, prior to operation, that the system will shut itself down when necessary.

In the case where direct human control is not possible, but simply shutting down the system is acceptable, then systems can include an automated version of the human intervention. We will refer to such a mechanism as an ‘automated off switch’.

As an example, an ‘automated off switch’ is provided in computerised stock market systems, where trading in certain shares is suspended if their prices move outside a defined range. Here, assessment of the system can focus on gaining confidence that the ‘automated off switch’ can reliably detect any undesirable behaviour and act fast enough to prevent harm.
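As a toy illustration of the idea (the class, price band and behaviour below are invented for this sketch; real market circuit breakers are defined by exchange rules), the ‘automated off switch’ amounts to monitoring each new value against a defined range and suspending operation as soon as the range is breached:

```python
# Illustrative only: a toy 'automated off switch' in the spirit of a market circuit breaker.
class TradingCircuitBreaker:
    def __init__(self, reference_price: float, band: float = 0.10):
        self.reference_price = reference_price
        self.band = band               # e.g. suspend if the price moves more than +/-10%
        self.suspended = False

    def on_price(self, price: float) -> bool:
        """Check each new price; suspend trading if it leaves the defined range."""
        lower = self.reference_price * (1 - self.band)
        upper = self.reference_price * (1 + self.band)
        if not (lower <= price <= upper):
            self.suspended = True      # the 'automated off switch' fires
        return self.suspended

breaker = TradingCircuitBreaker(reference_price=100.0)
for p in (101.5, 98.0, 112.3):         # the last tick breaches the +10% band
    if breaker.on_price(p):
        print(f"Trading suspended at price {p}")
        break
```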

As with the human override, the ‘automated off switch’ is quite a classical engineering approach, and one familiar to regulators, although it is made more complex where the system employs machine learning and can adapt in operation.

Pre-operation — state of minimum risk

Where a system can only be made ethical, safe, legal, etc. by continuing to operate until it reaches a minimum risk state (MRS), then sufficient confidence is required, prior to operation, that the system can attain an MRS when necessary.

In the case of autonomous vehicles (AVs) on the road, it is common to identify an MRS, such as coming to a stop at the side of the road, and to require the system to continue to operate until that MRS is reached.

The notion of continual operation until an MRS is reached is most readily illustrated from a safety perspective. Reaching an MRS is more or less challenging depending on the environment. For an AV, it might only be necessary to keep the system operating for a few tens of seconds, e.g. to slow down and pull to the side of the road. For a maritime system, normal operations might need to be continued for several days to get a vessel to port, or an alternative way of achieving an MRS might be to stay in the same place, and then to send out a rescue mission to bring the vessel to port.
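A minimal sketch of the idea, with invented states and transitions rather than any real AV architecture, is a small supervisor that keeps the system operating in a degraded mode after a fault, and only treats shutdown as acceptable once the MRS has been reached:

```python
# Illustrative only: invented states sketching continued operation until an MRS is reached,
# e.g. an AV slowing down and pulling to the side of the road after a fault.
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()
    DEGRADED = auto()      # fault detected: keep operating, head for the MRS
    MINIMUM_RISK = auto()  # e.g. stopped at the side of the road

class MrsSupervisor:
    def __init__(self):
        self.mode = Mode.NORMAL

    def step(self, fault_detected: bool, stopped_at_roadside: bool) -> Mode:
        if self.mode is Mode.NORMAL and fault_detected:
            self.mode = Mode.DEGRADED          # cannot simply switch off mid-lane
        elif self.mode is Mode.DEGRADED and stopped_at_roadside:
            self.mode = Mode.MINIMUM_RISK      # MRS reached; now safe to shut down
        return self.mode

sup = MrsSupervisor()
print(sup.step(fault_detected=True, stopped_at_roadside=False))   # Mode.DEGRADED
print(sup.step(fault_detected=True, stopped_at_roadside=True))    # Mode.MINIMUM_RISK
```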

As with the other cases, the MRS approach can be realised, at least from a safety perspective, through classical engineering and regulatory practices, and the idea is beginning to be adopted explicitly in some domains, e.g. AVs.

Assurance is critical

Assurance — justified confidence — is needed in the safety and other properties of the system, and in the effectiveness and reliability of the human controls.

In all of the cases above, the human is in control, but progressively less directly as we proceed from human operational control through intervention or override, via confidence in an ‘automated off switch’, to confidence in the ability of the system to reach an MRS.

For human intervention, it is necessary to assure that the humans have sufficient time to take control, the right knowledge to do so (from sources of information independent of the AI/AS), and the requisite skills. Such assurance also needs to consider psychological factors: will humans actually take the necessary steps when it is right for them to do so? There is evidence that humans may not be able to sustain concentration and may be unwilling to override technology they have learnt to trust.

Demonstrating assurance of an ‘automated off switch’ involves showing that the system can detect when it is about to breach the boundary of acceptable behaviour and can implement remedial action. With complex systems the difficulties include showing that these boundaries can be detected reliably and in a timely manner, in all scenarios of use, and that the action to shut down the system is itself safe, secure, etc. A similar level of assurance would be appropriate where there is human control, but no independent means for the human to obtain the knowledge needed to decide whether or not to switch off the system.

Demonstrating assurance in the automated transition to an MRS is the most challenging as we have to be able to show continued safe operation in the presence of failures or malfunctions, including of the AI components. It is common to produce an assurance case for a complex system, and for the acceptance of the assurance case to be subject to human approval — often by a regulator. Such governance mechanisms should continue for AS in regulated domains, and be introduced in other domains including for AI, e.g. recommender systems, but it will also be necessary to ensure that regulators have the necessary knowledge and skills to exercise this oversight and overall control.

Human control is always possible

The nature of the control varies with the type of system and opportunity for intervention. As more reliance is placed on the technology itself to achieve the desired outcomes — safety, security, privacy, etc. — the more stringent, and complex, are the requirements for assurance.

Professor John McDermid OBE FREng
Director
Assuring Autonomy International Programme

