Blog post: Accidental autonomy

Posted on Friday 8 November 2019

Boeing’s 737 Max has been widely reported on recently, following two similar fatal accidents within five months of each other. Reports suggest that the aircraft’s Manoeuvring Characteristics Augmentation System (MCAS) was to blame, or was at least a major causal factor, in both accidents.

Boeing may never have conceived the MCAS as an autonomous system. In effect, however, it was autonomous: it took decisions about stall without involving the pilots — indeed, at the time of the first accident, the pilots didn’t even know that MCAS existed.

If we consider, then, that the Boeing 737 Max MCAS is in fact an autonomous system, this tragic situation illustrates the technical, ethical and governance issues that the individuals and organisations developing, regulating and using robotics and autonomous systems (RAS) are already grappling with.

Technical

Critical Barriers to Assurance and Regulation (C-BARs) are those issues which, if not resolved, might lead to either:

  • an unsafe system being deployed (as in the case of the Boeing 737 Max); or
  • safe systems not being deployed (because regulation is too restrictive).

With the Boeing 737 Max, the central C-BAR is “handover”, where a human operator (the pilot) is required to take back control of the RAS (the aeroplane). How do we ensure (and assure) that this can be done safely and effectively?

In the Ethiopian Airlines crash, the pilots tried on multiple occasions to override the MCAS but were ultimately unable to do so.

To be able to take over effectively, the pilots need knowledge and time. Knowledge arises in part from understanding the aircraft and from training. Time comes from the design of the system, which should enable pilots to assess the situation and give them the information they need to work out how to handle the problem. These principles can be used to guide design and so ensure safety; analysis of the system, operating procedures, training and so on can then be used to assure safety.
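
To make the point concrete, the sketch below (in Python, purely illustrative and not Boeing’s actual MCAS logic, which is not reproduced here) shows one way a design could give the pilot an unconditional override and surface an explanation of what the automation is doing and why; the names, threshold and trim values are all hypothetical.

    # Purely illustrative sketch, not Boeing's MCAS design: one control cycle of a
    # hypothetical stall-protection function in which the pilot's override is
    # checked first (so taking back control always works) and every action comes
    # with an explanation that could be surfaced to the crew.

    from dataclasses import dataclass

    @dataclass
    class SensorState:
        angle_of_attack_deg: float  # measured angle of attack (possibly from a faulty sensor)
        pilot_override: bool        # pilot has commanded manual control

    # Hypothetical threshold and trim increment, chosen only for illustration.
    AOA_THRESHOLD_DEG = 14.0
    NOSE_DOWN_TRIM = -0.5

    def automated_trim_command(state: SensorState) -> tuple[float, str]:
        """Return (trim increment, explanation) for one control cycle."""
        if state.pilot_override:
            return 0.0, "Pilot override active: automation suspended."
        if state.angle_of_attack_deg > AOA_THRESHOLD_DEG:
            return NOSE_DOWN_TRIM, (
                f"AoA {state.angle_of_attack_deg:.1f} deg exceeds "
                f"{AOA_THRESHOLD_DEG} deg: applying nose-down trim."
            )
        return 0.0, "AoA within limits: no automated trim."

    if __name__ == "__main__":
        for state in (SensorState(16.2, pilot_override=False),
                      SensorState(16.2, pilot_override=True)):
            trim, why = automated_trim_command(state)
            print(f"trim={trim:+.1f}  {why}")

The design point is simply that the override branch sits above the automation branch, and that the explanation gives the crew the knowledge, and hence the time, to act.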

Ethical

When lives are lost in any situation, ethical questions about who is to blame are raised. When the decision that caused the loss of life was taken by a machine (in this case the MCAS), the ethical spotlight moves to the designers who built the system (and who reportedly didn’t tell pilots it was on the aircraft), those who designed the training for the pilots flying the plane, and the regulators who allowed the system to be deployed.

It would be inappropriate to make ethical judgements before the accident reports are finalised. However, there is an ethical principle that “ought implies can”: if pilots ought to do something to ensure safety, for example override the MCAS, then it is essential that the designers ensure they can actually do so effectively. Perhaps this principle will provide a useful perspective on the accident reports once they are published.

Governance

RAS, just like aircraft such as the Boeing 737 Max, are developed, operated and regulated by organisations rather than individuals. The Boeing 737 Max was designed and manufactured by Boeing and certified by the Federal Aviation Administration (FAA). Questions have been raised about just how much the FAA relied on Boeing itself to certify the 737 Max.

Mary Schiavo, a former Inspector General at the US Department of Transportation, said: “At the FAA, they know they’re outgunned by Boeing. They know they don’t have the kind of resources they need to do the job they’re tasked with doing. They pretend to inspect, and Boeing pretends to be inspected, when in fact Boeing is doing it all almost entirely by itself.”

We need a framework of ethical governance for organisations developing or operating RAS (including aircraft that incorporate autonomous systems), and we need to consider what such a framework will look like and how it will be implemented.

Assured autonomy

Tragedies such as these two Boeing 737 Max crashes bring an acute refocus on safety. Levels of aviation safety are generally high: the Aviation Safety Network reported that in 2018 the accident rate was one fatal accident per 2,520,000 flights.

The situation in which we now find ourselves shows that assuring the safety of robotics and autonomous systems is vital: we must be able to explain why a system has taken the decisions it has, establish how we can assure a safe handover from system to human, and ensure this is understood by all stakeholders (the public included).
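
As a sketch of what being able to explain why a system has taken the decisions it has could mean in practice, the hypothetical Python fragment below logs each automated action together with the inputs and rule that produced it, so that a record exists for investigators, regulators and operators; it is not based on any real avionics or assurance standard.

    # Hypothetical decision log for explainability, not a real avionics standard:
    # every automated action is recorded with the inputs and rationale behind it.

    import json
    import time
    from dataclasses import asdict, dataclass, field

    @dataclass
    class DecisionRecord:
        inputs: dict        # sensor values the decision was based on
        action: str         # what the automation did
        rationale: str      # which rule or threshold triggered it
        timestamp: float = field(default_factory=time.time)

    class DecisionLog:
        """Append-only log that can be exported for later review."""

        def __init__(self) -> None:
            self._records: list[DecisionRecord] = []

        def record(self, inputs: dict, action: str, rationale: str) -> None:
            self._records.append(DecisionRecord(inputs, action, rationale))

        def export(self) -> str:
            return json.dumps([asdict(r) for r in self._records], indent=2)

    if __name__ == "__main__":
        log = DecisionLog()
        log.record(inputs={"angle_of_attack_deg": 16.2},
                   action="nose-down trim -0.5",
                   rationale="AoA above 14.0 deg threshold")
        print(log.export())

The point is not the particular format but that each decision leaves a record that can later be explained to stakeholders.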

Ensuring and assuring the safety of complex computer-controlled systems is challenging. As the 737 Max accidents have shown, current assurance processes are at their limits in dealing with such systems; the increasing introduction of autonomous systems will require new and improved technical processes, and also better means of communicating the results of assessments to key stakeholders.

Professor John McDermid OBE FREng
Director
Assuring Autonomy International Programme 

