Reflections on AI Governance in 2023

News | Posted on Wednesday 8 November 2023

2023 has been a big year for AI governance. Research Fellow, Dr Zoe Porter, explores three key reflections arising from these developments and what they can tell us about the future of safe AI.


We are living in landmark times for AI governance and diplomacy. There has been a recent surge in regulation and law-making. According to Stanford University’s 2023 AI Index report, an analysis of legislative records in 127 countries found that 37 laws mentioning ‘artificial intelligence’ were passed in the past year.

Different jurisdictions are taking different approaches. The UK published its ‘pro-innovation’ AI White Paper in March, which aims to foster responsible AI innovation without imposing new legal duties on developers and operators. In contrast, the EU’s draft AI Act, which is set to be finalised by December, introduces legal requirements on the producers of high-risk AI systems to carry out conformity assessments before deployment and monitoring after deployment. In the US, President Biden has just issued an Executive Order on AI which sets out new standards on AI safety, security, privacy, equity, civil rights, consumer rights and worker rights. This is in addition to several major international debates on AI, including at the 2023 G7 Summit and the UK’s own global AI Safety Summit in early November.

At the Assuring Autonomy International Programme (AAIP), we take an interdisciplinary perspective on AI safety, bringing together engineers, computer scientists, physicists, psychologists, lawyers, philosophers and ethicists, as well as doctors and entrepreneurs. From that background, and in this breakthrough year for AI governance, here are three reflections on the direction, themes and gaps in the global debate on AI safety.

 

  1. Misuse of AI vs. AI’s inherent limitations

A prominent theme in recent global discussions on AI safety is an overarching concern with the misuse of powerful AI models by malicious and non-state actors. The creative capabilities of widely available generative AI make it highly susceptible to misuse for exploitation, deepfakes, misinformation and the disruption of democratic processes. But the focus on risks from foreseeable misuse and abuse of AI, however serious, should not take the spotlight off the fact that the technology has inherent limitations which affect the safety of its intended uses.

Highly capable machine learning models display impressive performance, but they are also prone to inaccuracy and ‘hallucinations’. Computer vision systems, for example, can ‘see’ or ‘interpret’ objects that are not there. Recent attention on AI safety has focused on powerful new generative AI models, which are built on cutting-edge transformer architectures with billions of parameters and trained to generate new content such as text, images, audio, video and code. These models can give factually wrong but plausible and authoritatively stated responses to user prompts. This is a serious concern when such systems are relied upon in safety-critical decision-making.

  2. Long-term existential loss of control vs. near-term gradual erosions of control

Another theme on the agenda at the AI Safety Summit was human control over AI, and in particular whether very advanced AI could evade human control and pose an existential threat to humanity. Raising this speculative question fills a gap in the current patchwork of AI governance, but there are more urgent, if less headline-grabbing, issues of control to be addressed.

One common approach to retaining human control over AI is to emphasise that humans must always make the final safety-critical decisions. In the NHS, for example, this approach looks set to become the norm, with guidelines stating that the clinician-in-charge must have the final say on patient treatment when AI has been involved in decision-making.

This seemingly reassuring position demands careful handling. While it may be relatively straightforward for a human decision-maker to ensure that obviously incorrect AI recommendations are not followed, disagreement between the human and the AI system in borderline cases promises to be more problematic. In such cases, should the clinician act on their own judgement or on the AI’s? Early indications in the healthcare domain suggest that accepting an AI system’s recommendation, even when it is nonstandard, may shield the clinician from liability. In a multidisciplinary project with the Bradford Institute for Health Research, we are exploring the ethical and legal implications of different models of clinician-AI-patient interaction: how clinicians are influenced by the involvement of AI in the consulting room or surgery, and the impact on clinician autonomy and liability, as well as on sensitivity to patient preferences.

  3. Tolerable risk vs. equitable safety

It is important to be clear that the risks of harm and wrongs arising from the real-world use of AI systems will never be completely eliminated. AI safety, much like existing safety processes in other domains, is in part a question of what constitutes tolerable or acceptable risk. What severities of risk from AI are strictly impermissible, whatever the context, and what are the tolerance thresholds for the severity and likelihood of different adverse consequences? These questions are difficult to answer. Standards bodies, such as ISO and IEEE, publish technical standards that define benchmarks and metrics for AI systems, and it is also encouraging that the question of risk thresholds was on the agenda at the AI Safety Summit. Continued international cooperation on defining tolerable or acceptable risk is essential for progress in AI safety.

But to ensure that AI truly is beneficial, we need to move on from thinking about tolerable risk to thinking about equitable safety. Rather than assuming that, so long as the risks are managed, the benefits will inevitably follow, and follow for all, ‘equitable safety’ takes a different perspective. It takes the view that, to justify the risks from the deployment of AI, the reasons for deploying the technology should be compelling in the first place, the distribution of risk and benefit should be equitable so that risk-bearers are not endangered for the advantage of others, and people should have some control over AI-mediated decision-making that affects them. In a paper published this year, we set out a framework for reasoning about the ethically acceptable use and deployment of AI systems, based on this central notion of equity.

As we head towards the end of 2023, it is clear that this year has seen a raft of significant developments around the world in the regulation and governance of AI. This progress is to be welcomed. AAIP researchers will continue to work on the interdisciplinary elements of AI safety, such as the issues raised in these three reflections, and to advise policy makers and regulators on turning the ideals of AI governance into well-formed theories, tools and implementable methodologies.