Responsible AI is not an answer but a question

The 2024 Andrew J Webster Lecture.

Speaker: Professor Jack Stilgoe, University College London.

In recent months, you would be hard pressed not to have had a conversation with someone about responsible AI (RAI). Since STIS has always been interested in responsibility, it was great to see the focus of this lecture: “The organised irresponsibility of artificial intelligence”, delivered by Professor Jack Stilgoe, University College London, on Thursday 5 September 2024 at Tempest Anderson Hall, York.

Jack Stilgoe paid respect to Andrew Webster’s long-lasting impact in the field of STS and started his lecture in a sensible place - asking the audience whether they are excited or fearful about AI. Most responded with ‘somewhere in the middle - a little excited, a little uncertain’. And this is representative of so much of the debate concerning AI in society. We are led to believe that AI may save us or, at the same time, be the end of us. Such dominant tropes are of course fuelled by the hype we read and see in mainstream media and in science fiction. But the reality is we are having to make trade-offs, however we feel.

Such is the topic of Jack’s talk. Jack is front and centre in the debates concerning responsible AI, and for good reason. He has a wealth of experience in STS and can talk the language of policy too, having spent many years at Demos. Jack highlights the Responsible AI collective, which connects researchers working across disciplines in this area, and suggests that responsible AI is a bit of a problematic term - a sales pitch, even. This is especially so in light of how it has been taken up by Silicon Valley and the tech industry, when we do not know the intentions behind it. Whatever their intentions, the responsible AI team sees responsible AI not as an answer but as a question.

Jack provides a flavour of the issues concerning the RAI community: deepfakes, copyright, ownership, authorship, financial interests, blame, accountability, duty and many more. We are at a moment with generative AI, he says, where society is forced to make sense of what these systems do, evoking a sense of assumed responsibility. But the patterns that underlie these events, Jack says, are symptoms - products of a system of organised irresponsibility, a deliberate arrangement in which accountability is absent. This is a warning that democracy is at risk, because AI is a sociopolitical power concentrated in a small number of companies, with a kind of avoidance of responsibility in mind.

“With great power comes great responsibility.” I enjoy a lecture where Spider-Man is quoted. Jack goes on to suggest that science and technology offer plenty of examples of scientists working to provide evidence or fixes for society’s problems, and cites the example of Oppenheimer - the story of a scientist who personifies the power and consequences of science. The relationship between scientists and society is at the heart of Jack’s lecture, which forces us to think about how far the role of the scientist is to shape and influence society. Since Oppenheimer, “we worry and expect more from science and new technologies”, Jack says. This analogy is helpful in drawing out the collective and structural issues behind science and technology. Jack introduces the perils, processes and purposes of AI, suggesting that these are neglected when we should be clearer on the risks of AI and the intentions behind its purposes - we avoid them at our peril.

Jack gives a really clear and stark overview of the case of the Uber self-driving car. The example reveals the many-faceted ways in which we look at responsibility and accountability. Treating this as something we can learn from is key - looking at risks and how they are imagined and mobilised. Jack introduces the developments taking place in governments worldwide on AI safety and the ways in which we can limit AI. Some talk about human alignment, Jack says - a set of agreed human values - yet we know these may be in conflict. This echoes Kate Crawford’s work on AI and her warnings about how a fascination with inevitability may distract from the ways AI is in fact human-made. Many envision building Artificial General Intelligence (AGI), Jack says, but so often without figuring out whether it’s needed or what it’s even for. Do we even know what AGI really means?

AI will underpin everything, Jack says. One thing we could and should do is to bring the public into discussions about AI and the future of this technology.

Write-up by Dr Jennifer Chubb

Watch Jack's lecture in full