
Why safe AI needs diversity

Ibrahim Habli, Director of the UKRI Centre for Doctoral Training in Safe AI Systems (SAINTS), explains why a diverse research community is essential for safe AI. 

In AI safety, we talk a lot about whether a system is 'acceptably safe'. That phrase is deliberate. Safety is never absolute, and so we're always making a judgement and projecting our own assumptions about what 'acceptable' looks like. That raises an uncomfortable question: acceptable to whom, and under what circumstances? It's that tension that sits at the heart of what we are trying to do at SAINTS (UKRI Centre for Doctoral Training in Safe AI Systems). And it is why, for us, equality, diversity and inclusion (EDI) is not a box-ticking exercise. It is a research imperative.

Why representation matters
AI learns patterns from the data it is trained on and from the metrics, nudges and models we choose. Many of those choices are socially and ethically sensitive. This means who we are matters: our backgrounds, disciplines, stories, experiences, preferences and constraints. Yet the current AI safety landscape suffers from poor representation across many characteristics, particularly ethnicity and gender. At SAINTS, our vision from the outset has been to build a research community as diverse as the society it serves, because the people designing and assuring these systems should reflect the people affected by them.

Tackling the barriers
One of the ways we've addressed this is through recruitment. The challenges are well documented: data from the Higher Education Statistics Agency shows that in 2024/25 only 29% of postgraduate students enrolling in computer science identified as female. Closer to home, analysis of doctoral applications at the University of York shows that candidates from ethnic minority backgrounds are significantly less likely to receive offers than White British applicants. SAINTS worked with the Yorkshire Consortium for Equity in Doctoral Education (YCEDE) to tackle both. As well as running webinars to support students through the application process, we use blind recruitment, redacting applicants' names, pronouns and previous institutions so that candidates remain anonymous until interview.

Progress so far
The results have been encouraging. In our first cohort we offered 11 places, five of which went to female students. In our second, eight out of 13 offers were to female students and crucially, this is balanced across technical and non-technical disciplines. We have also ring-fenced at least one place each year for a Black British candidate who meets our threshold of academic quality, drawing on the success of this approach in the White Rose College of Arts & Humanities and the White Rose Social Sciences Doctoral Training Programme. By investing support at the early stages of the recruitment process we’re also attracting students who have come through industry and apprentice schemes, rather than a traditional academic route. 

Leading by example

Finally, we cannot address the problem of representation if we ourselves are not representative. Across our leadership team we have strived for at least 40% female representation and we come from a range of backgrounds. Whilst some of us come from traditional academia, others have spent time in industry, or are still working as active practitioners, such as Philip Morgan, our law lead, who sits part-time as a judge, and Cynthia Iglesias, our EDI lead, who has advised the National Institute for Health and Care Excellence (NICE) on evidence policy for digital health technologies.

Diversity of perspective doesn't stop at gender or ethnicity. The centre itself grew out of five University departments coming together and that disciplinary breadth is central to our vision. When one of our researchers, Prenika Anand, set out to explore freedom from psychological harm in the context of human health, the centre was able to put together a supervisory team made up of an AI academic, a mental health professor and a health scientist. If you want to study complex questions at the intersection of AI, psychology and human health, you need a truly interdisciplinary team. Another student, Kim Littler, expressed interest in industry-informed research, and her supervisory team was expanded to include an industrial supervisor from our partner organisation, Jaguar Land Rover.

Why diversity builds trust

There is a practical dimension to all of this. When we are asked why our findings should carry weight, or why our advice should be trusted, the answer lies in part in who is doing the work. A research community that reflects the diversity of the people it serves is better placed to ask the right questions, gather the right evidence, and make the case for safer AI with credibility and authority. That is what we are building at SAINTS.

Ibrahim Habli is the Director of the UKRI Centre for Doctoral Training in Safe AI Systems (SAINTS) and a Professor of Safety-Critical Systems at the University of York. SAINTS is the UK's only multidisciplinary PhD programme focused solely on the safety of artificial intelligence.