Ibrahim Habli (Professor of Safety Critical Systems in the Department of Computer Science, Research Director of the Centre for Assuring Autonomy and Director of the UKRI AI SAINTS Centre for Doctoral Training at the University of York) on his hopes for the future of intelligent computer systems and on never giving up on your vision.

Can you tell us a little about your area of research?

Absolutely. I’m interested in the design and assurance of safety-critical systems. And by that, I mean computer systems and software-based systems whose failure could lead to harm.

These systems are in everything from medical devices and aeroplanes to power plants and satellites, and if they were to misbehave or fail, the consequences could be significant. This might mean the loss of life, damage to the environment or damage to property. Or sometimes societal or moral harms that we shouldn’t tolerate, like bias and discrimination.

How did you end up working on the safety of computer systems?

By accident! I wanted to become an economist. I wanted to make money and think about global problems in the complex world around us. But I was late to apply for my undergraduate degree, so I ended up choosing computer science.

The first course was about programming and we were given a problem to address by writing a very short computer program. So I was looking at the problem but had no idea how to use the computer. I didn’t know anything about computers: my interest was in economic models and in people and how they behave. 

But I fell in love with programming, thanks to a highly engaging lecture by an outstanding professor. You can create instructions and have a dialogue with the computer. It was just brilliant. All you need is a machine. That’s it. The rest is just the limit of your imagination.

While I was at university, I ended up mingling with philosophers and people in humanities and literature. And I was always intrigued by the link between computing, the real world, complex societal problems and things like this. Because I’m also a practical person and I want to solve problems, I ended up in software engineering, first in industry and then back in academia here at York.

I came to the safety side of computer systems because of York’s expertise and history in the field. The appeal is a job where I get to do computer science, software engineering and safety and also consider the relevant ethical, social and legal issues. It’s the perfect job for me.

We talk a lot about an interdisciplinary approach in research, but for you it sounds like that approach has been with you since you were an undergrad?

Yes, absolutely. Because I’m interested in solving problems but I’ve always believed that you can’t do that in a silo. And the UKRI SAINTS Centre for Doctoral Training in AI Safety (CDT) is the culmination of my approach. 

I had the vision for what has become the SAINTS CDT more than a decade ago. I wanted to establish a multidisciplinary research centre where people from different disciplines come together to work on the safe and ethical deployment of intelligent systems. But I was just starting my career and no one listened.

But it’s what I’ve worked towards, and I ignored the people who told me it would be hard. I stuck to what I thought was needed. And now we have SAINTS. And we haven’t compromised on who we are. From the start, I said: either we do it our way and reflect who we are, or it won’t work.

You’ve already touched on this, on how you’ve been led by what you’re passionate about, but can you tell us more about what you love about your research and your work?

Safety is an open-loop problem: you rarely achieve closure. In other words, unlike other problems, where you prove certain properties and that’s that, in safety that kind of closure is harder to achieve. So the dialogue is ongoing, because of the inherent uncertainty in safety.

That uncertainty is in our knowledge about the world - a world that is constantly changing. And therefore you have to be on the ball and keep reasoning and updating your belief about what's going on. 

This means we have to understand the systems, as well as the people who use them. 

The other thing that’s exciting about safety is that because it needs to be interdisciplinary, I have no option but to collaborate with others. The idea that you could leave me alone for two years and I'll sort it out? It doesn't work like that. 

The final thing is the way we do safety at the University of York. We make sure everyone who should be there is there. The engineer, the computer scientist, the ethicist, the lawyer, the philosopher, the machine learning specialist, the safety expert. We’re all here together. 

What impact do you want your research to have?

The very best result of my research, my approach, is for people to get the benefit of technology without its risk; for nothing bad to happen. And this is why safety is very hard to measure - because you're measuring the absence of negative impacts. 

This brings us back to safety’s open loop. But we can flip it on its side and say: let’s try to come up with something tangible. A tangible impact might be that key services like the NHS adopt some of our techniques to develop their safety evidence, and that they have a more informed debate about it using publicly available safety cases - something our researchers have advocated for and our partners have pushed for.

How hopeful are you about the future of your area of research?

Before we secured the funding for the SAINTS CDT and the approach we’re taking there, I was concerned. We’ve got a shortage of professionals who are active in the area of safe AI and intelligent computer systems. Plenty of people work on the development of the technology and the next best algorithm, but not on the safety.

My colleagues and I invested a lot of time in SAINTS because we could have spent the next five years developing the next big safety methodology, but if there are no people to apply it, then what’s the point?

And that's why I'm now more hopeful. We’re giving SAINTS PhD students and researchers a well-rounded understanding of safety, the communication of safety and the communication of risk.

And here at York, we’re hopeful: it’s something to do with the place. As a university, York has a commitment to the public good - and safety is a public good. The collegial, collaborative way we do safety in York is essential, but it’s also the University itself: social justice, doing good together. It’s as if it’s in the air in York.
