
Staff Spotlight: Dr Sinem Getir-Yaman

News

Posted on Wednesday 26 November 2025

This month we're spotlighting Dr Sinem Getir-Yaman, a CfAA researcher whose work on safety methodologies is helping to address the question of how autonomous systems can operate safely in the real world. In this Q&A, Dr Getir-Yaman introduces her latest project, focused on the safety and security of advanced AI, and explains the need for closer industry collaboration.

1. Can you tell us about your areas of research? 

My research looks at how we can make artificial intelligence (AI) systems safe, reliable and trustworthy, especially when they are used in situations where mistakes can have real consequences. This includes AI used in autonomous robots, decision-making software, and large language models which now appear in many services and products.

I combine technical safety with human-centred considerations, such as legal expectations, ethical constraints and social values. I use formal methods—such as probabilistic verification—and software engineering techniques to analyse whether AI systems behave as expected, even when they face uncertainty or changing situations.

I also work on understanding and preventing possible failures, especially those that might happen because systems learned from incomplete or noisy data.

Overall, I aim to help ensure that advanced AI systems can be used responsibly and confidently in the real world.

2. What led you to start work in this area?

My interest began during my PhD in Berlin, where I studied how to make software-intensive systems more dependable. As AI technologies grew more powerful and widespread, I realised we were moving into a new era where traditional safety methods were no longer enough.

AI systems learn from data, adapt to new situations and interact with people and their environment in complex ways. This raised a question that has guided my work ever since:

How can we ensure that increasingly autonomous, data-driven systems behave safely and responsibly in the real world?

During my collaborations with NASA Ames, and later at the University of York, it became clear that answering this question requires bringing together ideas from formal methods, machine learning, software engineering and socio-technical understanding. This combination is essential for dealing with uncertainty, human-AI interaction and the wider societal impacts of AI.

These experiences motivated me to focus on developing new assurance methods that help us understand, measure and ultimately trust the behaviour of advanced AI systems in realistic settings.

3. You recently started a new project at the CfAA. Can you tell us more about it?

The project is funded by UKRI and called the Safety and Security of AI Network (SSAIN). It focuses on the safety and security of advanced AI systems and is supervised by CfAA Director, Professor John McDermid. One of our main goals at the CfAA is to help organisations make informed decisions when deploying AI, by giving them clearer and more reliable evidence about how safe the system is and under what conditions it can be trusted. SSAIN is one of the ways we can help achieve this.

In my work on SSAIN I aim to:

  • Develop conceptual and practical methods to assess the safety and security of AI systems.
  • Understand how different types of risks—such as harmful behaviour, security vulnerabilities or incorrect decisions—can interact with one another.
  • Study emerging AI technologies, like large language models (LLMs) and agentic AI, and identify where new kinds of failures or threats could arise.

A major part of this work involves close collaboration with colleagues across the CfAA, especially those working in areas such as maritime autonomy, robotics and transport systems. These domains present unique real-world challenges—such as operating in unpredictable environments or interacting with humans and other vessels—which help us test and refine assurance techniques in meaningful, practical settings.

4. What do you foresee as major challenges in the areas of safety and security of AI?

From my research and from feedback across different industry sectors, I think there are four primary challenges affecting the safety and security of AI and autonomous systems. These are:

  • AI systems change over time unpredictably
    Unlike traditional software, modern AI systems can learn new patterns or update their behaviour. Ensuring they remain safe and secure as they evolve is very difficult. Operationalising requirements for AI-enabled systems—especially those related to human values, safety and security—is challenging, and assessing whether systems are fair, responsible or ethical is not always straightforward.
  • Large AI models can behave in unexpected ways
    Systems like large language models are incredibly powerful, but they can sometimes produce surprising or incorrect behaviour. They can also be vulnerable to security threats, which may lead to serious consequences. Understanding and controlling these behaviours remains an ongoing research challenge.
  • Safety, security and societal concerns are connected
    Addressing one issue can create risks elsewhere. For example, increasing system transparency can sometimes introduce security concerns. Balancing these competing needs requires new approaches.
  • AI systems are becoming more complex and more widely used
    As AI becomes deeply integrated into everyday life—from healthcare to transportation to education—we need assurance methods that scale with this growing complexity.

In short, we need safety and security methods that evolve just as quickly as AI itself.

5. Finally, where can we find you outside of the CfAA?

When I’m not at the CfAA, I’m often collaborating with colleagues in the Department of Computer Science or working with project partners across the UK and internationally.

Outside of work, I enjoy spending time with my family and making beautiful memories with my lovely 1.5-year-old daughter, Ada. I also love exploring York’s green spaces and discovering new coffee spots around the city. When the conditions allow, I enjoy sailing—it helps me disconnect and recharge!