
Blog

Discover our latest views and opinions on everything related to assuring AI, robotics, and autonomous systems.

If you'd like to write a blog post for us, please contact Sara Thornhurst, Communication and Impact Manager: sara.thornhurst@york.ac.uk

Our latest posts from our team of academics, researchers and partners are below.

News

25 July 2025

In the first of this two-part blog series, Dr Victoria Hodge and Dr Philippa Ryan explore the use of autonomous mobile robots for solar panel inspection, and discuss the importance of a dedicated robotics platform to enable reliable monitoring for expanding solar farms.

News

24 June 2025

The decommissioning of nuclear facilities presents a number of growing challenges and risks. In this blog, Research Associates Dr Calum Imrie and Dr Ioannis Stefanakos review a recent project which tackled one of these key challenges.

News

23 June 2025

As both our Research and Innovation Fellow and Business Development and Delivery Manager at the CfAA, Dr John Molloy is heavily involved in working with our industry partners, particularly in the maritime space. In this spotlight he shares how his understanding of engineering challenges is helping shape our approach to safety assurance.

News

28 May 2025

As our Director of Strategic Programmes, Dr Ana MacIntosh is responsible for developing and overseeing a portfolio of major programmes under the umbrella of the CfAA. In this spotlight she shares how her role helps shape the current and future work of the CfAA in this prominent and fast-moving field.

News

25 April 2025

Research Associate, Dr Sepeedeh Shahbeigi, works at the intersection of AI and safety, focusing on how we can bridge the gap between theoretical safety concepts and practical implementation in real-world autonomous systems.

News

26 March 2025

In the last 12 months, the CfAA’s work in the maritime sector has revealed a significant shift in how the industry is approaching autonomy and in its confidence in adopting such technologies safely. In his latest blog, Dr John Molloy, CfAA’s Business Development and Delivery Manager, explores what the next phase of adoption may look like.

News

24 March 2025

Dr Yan Jia is a lecturer in AI Safety and part of the CfAA research team. In our latest staff spotlight, she explains more about her work in safety assurance and how she brings together her niche background in safety-critical healthcare applications to advance this important area of research.

News

19 February 2025

Dr Nathan Hughes’ work on understanding how people make decisions with technology is crucial for building upon the research into human-centred autonomy undertaken at the CfAA. Learn why their investigations into how and why humans make decisions are so important in designing safe AI systems across distinct domains.

News

17 February 2025

Have you ever paused to consider the sheer number of decisions made every time we get behind the wheel? What drives these decisions? How do we adapt to diverse conditions? In this blog PhD student, Hasan Bin Firoz, explores why decision making for autonomous driving remains an unsolved challenge.

News

20 January 2025

Early career researcher Dr Ioannis Stefanakos works on developing techniques and methodologies that enable the safe and effective use of autonomous and self-adaptive systems deployed in a range of applied settings, including hospital emergency departments and domestic environments. He shares some insightful tips for PhD students and why his work provides a deep sense of fulfilment.

News

18 December 2024

The latest findings from the Future of Life Institute’s review into the safety practices of six leading AI developers showed low scores across the board. Our Director, Professor John McDermid explains why this result is not unexpected and what we need to consider to enable the safe deployment of Frontier AI models.

News

25 November 2024

Lloyd’s Register Foundation Senior Research Fellow Dr Ryan reflects on how her fellowship allows her to explore different safety applications, and the ever-important and evolving relationship between the safety world and the AI landscape.

News

25 November 2024

Forests are increasingly important as we attempt to mitigate the effects of climate change and prevent further loss of habitat for animals and plants across the globe.

News

22 October 2024

Dr Hodge is a Centre Research Fellow with an accomplished and multifaceted career. Here, she tells us how this has proven to be a valuable asset in her work, and how multidisciplinary collaboration with a broad range of people continuously provides her with fresh perspectives for both her research and ways of working.

News

27 September 2024

Professor Sujan joins the Centre as Chair in Safety Science. He tells us what first inspired him to work in Human Factors and safety science, and the most important challenges for AI technologies and their use – now and in the future.

News

4 September 2024

The integration of Artificial Intelligence (AI) into the healthcare sector affects everyone from patients and clinicians, to regulators and healthcare providers. Understanding the context and diverse risks of such an integration is critical to the long-term safety of patients and clinicians, and to the success of AI.

News

20 August 2024

At the recent Festival of Ideas held in York, our academics and researchers talked to over 120 members of the public about current advances in AI and explored the hype and reality of AI technologies. The Festival of Ideas is a fantastic way for us to engage directly with the public and share the important research our experts do into the safety assurance of AI and machine learning-enabled systems.

News

4 July 2024

Our Chair of Systems Safety and Business Lead for the Centre for Assuring Autonomy shares a little about his background, his hopes for future opportunities, and the top three things companies need to know about safe AI.

News

6 June 2024

For early career researchers (ECRs), attending conferences and seminars is a great way to network and learn about different perspectives and progress around their specific area of research. In this blog, Research Associate Gricel Vazquez shares her experience of a recent seminar.

News

13 May 2024

In the final part of his three-part blog series, Research and Innovation Fellow Dr Kester Clegg discovers that the problem is essentially that GPT-4 is ignoring our system instructions.

News

7 May 2024

In the second instalment of his three-part blog series, Research and Innovation Fellow Dr Kester Clegg delves deeper.

News

30 April 2024

In the first of this three-part blog series, Research and Innovation Fellow Dr Kester Clegg explores the ability of Large Language Models (LLMs) to ‘explain’ complex texts and poses the question of whether their encoded knowledge is sufficient to reason about system failures in a similar way to human analysts.

News

15 March 2024

In this blog, Programme Fellow, Tarek Nakkach, looks at changes to the regulatory landscape in the Middle East since 2021 and how developments in AI technology are impacting regulation and policy. 

News

31 January 2024

Multidisciplinary working has always been a core pillar of the AAIP’s research efforts. In this blog Dr Jo Iacovides from the University of York and Preetam Heeramun from NATS discuss the benefits of multidisciplinary projects and how they can open up new opportunities in safety-critical domains.

News

24 January 2024

We recently welcomed Vladislav Nenchev from BMW as an AAIP Programme Fellow. His fellowship centres on pushing the boundaries of automatic verification methods for the safety and reliability of Automated and Autonomous Driving Systems. In this technical blog he sets out the hurdles and possible solutions for safe autonomous driving.

News

15 December 2023

In November we hosted the final presentations of our hackathon challenge alongside Oxford Robotics Institute as part of our work to safely assure the use of autonomous aerial vehicles in mines. This is the culmination of a series of events and workshops held throughout 2023 to establish how autonomous systems can assist human endeavour and reduce risk to life in challenging and dangerous environments.

News

8 November 2023

2023 has been a big year for AI governance. Research Fellow, Dr Zoe Porter, explores three key reflections arising from these developments and what they can tell us about the future of safe AI.

News

5 October 2023

Dr Richard Hawkins, Senior Lecturer in Computer Science, explores how assuring the use of AI may help in the fight against wildfires.

News

10 August 2022

How multidisciplinary collaboration is improving the safety of an AI-based clinical decision support system

News

25 July 2022

In the final of three blog posts, AAIP Fellow Simon Smith highlights some of the questions we need to consider to advance regulatory frameworks for the safe introduction of RAS, and some of the research being undertaken to answer them.

News

8 June 2022

In the second of three blog posts, AAIP Fellow Simon Smith identifies six emerging trends and how they signpost the direction that regulation of RAS should be taking.

News

18 May 2022

In the first of three blog posts, AAIP Fellow Simon Smith considers how regulatory frameworks could evolve to drive innovation and enable the safety assurance of systems that continuously adapt to their environments.

News

15 July 2021

Professor Simon Burton concludes his series of blog posts by considering automated driving as a complex system and proposing recommendations for the automotive industry.

News

3 February 2021

Moving towards safe autonomous systems

News

3 February 2021

Data that reflects the intended functionality

News

25 January 2021

This is my final article in this series, and a great way to start the New Year! This article, in a way, is the practical conclusion of my research, and so I will discuss my recommendations for law, policy and ethics for the United Arab Emirates (UAE).

News

10 December 2020

The role of human factors in the safe design and use of AI in healthcare

News

2 December 2020

Products, systems and organisations are increasingly dependent on data. In today’s Data-Centric Systems (DCS), data is no longer inert and passive. Its many active roles demand that data is treated as a separate system component.

News

16 November 2020

I will compare the UAE liability regime to others, in particular the European Union regime and its approach to the liability of autonomous systems.

News

21 October 2020

In this second post, I discuss the remedies available to a person who has suffered harm by an autonomous system.

News

21 September 2020

The main point of law is the following: Who is liable when an autonomous system causes injury or death to a person or damage to property?

News

11 August 2020

The term “AI safety” means different things to different people. Alongside the general community of AI and ML researchers and engineers, there are two different research communities working on AI safety.

News

30 June 2020

This is the last in a series of blog posts exploring the safety assurance of highly automated driving to accompany a new AAIP report which is free to download on the website.

News

29 June 2020

This is the fifth in a series of blog posts exploring the safety assurance of highly automated driving to accompany a new AAIP report which is free to download on the website.

News

22 June 2020

This is the fourth in a series of blog posts exploring the safety assurance of highly automated driving to accompany a new AAIP report which is free to download on the website.

News

19 June 2020

The idea that the driver is integral to vehicle control is fundamental to the automotive functional safety risk model. So what happens when we introduce autonomy?

News

15 June 2020

This is the third in a series of blog posts exploring the safety assurance of highly automated driving to accompany a new AAIP report which is free to download on the website.

News

8 June 2020

This is the second in a series of blog posts exploring the safety assurance of highly automated driving to accompany a new AAIP report which is free to download on the website.

News

1 June 2020

This is the first in a series of blog posts exploring the safety assurance of highly automated driving to accompany a new AAIP report which is free to download on the website.

News

7 April 2020

The societal benefits of autonomous vehicles (AVs) — those that operate fully without a driver — have never been clearer than they are right now. The delivery of essential items, such as medical supplies and food, with limited human contact, would serve us well in the current Coronavirus climate. While the development of AV performance is key, we must not neglect the assurance of their safety.

News

1 April 2020

Consider an intensive care unit (ICU) where clinicians are treating patients for sepsis. They must review multiple informational inputs (patient factors, disease stage, bacteria, and other influences) in their diagnosis and treatment. If the patient is treated incorrectly who do you hold responsible? The clinician? The hospital? What if an aspect of the treatment was undertaken by a system using Artificial Intelligence (AI)? Who (or what) is responsible then?

News

8 November 2019

Boeing may never have conceived the Boeing 737’s Manoeuvring Characteristics Augmentation System as an autonomous system. In effect, however, it was autonomous: it took decisions about stall prevention without involving the pilots.

News

17 September 2019

Human control of artificial intelligence (AI) and autonomous systems (AS) is always possible — but the questions are: what form of control is possible, and how is control assured?

News

30 August 2019

Safety and security are two fundamental properties for achieving trustworthy cyber-physical systems (CPS). It is generally agreed that the boundary between the two has become less clear, but the relationship between them is still inadequately understood.

News

8 March 2019

The recent news that Uber isn’t criminally liable for the fatal crash in Tempe raises interesting questions. Read our blog post of two halves – ethical and legal.

News

27 February 2019

Considering the intent of requirements in machine learning.

News

20 November 2018

Does machine learning give us super-human powers when it comes to perception in autonomous driving?

News

26 October 2018

Do advanced driver assistance systems (ADAS) that perform too well become dangerous by giving us a false sense of security?