Staff Spotlight - Professor John McDermid
Posted on Monday 15 December 2025
1. Can you tell us more about your role and your journey to the leadership of the Centre?
The journey was long – I first worked on safety-critical software around 40 years ago, initially applying static analysis to a 5,000-line program and, at around the same time, investigating the fatal failure of a computer-controlled medical device. Working on safety-critical software turned out to be a good strategic choice for a research career: systems kept becoming more complex and gaining greater authority, ultimately reaching the early stages of autonomy we see today, so the field has always posed new challenges. As for my role, it’s multifarious:
- It’s outward-facing – seeking to shape national and international policies – currently through the Department for Transport on autonomous vehicles and the International Maritime Organisation on autonomous vessels. It’s also about making connections; seeking collaborators and funding, especially large-scale, strategic grants such as those from the Lloyd’s Register Foundation that support the Centre.
- Internally, it’s about supporting my colleagues and research students to develop and grow their own capabilities – for example, helping them become independent as researchers or progress to more permanent academic posts. I also maintain oversight of strategy, top-down – trying to make sure we ask the right questions – although I recognise that the best ideas and innovations often come bottom-up, from those immersed in the details.
- More generally it’s to sustain and promote our distinctive approach to safety and assurance, seeing safety as an enabler of innovation, and developing principled but pragmatic approaches to complex real-world problems. I believe it's this ethos that motivates Lloyd’s Register Foundation, JLR, DNV and others to work with us.
2. You’ve been involved in the safety of complex systems for over four decades. What do you think have been the three most significant changes in this field?
Only three?? It has really been a process of continual change. For example, the use of safety (or assurance) cases has become much more widespread throughout the period, and our Goal Structuring Notation (GSN) is now used in many parts of the world and in many industries. But as you ask, I would cite three step changes over the last ten years:
- Autonomy, where systems can operate independently of human control, and it is difficult to demonstrate that systems can operate safely in all the situations (edge cases) they might encounter – situations where traditionally we would rely on human ingenuity to respond. An example of this is the re-routing of air traffic over the Atlantic in response to 9/11, for which there was no pre-planned approach and no precedent.
- Use of AI, especially machine learning (ML) where the uncertainty and lack of transparency (opacity) need to be controlled or reasoned about explicitly, to assure systems which depend on ML for building world models or for decision-making.
- Emergent behaviour, specifically from systems-of-systems, e.g. 'swarms' of drones, where it is necessary to ensure and assure safety with only partial observability, hence imperfect situational awareness, and limited controllability.
Alongside these is a growing challenge in achieving and assuring safe and effective human-system interaction – not a step change, but something that shows the need for a much greater emphasis on human factors in our research.
3. You’ve acted as a government advisor, worked with regulators and advised global organisations on the safe adoption of autonomy and AI. What common threads exist, and how can we leverage this commonality to tackle issues around regulation and governance?
Everyone I’ve dealt with in government is striving to do a good job but in relation to technology that is, in almost all cases, far beyond their skills and experience. They are also faced with political pressures for “simple solutions” where none exist and are vastly under-resourced for the scale of challenge, e.g. the number and variety of systems, with which they must deal. I think we can best address this in multiple related ways:
- Develop domain-agnostic methods (which we are doing, e.g. SACE and ALMAS) and then collaborate with partners to adapt them for specific domains, using the right terminology and concepts (something we’ve started to do, e.g. in automotive, healthcare and maritime).
- Work in terms of principles – and thus make things “as simple as possible, but no simpler” (reportedly this adage is due to Einstein) as we are with approaches such as PRAISE.
- Seek conceptual clarity. A lot of activities I see are ineffective and/or inefficient due to failure to state and communicate things clearly (perhaps we should adopt Wittgenstein’s “What can be said at all can be said clearly” as a guiding principle?). I believe that the CfAA’s ability to provide clarity stems from our interdisciplinarity, not least the discipline of our philosophical colleagues.
- Help individuals and organisations to see where lessons can be learnt from other domains, e.g. from air traffic control for the remote operation of ships, and point out the perils of over-generalisation, e.g. that the concept of an operational design domain from automotive, whilst appealing, cannot be directly applied in the maritime sector.
4. What, in your view, is the role of the CfAA in addressing the emerging challenges around autonomous systems in society?
I believe we can act as convenors for discussions on critical topics and in important domains where we have the credibility and knowledge. This doesn’t mean we know everything – but that we know enough to be able to guide discussions between a wide range of stakeholders and to help the community see ways to address the most critical challenges. Also, with the group’s 40-year pedigree, we continue to be trusted by government, regulators and companies. So, what we can do for society is what we’re best at: research, and translating that research into useful advice and guidance for the community.
Doing this requires a degree of knowledge and expertise but also some humility. Ultimately, I believe we will achieve more by being the trusted, or honest, broker willing to learn from others rather than by asserting that we know the answer to everything (and hopefully that will set a good example to the wider community).
5. Finally, where can we find you outside the CfAA?
If not in the garden or gym, then reading or listening to music, where I have rather eclectic tastes. Being of that vintage, I quite enjoy 1960s and 1970s rock (Cream, Peter Green’s Fleetwood Mac, Pink Floyd, Yes), jazz (Erroll Garner, Snarky Puppy and many others), and classical (too many to mention). I used to try to read philosophy, but now I don’t have time to do more than dabble; instead I enjoy detective stories (Inspector Cao – a sort of Chinese Morse) and some other Asian authors, e.g. Haruki Murakami, as well as many British crime novelists, including PD James and Dorothy L Sayers. Should I ever retire, I’d like to find the time to learn more about making cocktails – or perhaps build a robot bartender as a retirement project!