
Our team
Academic team
John McDermid has been Professor of Software Engineering at the University of York since 1987 and twice served as Head of Department. He founded the High Integrity Systems Engineering (HISE) research group, and has researched, taught and consulted on the safety of complex computer-based systems internationally over the last 30 years. He originated the Goal Structuring Notation (GSN), now widely used for structuring safety cases, and has been involved in a number of projects adapting classical safety methods for computer systems and software. He also helped develop a number of safety standards, including CAP 670 SW01 used in air traffic management, and DS 00-56 and 00-55 used in defence.
Professor McDermid’s research interests include the safety of robotics and autonomous systems and he has worked in a number of domains, particularly in aerospace. He continues to work on enhancing software engineering and safety practices in civil aerospace, for example through the SECT-AIR and ENCASE projects funded through the Aerospace Technology Institute (ATI). He is also involved in a range of projects in the maritime domain, including work on submarine systems. He has a long-standing collaboration on railway signalling with Beijing Jiaotong University, and collaborates with a number of other research, industrial and government organisations world-wide.
He was Head of the Department of Computer Science at the University of York from 2006 to 2012 and was elected a Fellow of the Royal Academy of Engineering in 2002. He was awarded an OBE in 2010.
Visit John McDermid's profile on the York Research Database to see publications, projects, collaborators, related work and more.
Dr Rob Alexander is a Senior Lecturer in the High Integrity Systems Engineering (HISE) group in the Department of Computer Science at the University of York. His main research focus is automated testing, with a particular interest in the safety and security validation of autonomous robots. He is currently supervising research projects on mutation testing in model-driven engineering, justifying the dependability of safety decision support systems, and automated robot testing using simulation and metaheuristic search. He has published over 50 papers on these and related topics and is currently chairing the Safety-Critical Systems Club’s working group on the safety of autonomous systems.
Radu Calinescu is a Professor of Computer Science at the University of York. He is the Safety of AI theme lead for AAIP, principal investigator (PI) on the UKRI Trustworthy Autonomous Systems Node in Resilience, and the lead of the Trustworthy Adaptive and Autonomous Systems and Processes (TASP) Research Team. He holds a DPhil in Computation from the University of Oxford, and was awarded a British Computer Society Distinguished Dissertation Award for his doctoral research. His research team is undertaking leading research on the use of mathematical modelling and analysis to improve the safety and performance of self-adaptive systems and processes. He has published over 120 research papers, has been a PI on projects funded by £1.75M in grants from Dstl, the Lloyd’s Register Foundation, EPSRC and the Wellcome Trust, and led industrial R&D projects as the Senior Development Manager and Technical Architect of an Oxford University spin-off software company. He has chaired or served on the programme committees of over 80 international conferences and workshops, and is a member of the editorial board of Springer’s Computing journal, covering the area of autonomic, adaptive and dependable computing.
Ibrahim Habli is Deputy Head (Research) of the Department of Computer Science at the University of York. His research interests are in the design and assurance of safety-critical systems, with a particular focus on intelligent systems (e.g. autonomous and connected driving) and digital health (e.g. ePrescribing and self-management apps). In 2015, he was awarded a Royal Academy of Engineering Industrial Fellowship, through which he collaborated with the NHS on evidence-based means for assuring the safety of digital health systems.
Ibrahim’s work is highly interdisciplinary, with active collaborative links with clinicians, health scientists, economists and ethicists. His research is empirical and industry-informed, with collaborative projects with organisations such as Rolls-Royce, NASA, Jaguar Land Rover and NHS Digital. Ibrahim has led or co-led research programmes funded by the EPSRC, the Royal Academy of Engineering, the EU and industry. He has been a member of several international and national safety standardisation committees (e.g. DO-178C, MISRA and BSI).
Ibrahim mainly teaches on York’s postgraduate programmes in Safety-Critical Systems Engineering and supervises several PhD students (including practising engineers).
Richard Hawkins is a Senior Research Fellow in the Assuring Autonomy International Programme, where he is investigating the assurance and regulation of robotic and autonomous systems.
Prior to joining the Programme he was a Lecturer in the High Integrity Systems Engineering (HISE) research group at the University of York where his research focused on the safety and assurance of software systems and the development of assurance cases. He previously worked as a Software Safety Engineer for BAE Systems and as a safety adviser in the nuclear industry.
Mark is a dedicated educator with over 20 years of experience developing and delivering industrially oriented teaching, both on award-bearing programmes (PGCert, PGDip, MSc) and through non-credit-bearing continuing professional development for staff involved in system safety. He is the educational lead for the Assuring Autonomy International Programme and a member of the University of York's Teaching Committee.
He leads the IET/BCS accredited Safety-Critical Systems programmes at York. His dedication to education reaches beyond his own practice to developing the teaching and academic skills of colleagues at the University of York through involvement with professional development programmes, as both a supervisor and chair of a Board of Studies.
The education he provides is highly relevant to industry, based on research and consultancy activities as well as best industrial practice. He was the European editor of ARP 4754A, the civil aerospace recommended practice for safety. He is currently involved in creating competency guidelines for those who analyse and assess the impact of digital security issues in safety-critical systems. Data is a core enabling technology for autonomy, so he is also researching data safety issues.
Researchers
Kester Clegg is a research associate in the Assuring Autonomy International Programme, where he has a joint role teaching and researching the assurance of robotics and autonomous systems. His research interests include the use of explainable artificial intelligence (XAI) for the assurance of machine learning.
Prior to joining the AAIP, he was a research associate in the High Integrity Systems Engineering group at the University of York, working on a variety of safety-critical systems. His research career spans over 20 years. Often working closely with industrial partners such as Rolls-Royce and BAE Systems, Kester has worked on projects as diverse as model-based development of failure logic for civil aviation gas turbines, geofencing safe corridors in rapidly reconfigurable factories, and using machine search to expose risk in air traffic control sectors. Prior to his research career, he worked as a software developer on train timetabling software.
Kester obtained his PhD (‘Analogue Circuit Control through Gene Expression’, 2008) and an MSc in Computer Science from the University of York, following an MA in Linguistics at Durham University.
Jonathan is a Research Software Engineer in the Assuring Autonomy International Programme, with a focus on the safety and recoverability of autonomous mobile platforms. He joined the AAIP after completing his Master’s degree in Data Science and Artificial Intelligence at the University of Hull. He is particularly interested in how robotics and autonomous systems can be integrated into our daily lives to improve quality of life and ease the stress of living, particularly for elderly and disabled people.
Dr Victoria Hodge is a Senior Technical Specialist (Software) in the AAIP. A computer scientist and software developer, Victoria’s research and software development focuses on artificial intelligence (AI), machine learning, data mining and anomaly detection across a variety of domains.
Victoria has authored over 60 publications covering machine learning and AI, neural networks, anomaly detection, data analytics, data processing frameworks, and information retrieval.
She holds a PhD in Computer Science from the University of York.
She has worked in industry as a software architect for medical diagnostics products, and as a software developer on applications including condition monitoring in industrial environments, anomaly detection in large data sets, and deep reinforcement learning for robot navigation.
Nathan is a Research Associate in the Assuring Autonomy International Programme, with a research focus on the design of human-centred explainable AI for autonomous healthcare systems. He joined the AAIP from the Department of Computer Science, where his PhD focused on the video game user experience when both player and game share autonomy over what interactions occur. He is particularly interested in how users experience automated technology, how they implement their goals, and how users describe their interactions. He has previously applied these interests to the aviation sector, and holds an MPsych in Psychology from the University of York.
Calum Imrie is a Research Associate in the Assuring Autonomy International Programme, with a particular research focus on the safety and continuous assurance of robotics and autonomous systems, part of the wider aim of bridging the gap between academia and real-world applications.
Calum completed his PhD at the University of Edinburgh, within the Edinburgh Centre for Robotics. The primary focus of his thesis, titled ‘Robustness of Interaction Control in Robot Swarms’, was swarm robotics. During his studies, Calum also investigated low-level controllers, emotional robotics, reinforcement learning, and information-theoretic approaches to robot control.
Dr Yan Jia is a Research Fellow with the AAIP. Her work focuses on safety management in healthcare, unifying and integrating work in traditional safety engineering and in machine learning. Her initial work focused on medication management for patients at risk of atrial fibrillation. More recently she has worked on the treatment of sepsis, using reinforcement learning (RL) to recommend treatments and showing how established safety methods can inform the development of the RL elements of the system and support production of a safety case. Her recent work has enhanced some of the explainability methods for ML and investigated their role in safety assurance for healthcare.
Dr John Molloy joined the AAIP as a Research Fellow in 2020. His work includes the assurance of understanding and perception suites in autonomous systems, and the effects of environmental factors on sensor performance.
Previously, he spent ten years in the Electromagnetic Technologies Group at the National Physical Laboratory (NPL), where he was technical lead for autonomous systems and future communications. He delivered several projects on perception in vehicular and marine autonomous systems, including two reports on behalf of CCAV, and provided technical consultancy for external customers such as Lloyd’s Register, Osram GmbH and the USAF.
From 2015 to 2020 he was a visiting research scientist at the Hyper Terahertz Facility, University of Surrey, providing technical expertise on the development of high-power laser and electro-optic systems for remote sensing and quantum applications.
John received his EngD in Applied Photonics from Heriot-Watt University, Scotland (2016) for work on nonlinear optics and the development of solid-state THz Optical Parametric Oscillators (OPOs). He received his MSc in Photonics & Optoelectronic Devices from St Andrews & Heriot-Watt, Scotland (2010), and his BSc (Hons) in Physics & Astronomy from NUI Galway, Ireland (2006).
Previously in industry, he worked as a laser engineer, manufacturing diode-pumped solid-state lasers, and as an electronic engineer, developing high-speed high-capacity storage solutions for digital media.
Matt is a Research Fellow in the Assuring Autonomy International Programme, where he is currently researching software safety assurance in AI, and the safety of decision-making in autonomy. Prior to joining the AAIP, Matt worked for many years as an independent consultant in assuring the safety of software in complex, socio-technical systems.
Matt’s PhD research aims to identify, characterise, and eradicate the impediments to the adoption of recognised good practice in software safety assurance.
Zoe is a Research Associate in the Assuring Autonomy International Programme, with a research focus on the ethics and governance of autonomous systems. She joined the AAIP from the Department of Philosophy, where she did her PhD on moral responsibility for unforeseen harms caused by autonomous systems.
Zoe was previously Chief Speechwriter at the Equality and Human Rights Commission, and she has also worked for members of the UK and European Parliaments. Zoe holds an MA in Philosophy from the University of London, and a BA from the University of Oxford.

Ioannis Stefanakos is a Research Associate in the Assuring Autonomy International Programme and a member of the Trustworthy Adaptive and Autonomous Systems & Processes (TASP) Research Team. His main research focus is the modelling and verification of human-robot interactions in different settings (e.g. manufacturing, healthcare), to ensure safety as robot configuration and behaviour adapt to runtime changes.
Ioannis recently completed his PhD at the University of York with a thesis titled ‘Software Analysis and Refactoring Using Probabilistic Modelling and Performance Antipatterns’. His work focused on the analysis and refactoring of software systems that must comply with strict non-functional requirements.
Programme management and administration
Chrysle took up her role as Programme Administrator in December 2017. Previously, Chrysle was the Head of Department’s PA in Computer Science.
Sara manages the communication and media for the Assuring Autonomy International Programme. She joined the team in July 2023. A Chartered PR professional, Sara previously ran her own agency and has worked with start-ups and spin-outs in the renewables and robotics industries.
Research students
John is a part-time PhD research student. His research is within the area of the safety assurance challenges of machine learning systems, with a focus on the cost and reward mechanisms associated with reinforcement learning. John is a recent graduate of our MSc in Safety-Critical Systems Engineering and also holds an MEng (Hons) in Systems Engineering from Loughborough University. John is a chartered engineer and works full-time in industry as a product safety specialist and technical expert in software safety. He is also a member of several safety-related bodies, including MISRA C++ and the SCSC’s Data Safety Initiative and Safety of Autonomous Systems working groups.
Brendan Devlin-Hill is a PhD student in the AAIP programme. After graduating in Physics from UCL in 2020, he joined the programme to begin his PhD on robotic maintenance systems for nuclear fusion power plants. This is a highly interdisciplinary project, drawing on elements of computer science, plasma physics, and fusion engineering. Brendan's research interests include autonomous robotics, machine learning, space and plasma physics, and the practical realisation of domestic fusion power.
Jane Fenn is a part-time PhD student, funded by BAE Systems, with whom she has worked since 1989. She discovered a passion for safety engineering early in her career, which she has nurtured through academic study and participation in academic research. Jane works within the Air Sector, where she is responsible for safety engineering processes, staff competency and training. She works on several standards committees, including SAE G-34 – AI in Aviation. She is a Fellow of the IET and SaRS, a BAE Systems Global Engineering Fellow and a Licensed Technologist in Robotics and Autonomy Safety Engineering.
Josh is a postgraduate research student on the Assuring Autonomy International Programme. Starting his academic career as an artificial intelligence student at Northumbria University, Josh quickly gained an interest in the real-world application of AI. While working on personal projects, he began to study how machine learning applications could process real-time data rather than analyse back-end data. Upon finishing his undergraduate degree, Josh sought opportunities to further explore machine learning, which led him to the AAIP in York, where he now studies how artificial intelligence can be used to deliver safe, automated railway solutions.
Shakir works as an engineer in healthcare, where his main role is to research the safety assurance challenges associated with engineering and deploying machine learning (ML) in healthcare pathways. To support this work, he has embarked on a PhD in Computer Science at the University of York. His research interests include system safety, causal taxonomies, requirements engineering, assurance cases, regulation, and image analysis via neural networks.
Shakir is a member of several research and special interest groups, such as the Safety of Autonomous Systems Working Group (SASWG), the British Standards Institution’s AI standards development committee (ART/1) and the Clinical AI Turing Interest Group. His expertise has given him the opportunity to work on demonstrator projects under the auspices of the Assuring Autonomy International Programme (AAIP). His most recent work involves developing safety assurance argument patterns for the Safety Assurance FRamework for Machine Learning in the Healthcare Domain (SAFR) project. Through these endeavours he regularly authors and contributes to organisational and academic publications.
His background is in software engineering, and he received his MSc in Computing in 2018.
Helen is a chartered engineer with over 20 years’ industrial experience, working in a product development environment. Helen’s career in system safety started with British Rail Research. She then worked at Jaguar Land Rover for 15 years as a powertrain system safety technical specialist. In 2011 Helen joined Protean Electric, and in 2016 she joined HORIBA MIRA as a chief engineer for functional safety. Helen has published a number of automotive functional safety papers and promotes industry best practice through her work with MISRA, ISO and SAE groups. She is currently a member of the UK ISO 26262 functional safety for road vehicles group, a UK representative on the ISO 21448 Safety of the Intended Functionality (SOTIF) working group, and a MISRA Safety Case guidelines author. Helen is interested in the impact that highly automated driver assistance systems might have on the way industry develops and assures the safety of such vehicle features. As a doctoral research student at York, Helen is researching controllability considerations for highly automated and connected vehicle safety.
Berk is from Turkey and is a full-time AAIP-funded PhD student in Computer Science at the University of York. He earned his Bachelor's degree in Industrial Engineering from both Istanbul Technical University (ITU) and Southern Illinois University Edwardsville (SIUE) in 2019, and his Master's degree in Industrial Engineering from SIUE. During his Master’s, he worked on data science, applied statistics, and optimisation as a Teaching & Research Assistant. Berk’s current research project focuses on developing a safety-critical, personalised, AI-based support model targeting interventions for patients with Type 2 diabetes at the highest risk of developing multimorbidity.
Josh is a PhD research student on the Assuring Autonomy International Programme. His research focuses on safety within multi-agent reinforcement learning, primarily for robotic teams. Prior to joining the University of York, Josh obtained an MSc in Advanced Computer Science from the University of Birmingham, focusing his work on intelligent mobile robotics and multi-robot systems in both theoretical and practical settings. Much of this work was built upon Josh's foundations in game theory and reinforcement learning, gained through a BSc in Computer Science (Games Development) at the University of Wolverhampton.
Mohammad Tishehzan is an Early Stage Researcher on the ETN-PETER project in the Department of Computer Science at the University of York. As ESR11, he carries out research on “Modelling and Reasoning about EMI (Electromagnetic Interference) Interactions in Autonomous and Complex Vessels”. His primary goal is to develop a through-life, EMI risk-based, modular safety case approach in a form suitable for all stakeholders in the industry. He obtained his master’s degree in electrical engineering (fields and waves) in 2019. Before starting his PhD in 2020, he worked as an EMC (Electromagnetic Compatibility) test engineer.
For 12 years, Bernard was the Electrical Superintendent for a Norwegian shipping company. He then attended Loughborough University of Technology, where he graduated with a degree in electro-mechanical power engineering. In 1993 he joined Lloyd’s Register, and in 2007 he was invited to become the Global Head of Electro-technical Systems within the Technology Directorate. In 2000 Bernard was appointed as an Advisor to the UK Naval Authority for Power and Propulsion. Since 2017 Bernard has been responsible for regulatory development activities at Rolls-Royce Marine, focussing on maritime autonomous infrastructure. During his career Bernard has presented and published around 50 technical papers and articles on many aspects of marine electro-technology systems, and he is an active member of a number of research and special interest groups. He sits on the technical committees of two leading classification societies and two international committees, and is a member of ISO, BSI and CIMAC. In 2018 Bernard took up a part-time PhD research post in the Department of Computer Science at the University of York.
Gricel is a PhD student in Computer Science at the University of York. Her research interests include formal methods, multi-robot systems (MRS), task allocation and planning, domain-specific languages for MRS, ethical concerns of autonomous systems and self-adaptive and critical systems. She received her MSc in Computational Intelligence and Robotics at the University of Sheffield.
Jie Zou is a full-time PhD student at the University of York, working with the AAIP group and the Real-Time and Distributed Systems group. Her research focuses on safety-driven, timing-predictable and resource-efficient autonomous systems, especially mixed-criticality systems. After earning her Bachelor's degree in Electronic and Information Engineering from China Agricultural University in 2012 and her Master's degree in Electrical Engineering from Technische Universität Berlin in 2015, she worked as an assistant researcher and engineer focusing on smart perception for autonomous systems in the Center for Automotive Electronics, Shenzhen Institutes of Advanced Technology (SIAT) of the Chinese Academy of Sciences (CAS).
Marie Curie Fellows
Fang Yan is an early stage researcher under the Marie Sklodowska-Curie Innovative Training Network “SAS – Safer Autonomous Systems”. Her research interests are in model-based assurance case processes for robotic and autonomous systems.
She is also enrolled as a PhD student at the University of York, working with the AAIP group and the RoboStar group on technical solutions for assurance case generation and verification.
Fang holds a Master's degree in aviation maintenance engineering from ENSICA, France. She then worked for several years in aviation software certification in China, providing certification services, industry advice, teaching, and training.
Programme Fellows
Organisation: University of Tehran, Iran
Job title: PhD candidate in software engineering
Organisation: Defence Science and Technology Laboratory (Dstl), UK
Job title: Dstl Fellow
Read Rob's AAIP blog post, The Importance of Intent.
Organisation: Defence Science and Technology Laboratory (Dstl), UK
Job title: Senior Principal Scientist
Organisation: Fraunhofer Institute for Cognitive Systems IKS, Germany
Job title: Research Division Director for Safety
Organisation: Argo AI, Germany
AAIP blog post: Towards Safe Perception including Machine Learning for Level 4 Automated Driving
Organisation: Robert Bosch GmbH, Germany
Job title: Research engineer
Research papers linked to AAIP Fellowship
Blog post: Data suitability - Data that reflects the intended functionality
Organisation: University of Bremen
Job title: Postdoc in Verification of Robots and Autonomous Systems
Mario Gleirscher is a postdoctoral researcher and lecturer in Computer Science at the University of Bremen, Germany. He received his PhD and MSc degrees in Computer Science, with a minor in Mathematics, from the Technical University of Munich, Germany. He also holds a foundation degree in mechanical engineering with a focus on production engineering, and gained several years of practical experience as a consultant, method engineer, and software developer. In his research, he applies formal methods to the analysis, verification, and synthesis of controllers for cyber-physical, autonomous, and robotic systems. For this research, he was awarded a Fellowship at the University of York, UK, funded by the German Research Foundation.
Organisation: University of Leeds, UK
Job title: Lecturer in Aerial Robotics (Assistant Professor)
Organisation: Oxford Robotics Institute (ORI), UK
Job title: Departmental Lecturer in Robotics
Organisation: Bradford Teaching Hospitals NHS Foundation Trust
Job title: Consultant in Anaesthesia and Intensive Care and Head of Clinical Artificial Intelligence
Recent research papers with AAIP team:
- Jia, Y., Kaul, C., Lawton, T., Murray-Smith, R., and Habli, I. "Prediction of weaning from mechanical ventilation using convolutional neural networks", in Artificial Intelligence in Medicine.
- Jia, Y., Lawton, T., Burden, J., McDermid, J., and Habli, I. "Safety-Driven Design of Machine Learning for Sepsis Treatment", in Journal of Biomedical Informatics, May 2021.
- Habli, I., Alexander, R., Hawkins, R., Sujan, M., McDermid, J., Picardi, C., and Lawton, T. "Enhancing COVID-19 decision making by creating an assurance case for epidemiological models", in BMJ Health and Care Informatics, September 2020.
Organisation: Macquarie University, Sydney, Australia
Job title: Professor of Biomedical and Health Informatics at the Australian Institute of Health Innovation
Organisation: Kyndryl, United Arab Emirates
Job title: General Counsel Middle East, Africa and Turkey
Blog posts on the liability of robotic and autonomous systems:
- Liability of autonomous systems under the UAE Civil Code
- Available remedies for injury or damage caused by autonomous systems
- A comparison with other regimes
- Recommendations for law, policy and ethics of AI for the United Arab Emirates (UAE)
Organisation: KU Leuven, Belgium
Job title: Head of Mechatronics Group (M-Group) and Associate Professor
Job title: Independent Consultant
Organisation: University of Brasilia, Brazil
Job title: Associate Professor in Computer Science
Organisation: CACI Limited, UK
Job title: Chief Architect
Organisation: Human Reliability Associates and Warwick Medical School
Job title: Senior Safety Consultant and Associate Professor of Patient Safety
Research papers linked to AAIP Fellowship and SAM demonstrator project
Information on SAM demonstrator project
Information on human factors in AI demonstrator project
Organisation: University of Liverpool
Job title: Lecturer (Assistant Professor)