Academic team

John McDermid

John McDermid has been Professor of Software Engineering at the University of York since 1987 and twice served as Head of Department. He founded the High Integrity Systems Engineering (HISE) research group, and has researched, taught and consulted on the safety of complex computer-based systems internationally over the last 30 years. He originated the Goal Structuring Notation (GSN) now used widely for the structuring of safety cases and has been involved in a number of projects adapting classical safety methods for computer systems and software. He also helped develop a number of safety standards, including CAP 670 SW01 used in air traffic management, and DS 00-560 and 550 used in defence.

Professor McDermid’s research interests include the safety of robotics and autonomous systems and he has worked in a number of domains, particularly in aerospace. He continues to work on enhancing software engineering and safety practices in civil aerospace, for example through the SECT-AIR and ENCASE projects funded through the Aerospace Technology Institute (ATI). He is also involved in a range of projects in the maritime domain, including work on submarine systems. He has a long-standing collaboration on railway signalling with Beijing Jiaotong University, and collaborates with a number of other research, industrial and government organisations world-wide.

He was Head of the Department of Computer Science at the University of York from 2006 to 2012 and was elected a Fellow of the Royal Academy of Engineering in 2002. He was awarded an OBE in 2010.

Visit John McDermid's profile on the York Research Database to see publications, projects, collaborators, related work and more.


Dr Rob Alexander is a Senior Lecturer in the High Integrity Systems Engineering (HISE) group in the Department of Computer Science at the University of York. His main research focus is automated testing, with a particular interest in the safety and security validation of autonomous robots. He is currently supervising research projects on mutation testing in model-driven engineering, justifying the dependability of safety decision support systems, and automated robot testing using simulation and metaheuristic search. He has published over 50 papers on these and related topics and is currently chairing the Safety-Critical Systems Club’s working group on the safety of autonomous systems.

Radu Calinescu is a Professor of Computer Science at the University of York. He is the Safety of AI theme lead for AAIP, principal investigator (PI) on the UKRI Trustworthy Autonomous Systems Node in Resilience, and the lead of the Trustworthy Adaptive and Autonomous Systems and Processes (TASP) Research Team. He holds a DPhil in Computation from the University of Oxford, and was awarded a British Computer Society Distinguished Dissertation Award for his doctoral research. His research team is undertaking leading research on the use of mathematical modelling and analysis to improve the safety and performance of self-adaptive systems and processes. He has published over 120 research papers, has been a PI on projects funded by £1.75M in grants from Dstl, the Lloyd’s Register Foundation, EPSRC and the Wellcome Trust, and has led industrial R&D projects as the Senior Development Manager and Technical Architect of an Oxford University spin-off software company. He has chaired or served on the programme committees of over 80 international conferences and workshops, and is a member of the editorial board of Springer’s Computing journal for the area of autonomic, adaptive and dependable computing.

 

Ibrahim Habli is Deputy Head of Department of Computer Science (Research) at the University of York. His research interests are in the design and assurance of safety-critical systems, with a particular focus on intelligent systems (e.g. autonomous and connected driving) and Digital Health (e.g. ePrescribing and self-management apps). In 2015, he was awarded a Royal Academy of Engineering Industrial Fellowship through which he collaborated with the NHS on evidence-based means for assuring the safety of digital health systems.

Ibrahim’s work is highly interdisciplinary, with active collaborative links with clinicians, health scientists, economists and ethicists. His research is empirical and industry-informed, with collaborative projects with organisations such as Rolls-Royce, NASA, Jaguar Land Rover and NHS Digital. Ibrahim has led/co-led research programmes funded by the EPSRC, the Royal Academy of Engineering, the EU and industry. He has been a member of several international and national safety standardisation committees (e.g. DO178C, MISRA and BSI).

Ibrahim mainly teaches on York’s postgraduate programmes in Safety-Critical Systems Engineering and supervises several PhD students (including practising engineers).

Richard Hawkins is a Senior Research Fellow in the Assuring Autonomy International Programme, where he is investigating the assurance and regulation of robotic and autonomous systems.

Prior to joining the Programme he was a Lecturer in the High Integrity Systems Engineering (HISE) research group at the University of York where his research focused on the safety and assurance of software systems and the development of assurance cases. He previously worked as a Software Safety Engineer for BAE Systems and as a safety adviser in the nuclear industry.

Mark is a dedicated educator with over 20 years of experience developing and delivering industrially oriented teaching, both on award-bearing programmes (PGCert, PGDip, MSc) and through non-credit-bearing continuing professional development for staff involved in system safety. He is the educational lead for the Assuring Autonomy International Programme and a member of the University of York's Teaching Committee.

He leads the IET / BCS accredited Safety-Critical Systems programmes at York. His dedication to education reaches beyond his own practice to developing the teaching and academic skills of colleagues at the University of York, through involvement with professional development programmes as both a supervisor and as chair of a Board of Studies.

The education he provides is highly relevant to industry, being based on research and consultancy activities as well as industrial best practice. He was the European editor of ARP 4754A, the civil aerospace recommended practice for safety. He is currently involved in creating competency guidelines for those who analyse and assess the impact of digital security issues on safety-critical systems. Data is a core enabling technology for autonomy, so he is also researching data safety issues.

Researchers

Dr Victoria Hodge

Dr Victoria Hodge is a Senior Technical Specialist (Software) in the AAIP. A computer scientist and software developer, Victoria’s research and software development focuses on artificial intelligence (AI), machine learning, data mining and anomaly detection across a variety of domains.

Victoria has authored over 60 publications covering machine learning and AI, neural networks, anomaly detection, data analytics, data processing frameworks, and information retrieval.

She holds a PhD in Computer Science from the University of York.

She has worked in industry as a software architect for medical diagnostics products; and as a software developer on applications including condition monitoring in industrial environments, anomaly detection in large data sets, and deep reinforcement learning for robot navigation.

Calum Imrie is a Research Associate at the Assuring Autonomy International Programme, with his particular research focus on the safety and continuous assurance of robotics and autonomous systems. This is part of the wider aim to bridge the gap between academia and real-world applications.

Calum completed his PhD at the University of Edinburgh, within the Edinburgh Centre for Robotics. His thesis, titled 'Robustness of Interaction Control in Robot Swarms', focused on swarm robotics. During his studies, Calum also investigated low-level controllers, emotional robotics, reinforcement learning, and information-theoretic approaches to robot control.

Dr Yan Jia 

Dr Yan Jia is a research associate at the University of York. Her work focuses on safety management in healthcare, unifying and integrating work in traditional safety engineering and in machine learning (ML). Her initial work focused on medication management for patients in ICUs. More recently she has worked on the treatment of sepsis, using reinforcement learning (RL) in recommending treatments, showing how the use of established safety methods can inform the development of the RL elements of the system and support production of a safety case. Her current work is enhancing ML explainability methods and investigating their role in safety assurance for healthcare.

Marten is a Research Associate in the Assuring Autonomy International Programme, with a research focus on the ethics of, and societal trust in, artificially intelligent and autonomous systems. He joined the AAIP after working on his PhD in machine ethics, which centred on the use of reinforcement learning to raise ethical machines.
His current work is elucidating the concept of transparency in the context of artificially intelligent and autonomous systems, and in particular transparency's connection to safety assurance and its impact on enabling or impairing ethical principles.

Dr John Molloy joined the AAIP as a Research Fellow in 2020. His work includes the assurance of understanding and perception suites in autonomous systems, and the effects of environmental factors on sensor performance.

Previously, he spent ten years in the Electromagnetic Technologies Group at the National Physical Laboratory (NPL), where he was technical lead for autonomous systems and future communications. He delivered several projects on perception in vehicular and marine autonomous systems, including two reports on behalf of CCAV. He provided technical consultancy for external customers such as Lloyd’s Register, Osram GmbH and the USAF.

From 2015 to 2020 he was a visiting research scientist at the Hyper Terahertz Facility, University of Surrey, providing technical expertise on the development of high-power laser and electro-optic systems for remote sensing and quantum applications.

John received his EngD in Applied Photonics from Heriot-Watt University, Scotland (2016) for work on nonlinear optics and the development of solid-state THz optical parametric oscillators (OPOs). He received his MSc in Photonics & Optoelectronic Devices from St Andrews & Heriot-Watt, Scotland (2010) and his BSc (Hons) in Physics & Astronomy from NUI Galway, Ireland (2006).

Previously in industry, he worked as a laser engineer, manufacturing diode-pumped solid-state lasers, and as an electronic engineer, developing high-speed high-capacity storage solutions for digital media.

Matt Osborne

Matt is a Research Fellow in the Assuring Autonomy International Programme, where he is currently researching software safety assurance in AI, and the safety of decision-making in autonomy. Prior to joining the AAIP, Matt worked for many years as an independent consultant in assuring the safety of software in complex, socio-technical systems.

Matt’s PhD research involves identifying the impediments to recognised good practice in software safety assurance, with the aim of characterising and eradicating those impediments to the adoption of best practice.

Mike is a Research Fellow on the AAIP, looking at autonomy in complex environments. He is also the Safety-Critical Systems Club (SCSC) Director and Events Coordinator and leads the SCSC Data Safety Initiative and Service Assurance Working Groups. He has been in industry for many years; his first safety project was a launch tracking system for the Ariane space launcher.

Chiara is a Research Associate in the Assuring Autonomy International Programme. Chiara’s research interests include machine learning and the development of dynamic safety cases. In particular she is currently working on autonomous system safety.

Previously she was enrolled in a PhD at the Department of Electronics at the University of York, during which she investigated the efficiency of evolutionary algorithms in monitoring and diagnosing two neurological disorders. She has a Master's in Artificial Intelligence and Robotics and a Bachelor's degree in Computer Engineering.

Zoe Porter

Zoe is a Research Associate in the Assuring Autonomy International Programme, with a research focus on the ethics and governance of autonomous systems. She joined the AAIP from the Department of Philosophy, where she did her PhD on moral responsibility for unforeseen harms caused by autonomous systems.

Zoe was previously Chief Speechwriter at the Equality and Human Rights Commission, and she has also worked for members of the UK and European Parliaments. Zoe holds an MA in Philosophy from the University of London, and a BA from the University of Oxford.

Dr Philippa Ryan is a research fellow with the Assuring Autonomy International Programme and the Assuring Responsibility for Trustworthy Autonomous Systems project at the University of York.
 
She has worked in the field of safety-critical software engineering for over 20 years, both in academia and industry. She has extensive research experience in software assurance, certification and safety cases. She previously worked in industry writing assurance cases and examining software safety, including for nuclear, defence, medical and AI/ML domains. She is currently chair of the Safety Critical Systems Club working group developing guidance for assuring autonomous systems. Her PhD examined safety analysis of operating systems and she is a chartered engineer.

Ioannis Stefanakos

Ioannis Stefanakos is a Research Associate in the Assuring Autonomy International Programme and a member of the Trustworthy Adaptive and Autonomous Systems & Processes (TASP) Research Team. His main research focus is the modelling and verification of human-robot interactions in different settings (e.g., manufacturing, healthcare), to ensure safety when robot configuration and behaviour adapt to run-time changes.

Ioannis recently completed his PhD at the University of York with his thesis titled ‘Software Analysis and Refactoring Using Probabilistic Modelling and Performance Antipatterns’. His work focused on the analysis and refactoring of software systems that must comply with strict nonfunctional requirements. 

Programme management and administration

Dr Ana MacIntosh manages the £12M Assuring Autonomy International Programme supported by the Lloyd’s Register Foundation and the University of York.  Prior to joining the University of York, she established and managed Sheffield Robotics, one of the UK’s leading robotics research institutes, and she is a strong advocate for working across the boundaries of traditional disciplines.
 
Ana studied materials engineering and completed a PhD investigating the potential for using spider silk as a scaffold for tissue engineering, before working in project management and business development for organisations ranging from a micro-SME to a network of 11 universities.

Sarah is Partnership and Communications Manager for AAIP. In her current role, Sarah leads work to identify suitable global partners for the AAIP, developing mutually beneficial collaborations to accelerate the safe adoption of robotics and autonomous systems. She leads the communications for the AAIP and works closely with the Programme team and external stakeholders to identify key messages and information to support the Programme's aims to generate international impact. Sarah's background is in the charity and educational sectors, where she has led the development of major partnerships and collaborations including securing the support of Boots, Asda and the Football Association for the No Smoking Day campaign. Her work has spanned partnerships, stakeholder communications and engagement, marketing, and fundraising.

Chrysle took up her role as Programme Administrator in December 2017. Previously, Chrysle was the Head of Department's PA in Computer Science.

Research students


John is a part-time PhD research student. His research addresses the safety assurance challenges of machine learning systems, with a focus on the cost and reward mechanisms associated with reinforcement learning. John is a recent graduate of our MSc in Safety-Critical Systems Engineering and also holds an MEng (Hons) in Systems Engineering from Loughborough University. John is a chartered engineer and works full-time in industry as a product safety specialist and technical expert in software safety. John is also a member of several safety-related bodies, including MISRA C++ and the SCSC’s Data Safety Initiative and Safety of Autonomous Systems working groups.

Brendan Devlin-Hill is a PhD student in the AAIP programme. After graduating in Physics from UCL in 2020, he joined the programme to begin his PhD on robotic maintenance systems for nuclear fusion power plants. This is a highly interdisciplinary project, drawing on elements of computer science, plasma physics, and fusion engineering. Brendan's research interests include autonomous robotics, machine learning, space and plasma physics, and the practical realisation of domestic fusion power.

 

Josh is a postgraduate research student on the Assuring Autonomy International Programme. Starting his academic career as an artificial intelligence student at Northumbria University, Josh quickly gained an interest in the real-world application of AI. While working on personal projects, he began to study how machine learning applications could process real-time data, rather than analysing back-end data. Upon finishing his undergraduate degree, Josh searched for opportunities to further explore machine learning, which led him to the AAIP in York, where he now studies how artificial intelligence can be used to deliver safe and automated railway solutions.

Shakir is working as an Engineer in Healthcare where his main function is to research the safety assurance challenges associated with engineering and deploying Machine Learning (ML) in healthcare pathways. To assist his work, he has embarked on a PhD in Computer Science at the University of York. His research interests include system safety, causal taxonomies, requirements engineering, assurance cases, regulation, and image analysis via neural networks.

Shakir is a member of several research and special interest groups, such as the Safety of Autonomous Systems Working Group (SASWG), the British Standards Institution’s AI standards development committee (ART/1) and the Clinical AI Turing Interest Group. His expertise has given him the opportunity to work on demonstrator projects under the auspices of the Assuring Autonomy International Programme (AAIP). His most recent work involves developing safety assurance argument patterns for the Safety Assurance FRamework for Machine Learning in the Healthcare Domain (SAFR) project. Through these endeavours he regularly authors and contributes to organisational and academic publications.

His past work experience is in Software Engineering and he received his MSc in Computing in 2018.


Helen is a chartered engineer with over 20 years’ industrial experience, working in a product development environment. Helen’s career in system safety started with British Rail Research. She then worked at Jaguar Land Rover for 15 years as a powertrain system safety technical specialist. In 2011 Helen joined Protean Electric, and in 2016 joined HORIBA MIRA as a chief engineer for functional safety. Helen has published a number of automotive functional safety papers and promotes industry best practice through her work with MISRA, ISO and SAE groups. Helen is currently a member of the UK ISO 26262 functional safety for road vehicles group, a UK representative on the ISO 21448 Safety of the Intended Functionality (SOTIF) working group, and a MISRA Safety Case guidelines author. Helen is interested in the impact that highly automated driver assistance systems might have on the way industry develops and assures the safety of such vehicle features. As a doctoral research student at York, Helen is researching controllability considerations for highly automated and connected vehicle safety.

Berk is from Turkey and is a full-time AAIP-funded PhD student in Computer Science at the University of York. He earned his Bachelor's degree in Industrial Engineering from both Istanbul Technical University (ITU) and Southern Illinois University Edwardsville (SIUE) in 2019, and his Master's degree in Industrial Engineering from SIUE. During his Master’s, he worked on data science, applied statistics, and optimization as a Teaching & Research Assistant. Berk’s current research project focuses on developing a safety-critical personalized AI-based support model targeting interventions for patients with Type 2 diabetes at the highest risk of developing multimorbidity.

Tejas Pandey is a PhD research student on the Assuring Autonomy International Programme. He is a multi-domain researcher, bridging the gap between computer vision and safety, and is involved with the Vision, Graphics and Learning Group as well as the High Integrity Systems Engineering (HISE) research group. His research primarily focuses on assuring the safety of autonomous vehicles. Previously, Tejas completed his Master's degree in Computer Science at Trinity College Dublin in graphics and vision technologies, and was a deep learning researcher at Intel, working on robotics and computer vision solutions.


Josh is a PhD Research Student on the Assuring Autonomy International Programme. His research is focussed on safety within multi-agent reinforcement learning primarily for robotic teams. Prior to joining the University of York Josh obtained an MSc in Advanced Computer Science from the University of Birmingham, focusing his work on intelligent mobile robotics and multi-robot systems in both theoretical and practical settings. Much of this work was built upon Josh's foundations within game theory and reinforcement learning, which was gained through a BSc in Computer Science (Games Development) at the University of Wolverhampton.

 

Mohammad Tishehzan is an Early Stage Researcher in the ETN-PETER project in the Department of Computer Science at the University of York. As ESR11, he carries out research on “Modelling and Reasoning about EMI (Electromagnetic Interference) Interactions in Autonomous and Complex Vessels”. His primary goal is to develop a through-life, EMI-risk-based modular safety case approach in a form suitable for all stakeholders in the industry. He obtained his master’s degree in electrical engineering (field and wave) in 2019. Before starting his PhD in 2020, he worked as an EMC (Electromagnetic Compatibility) test engineer.

 

Bernard Twomey

For 12 years, Bernard was the Electrical Superintendent for a Norwegian shipping company. He then attended Loughborough University of Technology, where he graduated with a degree in electro-mechanical power engineering. In 1993 he joined Lloyd’s Register, and in 2007 was invited to become the Global Head of Electro-technical Systems within the Technology Directorate. In 2000 Bernard was appointed as an Advisor to the UK Naval Authority for Power and Propulsion. Since 2017 Bernard has been responsible for regulatory development activities within marine at Rolls-Royce Marine, focusing on the Maritime Autonomous Infrastructure. During his career Bernard has presented and published around 50 technical papers and articles on many aspects of marine electro-technology systems, and is an active member of a number of research and special interest groups. He sits on the technical committees of two leading classification societies and two international committees, and is a member of ISO, BSI and CIMAC. In 2018 Bernard took up a part-time PhD research post in the Department of Computer Science at the University of York.

Gricel is a PhD student in Computer Science at the University of York. Her research interests include formal methods, multi-robot systems (MRS), task allocation and planning, domain-specific languages for MRS, ethical concerns of autonomous systems and self-adaptive and critical systems. She received her MSc in Computational Intelligence and Robotics at the University of Sheffield. 

Jie Zou is a full-time PhD student at the University of York, working with the AAIP Group and the Real-Time and Distributed Systems Group. Her research focuses on safety-driven, timing-predictable and resource-efficient autonomous systems, especially mixed-criticality systems. After earning her Bachelor's degree in Electronic and Information Engineering from China Agricultural University in 2012 and her Master's degree in Electrical Engineering from Technische Universität Berlin in 2015, she worked as an assistant researcher and engineer focusing on smart perception for autonomous systems in the Center for Automotive Electronics, Shenzhen Institutes of Advanced Technology (SIAT) of the Chinese Academy of Sciences (CAS).

Marie Curie Fellows

 

Haris Aftab is an early stage researcher under the Marie Skłodowska-Curie Innovative Training Network “SAS – Safer Autonomous Systems” within the Horizon H2020 Programme of the European Commission. He is enrolled as a full-time PhD student at the University of York and his current research focuses on ensuring the safety of conversational agents in healthcare. The research will be carried out in collaboration with KU Leuven, LAAS-CNRS, and Portable Medical Technology (PMT). His research interests include natural language processing, machine learning, conversational systems, and safety engineering.
 
Haris completed his bachelor’s degree in information technology at the National University of Sciences and Technology (NUST), Pakistan, in 2012. He then worked in the IT industry for about five years as a software engineer developing Android mobile applications. His interest in technology and mobile devices led him to pursue a research career in the Internet of Things. He worked as a research assistant in the software engineering and security lab at Sejong University, Seoul, South Korea, on multiple research projects including an EU Horizon 2020 project. In parallel, he completed his master’s degree in computer and information security in South Korea in 2019.

Fang Yan is an early stage researcher under the Marie Skłodowska-Curie Innovative Training Network “SAS – Safer Autonomous Systems”. Her research interests are in model-based assurance case processes for robotic and autonomous systems.

She is also enrolled as a PhD student at the University of York working with the AAIP group and RoboStar Group, on assurance case generation and verification technical solutions. 

Fang holds a Master's degree in aviation maintenance engineering from ENSICA, France. She then worked for several years in the aviation software certification field in China, providing certification services, industry advisory, teaching, and training.

Programme Fellows

Organisation: University of Tehran, Iran

Job title: PhD candidate in software engineering

Organisation: Defence Science and Technology Laboratory (Dstl), UK

Job title: Dstl Fellow

Read Rob's AAIP blog post, The Importance of Intent.

Research papers linked to AAIP Fellowship

Organisation: Defence Science and Technology Laboratory (Dstl), UK

Job title: Senior Principal Scientist

Research papers linked to AAIP Fellowship

Organisation: Fraunhofer Institute for Cognitive Systems IKS, Germany

Job title: Research Division Director for Safety 

AAIP blog posts:

Research papers linked to AAIP Fellowship

Organisation: Argo AI, Germany

Fellowship topic: Towards Safe Perception including Machine Learning for Level 4 Automated Driving

Organisation: University of Leeds, UK

Job title: Lecturer in Aerial Robotics (Assistant Professor)

Organisation: Oxford Robotics Institute (ORI), UK

Job title: Departmental Lecturer in Robotics

Organisation: Bradford Teaching Hospitals NHS Foundation Trust

Job title: Consultant Anaesthesia and Intensive Care and Head of Clinical Artificial Intelligence

Recent research papers with AAIP team:

Organisation: Macquarie University, Sydney, Australia

Job title: Associate Professor at the Centre for Health Informatics within the Australian Institute for Health Innovation

Research papers linked to AAIP Fellowship

Organisation: Kyndryl, United Arab Emirates

Job title: General Counsel Middle East, Africa and Turkey

Blog posts on the liability of robotic and autonomous systems:

  1. Liability of autonomous systems under the UAE Civil Code
  2. Available remedies for injury or damage caused by autonomous systems
  3. A comparison with other regimes
  4. Recommendations for law, policy and ethics of AI for the United Arab Emirates (UAE)

 

Organisation: KU Leuven, Belgium

Job title: Head of Mechatronics Group (M-Group) and Associate Professor

Job title: Independent Consultant

Organisation: University of Brasilia, Brazil

Job title: Associate Professor in Computer Science

Organisation: CACI Limited, UK

Job title: Chief Architect

Organisation: Human Reliability Associates and Warwick Medical School

Job title: Senior Safety Consultant and Associate Professor of Patient Safety

Research papers linked to AAIP Fellowship and SAM demonstrator project

Information on SAM demonstrator project

Information on human factors in AI demonstrator project

Blog post: Optimising the machine and the human - the role of human factors in the safe design and use of AI in healthcare

 

Organisation: University of Liverpool

Job title: Lecturer (Assistant Professor)

Research papers linked to AAIP Fellowship