Robotics and Autonomous System Safety - COM00164M

  • Department: Computer Science
  • Module co-ordinator: Dr. Mark Nicholson
  • Credit value: 10 credits
  • Credit level: M
  • Academic year of delivery: 2023-24

Module summary

Robotics and Autonomous Systems (RAS) are increasingly being used as elements in safety-critical applications in a variety of domains. These technologies pose many challenges to current system safety engineering methods and assurance techniques. In this module, we will identify the nature of the safety challenges - technical, engineering and social - posed by RAS and consider their implications for legislation and regulatory guidance and for engineering practice.

No prior knowledge of RAS is required for this module - we will provide an introduction to the technologies sufficient for understanding the safety aspects covered in the module.

Professional requirements


Related modules

Co-requisite modules

  • None

Prohibited combinations

  • None

Additional information

In addition to COM00017M Computers and Safety, students will need to have taken either COM00006M Foundations of System Safety Engineering or COM00100M Foundations of System Safety II. Applications for the module on a stand-alone basis will be considered, but students will need to demonstrate an equivalent level of knowledge to that covered in the pre-requisite modules.

Module will run

  • Occurrence: A
  • Teaching cycle: Semester 2 2023-24

Module aims

In this module, we will consider the challenges posed to safety engineering techniques and practice by Robotics and Autonomous Systems in three broad areas: technical, engineering and social. Among the technical challenges, we will explore the nature of decision-making technologies and will consider the implications for data management, model learning, verification and deployment, and for understanding the interaction between AS and the 'outside world', including humans. Engineering challenges include the elicitation and validation of safety requirements, identifying and analysing new classes of hazard, understanding how failures propagate in systems with an autonomous component, and implications for incident reporting and investigation. Social challenges include the role and expectations of the human in interactions with RAS, ethical concerns, acceptance and communication of risk, and challenges for the law, governance and regulatory regimes in a number of domains. Implications for the safety case, particularly with reference to machine understanding and decision-making, will be considered throughout the module.

The module will be taught in a blended fashion, using a combination of pre-recorded lectures and live exercise sessions in which students will be taught in small groups. After the taught part of the module, students will select a topic and conduct a short critical literature review (formative). They will use this as a basis for a short talk, in a small group session, on which they will receive feedback both from other members of the group and from the course tutor. There will also be an open assessment (summative), undertaken over 7 weeks following the taught part of the module.

Module learning outcomes

By the end of this module, students will be able to:

  • Identify the disruptors - technical, engineering and social - to existing system safety engineering practices generated by RAS;
  • Describe the core elements of RAS systems engineering, sufficient for safety engineering and assurance understanding;
  • Discuss the validation and verification aspects of machine learning;
  • Describe and evaluate the implications for and changes required in safety assessment and assurance practices to accommodate RAS as emerging technologies;
  • Use consistent and clear terminology in communications about RAS engineering and safety;
  • Identify the societal impact of RAS and implications for risk acceptance;
  • Identify the potential impact of RAS on current regulatory requirements and guidance in a variety of safety-critical domains.

Module content

The following topics will be addressed in the module:

Introduction to Safety and Assurance challenges for RAS: Gains from employing RAS; Challenges – systems, society and technology; Challenges relating to the contribution of ML / AI / RAS technology to safety; Hazard identification; Philosophy of updating Standards and guidance.

Societal Acceptability of AS: Legal and governance issues; Regulations, standards and guidance; ethics; risk acceptance and communication.

Safety Assurance of AS in Complex Environments: Defining safe AS behaviour in complex environments; Analysing interactions between AS and outside world, including humans; Incident and Accident investigation.

Safety Assurance of Understanding and Decision Making in AS: Safety requirements elicitation and validation; Failure analysis and propagation for understanding and decision making in AS; Safety case for understanding in AS.

Assurance of Machine Learning in Autonomous Systems: Requirements elicitation; Data management; Model learning; Model verification; Model deployment; Assurance case for ML.


  • Task: Coursework - Robotics and Autonomous System Safety Open Assessment
  • Length: N/A
  • % of module mark: 100

Special assessment rules


Additional assessment information

Reassessment of the open assessment is by resubmission.


  • Task: Coursework - Open assessment by resubmission
  • Length: N/A
  • % of module mark: 100

Module feedback

Feedback on the formative assessment (literature review presentation) will be given by the module tutor in the group session in which the presentation is made. There will be informal written feedback from other members of the group, and both written and oral feedback from the module tutor.

Feedback on the summative assessment (individual open assessment) will be given in writing by the module tutor after the assessment, within the University's usual timescales.

Indicative reading


1. Russell, S.J. and Norvig, P., 2016. Artificial Intelligence: A Modern Approach. Pearson Education.

2. Marcus, G. and Davis, E., 2019. Rebooting AI: Building Artificial Intelligence We Can Trust. Pantheon.

3. Goodfellow, I., Bengio, Y. and Courville, A., 2016. Deep Learning. MIT Press.

4. Géron, A., 2019. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. O'Reilly Media.

5. Topol, E., 2019. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Hachette UK.

6. Liu, Y. et al., 2019. How to read articles that use machine learning: users' guides to the medical literature. JAMA, 322(18), pp. 1806-1816.

7. Chen, P.H.C., Liu, Y. and Peng, L., 2019. How to develop machine learning models for healthcare. Nature Materials, 18(5), p. 410.

8. Assuring Autonomy Body of Knowledge.

The information on this page is indicative of the module that is currently on offer. The University is constantly exploring ways to enhance and improve its degree programmes and therefore reserves the right to make variations to the content and method of delivery of modules, and to discontinue modules, if such action is reasonably considered to be necessary by the University. Where appropriate, the University will notify and consult with affected students in advance about any changes that are required in line with the University's policy on the Approval of Modifications to Existing Taught Programmes of Study.