Thursday 8 March 2018, 6.30PM
Speaker(s): Professor Tim Kelly, Department of Computer Science
Self-driving cars, crewless tankers, parcel delivery by drones, service robots for the elderly: the robots are coming! In the near future an increasing number of autonomous systems will be placed in roles, and given functions, that were previously the domain of skilled humans. Their introduction promises improvements in efficiency, quality of life, and even safety. But what happens when these systems fail? Will human lives be put at risk? Key questions need to be addressed before we can place our trust in these machines: Can ‘safe’ behaviour truly be defined and understood? Can we encode and emulate the ethics of a human operator? How can the underlying technology fail? Who is legally responsible when things go wrong? This lecture will introduce some of the ongoing research at the University of York that is addressing these important questions.
Speaker biography: Tim Kelly is Professor of High Integrity Systems within the Department of Computer Science at the University of York. He is best known for his work on system and software safety case development, particularly his work on refining and extending the Goal Structuring Notation (GSN). His research interests include safety case management, software safety analysis and justification, software architecture safety, and the dependability of “Systems of Systems”. He has supervised a number of research projects in these areas with funding and support from Airbus, BAE SYSTEMS, Data Systems and Solutions, DTI, EPSRC, ERA Technology, Ministry of Defence, QinetiQ, Rolls-Royce, and the European Commission. He has published over 150 papers on high integrity systems development and justification in international journals and conferences.
Location: Merchant Adventurers' Hall, Fossgate, York
Admission: by free ticket only. Please book below.