Society and ethics
Considering practical ethics and ensuring systems are acceptable and desirable.
Robotics and autonomous systems have the potential to deliver societal benefits and should always be deployed to support the public good.
Beyond safety, there are ethical issues such as the potential for bias and discrimination, and the possibility of deploying systems for which ethical and legal responsibility is unclear in the event of an accident or incident.
The work will build on existing collaborations, such as those between computer science and philosophy, on how to use principles from practical ethics to inform the design and assessment of systems.
It will also use these principles to inform governance structures, including the ways in which responsibility is managed across the many stakeholders involved. There will also be a major emphasis on public attitudes to technology, in particular how the public can develop the trust necessary to use such systems.
- Application of practical ethics to the assurance, regulation and governance of artificial intelligence and autonomy.
- Assessment of the nature of responsibility and legal ‘gaps’ arising from the use of artificial intelligence and autonomy, and use of these to guide system design.
- Understanding of public attitudes to artificial intelligence and autonomy, including how the public come to trust such technology.
Research Lead: Dr Philip Garnett
Philip Garnett is an Associate Professor in Systems and Organisation in the School for Business and Society, University of York. He is also on the management board of the Science and Technology Studies Unit (SATSU) in York.
Philip’s research focuses on the impact of advanced technology on organisations and society. He is interested in how algorithms, machine learning, and artificial intelligence influence decision-making processes and information security within organisations. He is particularly interested in how humans, machines and computers work together, and in the consequences of that collaboration.