Risk Imposition by Artificial Agents: The Moral Proxy Problem

  • Date and time: Wednesday 10 March 2021, 4.00pm to 5.30pm
  • Location: Online via Zoom
  • Admission: Colloquium members and postgraduate students

Event details

Abstract: 
 
It seems undeniable that the coming years will see an ever-increasing reliance on artificial agents that, on the one hand, are autonomous in the sense that they process information and make decisions without continuous human input, and, on the other hand, fall short of the kind of agency that would warrant ascribing moral responsibility to the artificial agent itself. What I have in mind here are artificial agents such as self-driving cars, artificial trading agents in financial markets, nursebots or robot teachers. As these examples illustrate, many such agents make morally significant decisions, including ones that involve risks of severe harm to humans. Where such artificial agents are employed, the ambition is that they make decisions roughly as good as, or better than, those that a typical human agent would have made in the context of their employment. Still, the standard by which we judge their choices to be good or bad remains human judgement; we would like these artificial agents to serve human ends.
 
Where artificial agents cannot plausibly be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, making decisions on their behalf. What I will call the ‘Moral Proxy Problem’ arises because it is often unclear who a specific artificial agent is a moral proxy for.
 
Dr Johanna Thoma, Associate Professor, LSE