How do we achieve social credibility of assistive robots in the home?

This project characterised a link between social credibility and the effective performance of safety-related behaviours. The team demonstrated this link in an experimental domestic setting, showing that an assistive robot that does not perform adequate social behaviours is also less effective at performing safety-related behaviours in the home.

[Image: Robot assisting a person in their home]

Project report

A full project report is available on the team's work to identify and characterise a link between social credibility and the effective performance of safety-related behaviours.

Final project report

The challenge

In order to be accepted by end-users, assistive robots in a domestic environment must demonstrate behaviour which is empathic and socially interactive, thereby achieving a certain minimum degree of social credibility. These assistive robots must also perform functions important to safety.

Both the safety-critical and socially important behaviours of an assistive robot rely on the user's engagement with the robot. A loss of social credibility (from any cause) can lead to an end-user disengaging with the robot, choosing either to ignore its prompts or to switch it off. User disengagement compromises the ability of these robots to perform their safety-critical functions. 

How can potentially conflicting social and safety requirements be balanced, and how can we assure that the robotic and autonomous system (RAS) is both safe and acceptable to end users?

The research

This small feasibility project was split into two strands of work: introductory work and experimental work.

The introductory work (Menon, 2019) identified that the social effects of assistive robots are not typically factored into hazard analysis and, equally, that there is often very little consideration of the ways in which the social performance of an assistive robot is affected by safety features (e.g. automatic stops, avoidance of physical contact). It suggested potential methods to address the loss of safety-critical functionality resulting from lowered social credibility.

The experimental work was designed to validate the hypothesised link between social credibility and safety. The team conducted a preliminary study with 30 participants, investigating their responses when they were notified of different hazards by either a socially credible robot (AN) or a robot that explicitly violated social norms (VN).

Participants were asked to sit at a table and complete as many cognitive tasks (such as Sudoku puzzles) as possible during an allotted time. They were told that a robot might interrupt them during the task and that it was their choice whether or not to perform an action in response to the interruption.

The team observed participants via camera feeds and smart sensors, and participants were asked to complete a post-experiment questionnaire to capture their impressions of the robot's behaviours.
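As an illustration only, the sketch below shows one way trial data of this kind might be recorded and summarised. The condition labels AN and VN come from the study description above, but every field name, response category, and function is a hypothetical assumption for illustration and does not describe the project's actual tooling or data.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record of a single hazard notification during a session.
# Only the AN/VN condition labels come from the study description;
# all other names and categories are illustrative assumptions.
@dataclass
class Interruption:
    participant_id: int
    condition: str   # "AN" (socially credible) or "VN" (norm-violating)
    hazard: str      # description of the hazard the robot reported
    response: str    # "acted", "ignored", or "cross_checked"

def summarise(events):
    """Count response types per condition (descriptive statistics only)."""
    summary = {"AN": Counter(), "VN": Counter()}
    for event in events:
        summary[event.condition][event.response] += 1
    return summary

# Made-up example data, not drawn from the study.
events = [
    Interruption(1, "AN", "unattended appliance", "acted"),
    Interruption(2, "VN", "unattended appliance", "ignored"),
    Interruption(3, "AN", "obstacle on the floor", "cross_checked"),
]
print(summarise(events))
```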

The results

This was a preliminary study, so no statistically significant differences between conditions were expected. However, the team were able to identify a number of trends in the collected data. These trends give some indication of how safety assurance might be affected by an autonomous system's social behaviours in this domain. The most notable impact is on a user's willingness to accept the robot's assessment of hazards and the extent to which the user considers it necessary to cross-check these assessments against their own experience. The results indicate that, when it comes to assessing safety-critical situations, users are more likely to believe a robot that they consider socially intelligent than one lacking social competence.
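To make the kind of descriptive comparison behind such trends concrete, the sketch below computes, for each condition, the proportion of notifications that participants acted on without cross-checking. The numbers and names are invented for illustration; this does not reproduce the project's data or analysis.

```python
def acceptance_rate(responses):
    """Share of notifications acted on without cross-checking (illustrative only)."""
    if not responses:
        return 0.0
    return sum(1 for r in responses if r == "acted") / len(responses)

# Invented example responses, one entry per hazard notification.
an_responses = ["acted", "acted", "cross_checked", "acted"]      # socially credible robot
vn_responses = ["ignored", "cross_checked", "acted", "ignored"]  # norm-violating robot

print(f"AN acceptance rate: {acceptance_rate(an_responses):.2f}")  # 0.75
print(f"VN acceptance rate: {acceptance_rate(vn_responses):.2f}")  # 0.25
```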

Contact us

Assuring Autonomy International Programme

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH