How do we achieve social credibility of assistive robots in the home?

To be accepted by end users, assistive robots in a domestic environment must demonstrate empathic and socially interactive behaviour, thereby achieving a minimum degree of social credibility. In addition, these robots must also perform functions important to safety, such as alerting the user to hazards.

This small feasibility project hypothesised that there is a link between social credibility and effective performance of safety-related behaviours. For example, a robot with low social credibility may be perceived as annoying or irritating. This can lead to user disengagement with the robot, ranging from ignoring its alerts to switching it off. Because a robot’s safety behaviours are based on alerting the user, user disengagement reduces the effectiveness of these safety behaviours and makes it difficult to adequately assure safety.

As a foundation for the project, the team have postulated that the following question represents a critical barrier to assurance and regulation (C-BAR):

Social acceptability: where user acceptance of a robotic and autonomous system (RAS) depends on the effective performance of social functions, how can potentially conflicting social and safety requirements be balanced, and how can we assure that the RAS is both safe and acceptable to end users?

Introductory work

The introductory work for this project was presented in March 2019 at the 9th International Conference on Performance, Safety and Robustness in Complex Systems and Applications (Menon, 2019), where it was awarded a Best Paper prize. The paper's literature review identified that the social effects of assistive robots are not typically factored into hazard analysis and, equally, that there is often very little consideration of the ways in which the social performance of an assistive robot is affected by safety features (e.g. automatic stops, avoidance of physical contact).

The paper examined how both the safety-critical and socially important behaviours of an assistive robot rely on the user's engagement with the robot. A loss of social credibility (from any cause) can lead to an end user disengaging from the robot, choosing either to ignore its prompts or to switch it off. User disengagement compromises the ability of these robots to perform their safety-critical functions.

The paper suggests potential methods to address the loss of safety-critical functionality resulting from lowered social credibility. Each method trades a slight decrease in the robot's overall capability for the maintenance of an adequate level of social credibility. The team suggest that when the robot's social credibility drops below a threshold value (termed the disengagement threshold), the robot alters the nature of its alerts and reminders to stem further credibility loss. For example, the robot may identify those alerts which are not safety-critical and choose to (see the sketch after this list):

  • avoid performing the alert entirely
  • delay the alert or perform it less frequently
  • slow its physical movements when coming to interrupt a user
  • decrease the volume of audible alerts
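
As a rough illustration only, the minimal Python sketch below shows how such a disengagement threshold might gate alerting behaviour. The credibility scale, the threshold values, and the alert attributes are all assumptions made for illustration; the paper does not prescribe an implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical alert model: the field names and numeric values are
# illustrative assumptions, not taken from the paper.
@dataclass
class Alert:
    message: str
    safety_critical: bool
    volume: float = 1.0          # relative volume of audible alerts
    approach_speed: float = 1.0  # relative speed when approaching the user

DISENGAGEMENT_THRESHOLD = 0.5    # assumed social credibility scale of 0..1

def moderate_alert(alert: Alert, social_credibility: float) -> Optional[Alert]:
    """Deliver alerts unchanged while social credibility is adequate;
    below the disengagement threshold, soften or drop alerts that are
    not safety-critical in order to stem further credibility loss."""
    if social_credibility >= DISENGAGEMENT_THRESHOLD:
        return alert              # credibility adequate: alert as normal
    if alert.safety_critical:
        return alert              # safety-critical alerts are never softened
    if social_credibility < 0.2:  # assumed cut-off for very low credibility
        return None               # avoid performing the alert entirely
    alert.volume *= 0.5           # decrease the volume of audible alerts
    alert.approach_speed *= 0.5   # slow physical movements when interrupting
    return alert
```

Delaying an alert or reducing its frequency could be handled analogously by a scheduler; whichever softening strategy is chosen, the trade is the one described above, a small loss of capability in exchange for preserved social credibility.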

Experimental work

The second major aim of the project was to perform an experiment testing the hypothesised link between social credibility and safety. The team conducted a preliminary study with 30 participants that investigated their responses when notified of different hazards by either a socially credible robot or a robot that explicitly violated social norms.

The study was carried out in the Robot House, a four-bedroom home used by the University of Hertfordshire for human-robot experiments. It is fitted with standard furniture and appliances as well as smart home sensors and actuators.

Participants were asked to sit at a table and complete as many cognitive tasks (such as Sudoku puzzles) as possible during an allotted time. They were told that a robot might interrupt them during the task, and that it was their choice whether or not to perform an action in response to the interruption. During the experiment all participants were interrupted four times:

  1. the robot informed them that the oven in the kitchen was left on
  2. the robot informed them that the power sockets in the kitchen were on
  3. the robot informed them that some of the power sockets in the kitchen were still on
  4. the robot informed them that a Pepper robot in a different room was overheating while charging

Of the 30 participants, 15 were randomly chosen to work with a robot which violated social norms (VN) and 15 to work with a robot which adhered to social norms (AN). The conditions differed in robot behaviours such as distance during greeting (appropriate vs too far), position during interruptions (in front vs from behind), and verbal utterances (polite vs abrupt).

The team observed participants via camera feeds and smart sensors and made objective measurements of the following (a sketch of how such measurements might be recorded appears after this list):

  • physical responses to interruptions (e.g. standing up)
  • movement made in response to interruptions (e.g. going into the kitchen)
  • extent of action taken to eliminate hazard (e.g. switching off power sockets)
  • time taken to perform an action that eliminates the hazard
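
As a purely hypothetical sketch of how these measurements might be recorded and summarised, one record per participant per interruption could look like the following. The field names and the response-rate helper are assumptions for illustration, not the team's actual data format.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical per-interruption record; field names are illustrative
# and do not reflect the team's actual logging format.
@dataclass
class InterruptionRecord:
    participant_id: int
    condition: str                        # "AN" or "VN"
    interruption: int                     # 1..4, as numbered above
    stood_up: bool                        # physical response to the interruption
    entered_kitchen: bool                 # movement made in response
    hazard_eliminated: bool               # e.g. switched off the power sockets
    seconds_to_mitigate: Optional[float]  # None if the hazard was not addressed

def response_rate(records: List[InterruptionRecord],
                  condition: str, interruption: int) -> float:
    """Fraction of participants in a condition who physically responded
    to a given interruption."""
    subset = [r for r in records
              if r.condition == condition and r.interruption == interruption]
    responded = [r for r in subset if r.stood_up or r.entered_kitchen]
    return len(responded) / len(subset) if subset else 0.0
```

Per-condition percentages such as those reported under "Results" below would then correspond to response-rate values of this kind, computed separately for each interruption.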

Participants were also asked to complete a questionnaire after the experiment to ascertain their impression of the robot's behaviours.

Results

This was a preliminary study, so no statistically significant differences between conditions were expected. However, the team were able to identify a number of trends in the collected data.

Questionnaire results:

  • clearly show that the participants in the AN condition-set considered the robot much more socially credible than participants in the VN condition-set
  • users considered the oven to be the most safety-critical hazard, followed by the overheating Pepper robot, then the kitchen power sockets

Measurable results:

  • the AN condition-set participants were more likely to respond to the robot's interruptions than the VN condition-set participants for the oven hazard (79% vs 50%), the second power socket warning (71% vs 31%), and the Pepper robot warning (79% vs 56%)
  • when AN participants responded to the robot they were much more likely to take actions which correspond to mitigating the hazard (e.g. turning the oven off). By contrast, when VN participants responded, their actions were in many cases observational only (e.g. examining the environment without acting on the hazard)
  • one of the most notable results was in the second power socket warning - this interruption elicited the lowest response rate (31%) from VN participants, while the response from AN participants remained high (71%)

Further discussion

The identified trends provide some indication of how safety assurance might be affected by an autonomous system's social behaviours in this domain. The most notable impact is on users' willingness to accept the robot's assessment of hazards, and on the extent to which users consider it necessary to cross-check this assessment against their own experience. VN users indicated that they were checking to see if the robot was "right" about the existence of the hazard.

This disbelief was seen most clearly with the second power socket warning: the results demonstrate that AN participants were more likely to accept the robot's assessment of a situation even where it directly contradicted their own experience. This suggests that, when assessing safety-critical situations, users are more likely to believe a robot that they consider socially intelligent than one lacking social competency.

The results of this experiment, along with a discussion of the implications for safety assurance of assistive robots, will be published in December 2019 at the 11th International Conference on Social Robotics (Holthaus, 2019).

Papers

Body of Knowledge guidance - read the guidance created by the team on considering human/machine interactions
