Investigating Human Perceptions of Trust and Social Cues in Robots for Safe Human-Robot Interaction in Human-oriented Environments
Abstract
As robots increasingly take part in daily living activities, humans will have to
interact with them in domestic and other human-oriented environments. This thesis
envisages a future where autonomous robots could be used as home companions
to assist and collaborate with their human partners in unstructured environments
without the support of any roboticist or expert. To realise such a vision, it is important
to identify the factors (e.g. trust, participants' personalities and backgrounds)
that influence people to accept robots as companions and trust the robots to
look after their well-being. I am particularly interested in the possibility of robots
using social behaviours and natural communication as a repair mechanism to
positively influence humans’ sense of trust and companionship towards the robots.
The main reason is that trust can change over time due to different factors
(e.g. perceived erroneous robot behaviours). In this thesis, I provide guidelines
for a robot to regain human trust by adopting certain human-like behaviours.
Domestic robots can be expected to exhibit occasional mechanical, programming
or functional errors, as occurs with any other consumer electrical device. For
example, these might include software errors, dropping objects due to gripper
malfunctions, picking up the wrong object due to unclear camera images, or
faulty navigation due to noisy laser scanner data. It is therefore
important for a domestic robot to have acceptable interactive behaviour when
exhibiting and recovering from an error situation. In this context, several open
questions need to be addressed regarding both individuals’ perceptions of the errors
and robots, and the effects of these on people’s trust in robots.
As a first step, I investigated how the severity of the consequences and the timing
of a robot’s different types of erroneous behaviours during an interaction may have
different impacts on users' attitudes towards a domestic robot. I concluded that
there is a correlation between the magnitude of an error performed by the robot and
the corresponding loss of the human's trust in the robot. In particular, people's
trust was strongly affected by robot errors that had severe consequences.
This led me to investigate whether people's awareness of robots' functionalities may
affect their trust in a robot. I found that people's acceptance of and trust in a robot
may be affected differently by their knowledge of the robot's capabilities and limitations,
according to the participants' age and the robot's embodiment.
In order to deploy robots in the wild, strategies for mitigating the loss of and
regaining people's trust in robots in the event of errors need to be implemented. In the following
three studies, I assessed whether a robot with awareness of human social conventions
would increase people’s trust in the robot. My findings showed that people almost
blindly trusted a social and a non-social robot in scenarios with non-severe error
consequences. In contrast, people that interacted with a social robot did not trust
its suggestions in a scenario with a higher risk outcome.
Finally, I investigated the effects of robots' errors on people's trust in a robot over
time. The findings showed that participants’ judgement of a robot is formed during
the first stage of their interaction. Therefore, people are more inclined to lose trust
in a robot if it makes severe errors at the beginning of the interaction.
The findings from the Human-Robot Interaction experiments presented in this
thesis will contribute to an advanced understanding of the trust dynamics between
humans and robots, supporting long-lasting and successful collaboration.
Publication date
2020-10-12

Published version
https://doi.org/10.18745/th.23412
Other links
http://hdl.handle.net/2299/23412
Related items
Showing items related by title, author, creator and subject.
- Intrinsically Motivated Autonomy in Human-Robot Interaction: Human Perception of Predictive Information in Robots
  Scheunemann, Marcus M.; Salge, Christoph; Dautenhahn, Kerstin (Springer Nature Link, 2019-06-28). In this paper we present a fully autonomous and intrinsically motivated robot usable for HRI experiments. We argue that an intrinsically motivated approach based on the Predictive Information formalism, like the one presented ...
- Exploring robot etiquette: Refining a HRI home companion scenario based on feedback from two artists who lived with robots in the UH robot house
  Koay, K.L.; Walters, M.L.; May, A.; Dumitriu, A.; Christianson, B.; Burke, N.; Dautenhahn, K. (Springer Nature Link, 2013-12). This paper presents an exploratory Human-Robot Interaction study which investigated robot etiquette, in particular focusing on understanding the types and forms of robot behaviours that people might expect from a robot ...
- The Design Space for Robot Appearance and Behaviour for Social Robot Companions
  Walters, M.L. (2008-03-17). To facilitate necessary task-based interactions and to avoid annoying or upsetting people, a domestic robot will have to exhibit appropriate non-verbal social behaviour. Most current robots have the ability to sense and ...