
dc.contributor.author: Balkenius, Christian
dc.contributor.author: Cañamero, Lola
dc.contributor.author: Pärnamets, Philip
dc.contributor.author: Johansson, Birger
dc.contributor.author: Butz, Martin
dc.contributor.author: Olsson, Andreas
dc.date.accessioned: 2017-02-01T18:00:46Z
dc.date.available: 2017-02-01T18:00:46Z
dc.date.issued: 2016-11-03
dc.identifier.citation: Balkenius, C, Cañamero, L, Pärnamets, P, Johansson, B, Butz, M & Olsson, A 2016, 'Outline of a sensory-motor perspective on intrinsically moral agents', Adaptive Behavior, vol. 24, no. 5, pp. 306-319. https://doi.org/10.1177/1059712316667203
dc.identifier.issn: 1059-7123
dc.identifier.uri: http://hdl.handle.net/2299/17595
dc.description: This is the accepted version of the following article: Christian Balkenius, Lola Cañamero, Philip Pärnamets, Birger Johansson, Martin V Butz, and Andreas Olsson, ‘Outline of a sensory-motor perspective on intrinsically moral agents’, Adaptive Behavior, Vol 24(5): 306-319, October 2016, which has been published in final form at DOI: https://doi.org/10.1177/1059712316667203. Published by SAGE. © The Author(s) 2016.
dc.description.abstract: We propose that moral behaviour of artificial agents could (and should) be intrinsically grounded in their own sensory-motor experiences. Such an ability depends critically on seven types of competencies. First, intrinsic morality should be grounded in the internal values of the robot arising from its physiology and embodiment. Second, the moral principles of robots should develop through their interactions with the environment and with other agents. Third, we claim that the dynamics of moral (or social) emotions closely follows that of other non-social emotions used in valuation and decision making. Fourth, we explain how moral emotions can be learned from the observation of others. Fifth, we argue that to assess social interaction, a robot should be able to learn about and understand responsibility and causation. Sixth, we explain how mechanisms that can learn the consequences of actions are necessary for a robot to make moral decisions. Seventh, we describe how the moral evaluation mechanisms outlined can be extended to situations where a robot should understand the goals of others. Finally, we argue that these competencies lay the foundation for robots that can feel guilt, shame and pride, that have compassion and that know how to assign responsibility and blame.
dc.format.extent: 14
dc.format.extent: 446045
dc.language.iso: eng
dc.relation.ispartof: Adaptive Behavior
dc.subject: autonomous robots
dc.subject: embodied emotions
dc.subject: sensory-motor grounding
dc.subject: embodied interaction
dc.subject: empathy
dc.subject: intrinsic morality
dc.title: Outline of a sensory-motor perspective on intrinsically moral agents
dc.contributor.institution: School of Computer Science
dc.contributor.institution: Centre for Computer Science and Informatics Research
dc.contributor.institution: Adaptive Systems
dc.description.status: Peer reviewed
dc.identifier.url: http://adb.sagepub.com/content/24/5/306.abstract
rioxxterms.versionofrecord: 10.1177/1059712316667203
rioxxterms.type: Journal Article/Review
herts.preservation.rarelyaccessed: true

