
dc.contributor.author  Cos-Aguilera, Ignasi
dc.contributor.author  Cañamero, Lola
dc.contributor.author  Hayes, Gillian M.
dc.contributor.author  Gillies, Andrew
dc.date.accessioned  2013-11-21T12:22:01Z
dc.date.available  2013-11-21T12:22:01Z
dc.date.issued  2013
dc.identifier.citation  Cos-Aguilera, I., Cañamero, L., Hayes, G. M. & Gillies, A. 2013, 'Hedonic Value: Enhancing Adaptation for Motivated Agents', Adaptive Behavior, vol. 21, no. 6, pp. 465-483. https://doi.org/10.1177/1059712313486817
dc.identifier.issn  1059-7123
dc.identifier.uri  http://hdl.handle.net/2299/12150
dc.description.abstract  Reinforcement learning (RL) in artificial agents is typically used to produce behavioural responses as a function of the reward obtained through interaction with the environment. When the problem is to learn the shortest path to a goal, it is common to use reward functions that yield a fixed value after each decision, for example a positive value when the target location is reached and a negative one at each intermediate step. However, this fixed strategy may be too simplistic for agents that must adapt to dynamic environments in which resources vary over time. By contrast, there is significant evidence that most living beings internally modulate reward value as a function of their context to expand their range of adaptivity. Inspired by the potential of this operation, we review its underlying processes and introduce a simplified formalisation for artificial agents. The performance of this formalism is tested by monitoring the adaptation of an agent endowed with a motivated actor-critic model, incorporating our formalisation of value and constrained by physiological stability, to environments with different resource distributions. Our main result shows that the manner in which reward is internally processed as a function of the agent's motivational state strongly influences both the adaptivity of the behavioural cycles generated and the agent's physiological stability.  [en]
dc.format.extent  341416
dc.language.iso  eng
dc.relation.ispartof  Adaptive Behavior
dc.subject  Hedonic Value, Motivation, Reinforcement Learning, Actor-Critic, Grounding
dc.title  Hedonic Value: Enhancing Adaptation for Motivated Agents  [en]
dc.contributor.institution  Centre for Computer Science and Informatics Research
dc.contributor.institution  School of Computer Science
dc.contributor.institution  Science & Technology Research Institute
dc.contributor.institution  Adaptive Systems
dc.description.status  Peer reviewed
rioxxterms.versionofrecord  10.1177/1059712313486817
rioxxterms.type  Journal Article/Review
herts.preservation.rarelyaccessed  true
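
The abstract above contrasts a fixed, context-independent reward (a positive value at the goal, a negative one per intermediate step) with a reward that is internally modulated by the agent's motivational state. The short Python sketch below only illustrates that contrast and is not the formalisation from the article: the function names (fixed_reward, hedonic_reward), the drive and set-point variables, and the linear scaling by physiological deficit are assumptions made for this example.

# Illustrative sketch only; not the article's formalisation.
# `drive_level`, `set_point` and the linear deficit scaling are assumed here.

def fixed_reward(reached_goal: bool) -> float:
    """Classic shortest-path reward: fixed +1 at the target, -0.1 per intermediate step."""
    return 1.0 if reached_goal else -0.1

def hedonic_reward(resource_gain: float, drive_level: float, set_point: float = 1.0) -> float:
    """Reward scaled by the current physiological deficit: the same resource
    is worth more when the corresponding drive is far from its set point."""
    deficit = max(0.0, set_point - drive_level)  # how depleted the agent currently is
    return resource_gain * deficit               # hedonic value of the same outcome

if __name__ == "__main__":
    # The same resource gain yields different rewards depending on motivational state.
    for drive in (0.2, 0.9):
        print(f"drive={drive:.1f}  hedonic reward={hedonic_reward(1.0, drive):.2f}")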

