
dc.contributor.author: Van Dijk, S.G.
dc.contributor.author: Polani, D.
dc.date.accessioned: 2011-11-01T15:01:09Z
dc.date.available: 2011-11-01T15:01:09Z
dc.date.issued: 2011-01-01
dc.identifier.citation: Van Dijk, S.G. & Polani, D. 2011, Grounding subgoals in information transitions. In Procs of 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning: ADPRL 2011. Symposium Series on Computational Intelligence, Institute of Electrical and Electronics Engineers (IEEE), pp. 105-111, 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), Paris, France, 11/04/11. https://doi.org/10.1109/ADPRL.2011.5967384
dc.identifier.citation: conference
dc.identifier.isbn: 978-1-4244-9887-1
dc.identifier.other: ORCID: /0000-0002-3233-5847/work/86098031
dc.identifier.uri: http://hdl.handle.net/2299/6854
dc.description: "This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder." "Copyright IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE."
dc.description.abstract: In reinforcement learning problems, the construction of subgoals has been identified as an important step to speed up learning and to enable skill transfer. For this purpose, one typically extracts states from various saliency properties of an MDP transition graph, most notably bottleneck states. Here we introduce an alternative approach to this problem: assuming a family of MDPs with multiple goals but with a fixed transition graph, we introduce the relevant goal information as the amount of Shannon information that the agent needs to maintain about the current goal at a given state to select the appropriate action. We show that there are distinct transition states in the MDP at which new relevant goal information has to be considered for selecting the next action. We argue that these transition states can be interpreted as subgoals for the current task class, and we use these states to automatically create a hierarchical policy, according to the well-established Options model for hierarchical reinforcement learning.
dc.format.extent: 7
dc.format.extent: 203820
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.ispartof: Procs of 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning
dc.relation.ispartofseries: Symposium Series on Computational Intelligence
dc.title: Grounding subgoals in information transitions
dc.contributor.institution: School of Computer Science
dc.contributor.institution: Science & Technology Research Institute
dc.identifier.url: http://www.scopus.com/inward/record.url?scp=80052250027&partnerID=8YFLogxK
rioxxterms.versionofrecord: 10.1109/ADPRL.2011.5967384
rioxxterms.type: Other
herts.preservation.rarelyaccessed: true
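
As a minimal illustrative sketch of the idea in the abstract (an assumption on our part, not the paper's exact procedure): the goal information an agent needs at a state can be measured as the mutual information I(G; A | S = s) between the goal and the optimal action, here computed under a deterministic optimal-action table and a uniform goal prior. States where this quantity rises are natural candidates for the information-transition subgoals the abstract describes. The optimal_action table and goal_information helper below are hypothetical names introduced only for this example.

from collections import Counter
from math import log2

# Hypothetical toy input: maps (state, goal) to the optimal action.
optimal_action = {
    ("s0", "g0"): "right", ("s0", "g1"): "right",  # same action for every goal
    ("s1", "g0"): "up",    ("s1", "g1"): "down",   # action depends on the goal
}

def goal_information(state, goals, optimal_action):
    """I(G; A | S = state) in bits, with G uniform over `goals`.

    Because the optimal action is deterministic given (state, goal),
    H(A | G, S = state) = 0, so the mutual information reduces to the
    entropy of the action distribution induced by the goal prior.
    """
    actions = [optimal_action[(state, g)] for g in goals]
    counts = Counter(actions)
    n = len(actions)
    return -sum((c / n) * log2(c / n) for c in counts.values())

goals = ["g0", "g1"]
for s in ("s0", "s1"):
    print(s, goal_information(s, goals, optimal_action))
# s0 needs 0 bits of goal information, while s1 needs 1 bit: s1 is a
# candidate state where new goal information must be taken into account.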

