Inferring dependencies in Embodiment-based modular reinforcement learning

Jacob, D., Polani, D. and Nehaniv, C.L. (2005) Inferring dependencies in Embodiment-based modular reinforcement learning. TAROS, 2005. pp. 103-110.

The state-spaces needed to describe realistic physical embodied agents are extremely large, which presents a serious challenge to classical reinforcement learning schemes. In previous work (Jacob et al., 2005a; Jacob et al., 2005b) we introduced our EMBER (EMbodiment-Based modulaR) reinforcement learning system, which describes a novel method for decomposing agents into modules based on the agent's embodiment. This modular decomposition factorises the state-space and dramatically improves performance in unknown and dynamic environments. However, while there are great advantages to be gained from a factorised state-space, the question of dependencies cannot be ignored. We present a development of the work reported in (Jacob et al., 2004) which shows, in a simple example, how dependencies may be identified using a heuristic approach. Results show that the system is able to quickly discover and act upon dependencies, even where they are neither simple nor deterministic.
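The factorised decomposition described above can be illustrated with a minimal sketch. This is not the EMBER implementation from the paper; it is a generic modular Q-learning toy in which each hypothetical module keeps its own Q-table over its own small state component, and per-module action values are combined by summation at selection time. All class and parameter names here are illustrative assumptions.

```python
import random
from collections import defaultdict

class ModularQLearner:
    """Toy modular Q-learner: one Q-table per module, each defined over
    that module's own (small) state component. Action preferences are
    combined across modules by summing Q-values. Illustrative sketch only;
    not the EMBER system itself."""

    def __init__(self, n_modules, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # One Q-table per module, keyed by (module_state, action).
        self.q = [defaultdict(float) for _ in range(n_modules)]

    def select_action(self, module_states):
        """Epsilon-greedy over the summed per-module Q-values."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        def combined(a):
            return sum(q[(s, a)] for q, s in zip(self.q, module_states))
        return max(self.actions, key=combined)

    def update(self, module_states, action, reward, next_module_states):
        """Independent Q-learning update in every module; this is where
        ignoring cross-module dependencies can mislead the agent."""
        for q, s, s2 in zip(self.q, module_states, next_module_states):
            best_next = max(q[(s2, a)] for a in self.actions)
            q[(s, action)] += self.alpha * (
                reward + self.gamma * best_next - q[(s, action)]
            )
```

The factorisation keeps each table small (the size of one module's state component rather than the product of all of them), but because each module learns independently, dependencies between components are invisible to it, which is exactly the problem the heuristic dependency-detection addresses.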


