Show simple item record

dc.contributor.author  Erdeniz, B.
dc.contributor.author  Atalay, N.B.
dc.identifier.citation  Erdeniz, B. & Atalay, N.B. 2010, 'Simulating probability learning and probabilistic reversal learning using the attention-gated reinforcement learning (AGREL) model', in Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), no. 11593723, IEEE, pp. 1-6.
dc.identifier.other  PURE: 92221
dc.identifier.other  PURE UUID: a9221472-88bc-41b0-9cf7-c87fafeb324e
dc.identifier.other  dspace: 2299/5717
dc.identifier.other  Scopus: 79959458600
dc.description  “This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.” “Copyright IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.” [Full text of this article is not available in the UHRA]
dc.description.abstract  In a probability learning task, participants estimate probabilistic reward contingencies; this task has been used extensively to study instrumental conditioning with partial reinforcement. In the probabilistic reversal learning task, the probabilistic reward contingencies are reversed between options in the middle of the experiment to measure how well people adapt to new contingencies. In this work, we used the attention-gated reinforcement learning (AGREL) model (Roelfsema & Van Ooyen, 2005) to simulate how people learn the probabilistic relationship between stimulus-reward pairs in probability and reversal learning tasks. The AGREL algorithm brings together two important aspects of learning in a single neural network scheme: (1) the effect of unexpected outcomes on learning and (2) the effect of top-down (selective) attention on weight updating. Despite its importance in the learning literature, AGREL had not previously been tested on these well-known learning tasks. The results of the first simulation showed that, in a binary-choice probability learning experiment, an AGREL model can simulate different learning strategies, such as probability matching and maximizing. Second, we simulated a probabilistic reversal learning experiment with the same AGREL model, and the results showed that the model dynamically adapted to new contingencies. Furthermore, we evaluated the effect of learning rate on the model's adaptation to the reversed contingencies by plotting the interphase dynamics. These results show that the AGREL model reproduces the traditional findings observed in probability and reversal learning experiments, and that it can be further developed to study the role of dopamine in learning and used in model-based fMRI research.  en
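The two ingredients the abstract names — learning driven by reward prediction errors, gated by attention to the selected option — can be illustrated with a minimal sketch. This is not the authors' code or the full AGREL network: it is a simplified, hypothetical two-option bandit in which only the chosen option's value is updated (the "attentional gate"), scaled by the prediction error, with an optional mid-run reversal of the reward contingencies. All parameter names and values here are illustrative assumptions.

```python
import math
import random

def agrel_like_learning(p_reward_a=0.8, beta=3.0, lr=0.1,
                        trials=1000, reversal_at=None, seed=42):
    """Simplified attention-gated RL sketch (illustrative, not AGREL itself).

    Option A pays off with probability p_reward_a, option B with
    1 - p_reward_a. Actions are chosen by softmax over the two value
    estimates; only the chosen option's weight is updated, scaled by
    the reward prediction error delta = r - w[choice].
    """
    rng = random.Random(seed)
    w = [0.0, 0.0]          # value estimates for options A and B
    choices = []
    for t in range(trials):
        if reversal_at is not None and t == reversal_at:
            p_reward_a = 1.0 - p_reward_a   # reverse the contingencies
        # softmax action selection (inverse temperature beta)
        ea, eb = math.exp(beta * w[0]), math.exp(beta * w[1])
        choice = 0 if rng.random() < ea / (ea + eb) else 1
        # probabilistic reward for the chosen option
        p = p_reward_a if choice == 0 else 1.0 - p_reward_a
        r = 1.0 if rng.random() < p else 0.0
        # prediction-error-driven update, gated to the attended option
        delta = r - w[choice]
        w[choice] += lr * delta
        choices.append(choice)
    return w, choices

w, choices = agrel_like_learning()
frac_a = choices.count(0) / len(choices)
```

With these settings the learner comes to prefer the richer option (a maximizing-like strategy); lowering `beta` pushes choice proportions toward probability matching, and passing `reversal_at` shows the value estimates re-adapting after the contingencies flip.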
dc.relation.ispartof  Procs of IEEE International Joint Conference on Neural Networks (IJCNN) No.11593723
dc.title  Simulating probability learning and probabilistic reversal learning using the attention-gated reinforcement learning (AGREL) model  en
dc.contributor.institution  School of Computer Science

