Neuron. 2010 May 27;66(4):585-95. doi: 10.1016/j.neuron.2010.04.016.

States versus rewards: dissociable neural prediction error signals underlying model-based and model-free reinforcement learning.

Gläscher J1, Daw N, Dayan P, O'Doherty JP.

Author information

1 Division of Humanities and Social Sciences, California Institute of Technology, Pasadena, CA 91125, USA. glascher@hss.caltech.edu

Abstract

Reinforcement learning (RL) uses sequential experience with situations ("states") and outcomes to assess actions. Whereas model-free RL uses this experience directly, in the form of a reward prediction error (RPE), model-based RL uses it indirectly, building a model of the state transition and outcome structure of the environment, and evaluating actions by searching this model. A state prediction error (SPE) plays a central role, reporting discrepancies between the current model and the observed state transitions. Using functional magnetic resonance imaging in humans solving a probabilistic Markov decision task, we found the neural signature of an SPE in the intraparietal sulcus and lateral prefrontal cortex, in addition to the previously well-characterized RPE in the ventral striatum. This finding supports the existence of two unique forms of learning signal in humans, which may form the basis of distinct computational strategies for guiding behavior.
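
To make the two signals concrete, below is a minimal tabular sketch in Python. It is an illustration only, not the paper's fitted model: the variable names, the learning rates (alpha, eta), the discount factor (gamma), and the Q-learning form of the RPE are all assumptions for this example (the paper's model-free learner and parameters may differ).

```python
import numpy as np

# Illustrative sketch of the two learning signals described in the abstract.
# All parameters here are placeholder assumptions, not values from the paper.
n_states, n_actions = 3, 2
alpha = 0.1   # model-free learning rate (assumed)
eta = 0.1     # transition-model learning rate (assumed)
gamma = 0.9   # discount factor (assumed)

Q = np.zeros((n_states, n_actions))             # model-free action values
T = np.full((n_states, n_actions, n_states),    # model-based transition model,
            1.0 / n_states)                     # initialized to uniform

def model_free_update(s, a, r, s_next):
    """Reward prediction error (RPE): the discrepancy between received
    and predicted reward, used to update action values directly."""
    rpe = r + gamma * Q[s_next].max() - Q[s, a]
    Q[s, a] += alpha * rpe
    return rpe

def model_based_update(s, a, s_next):
    """State prediction error (SPE): the discrepancy between the observed
    state transition and the current model, used to update the model."""
    spe = 1.0 - T[s, a, s_next]   # surprise about the observed next state
    T[s, a] *= (1.0 - eta)        # decay all transition estimates...
    T[s, a, s_next] += eta        # ...and shift probability mass to the
    return spe                    # observed one (rows stay normalized)
```

In this sketch the RPE updates values directly, whereas the SPE updates only the transition model T; a model-based learner would then evaluate actions by searching T together with a learned outcome function, rather than by reading off cached Q values.
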

Comment in

  • Nature. 2010 Jul 29;466(7306):535.
PMID: 20510862
PMCID: PMC2895323
DOI: 10.1016/j.neuron.2010.04.016
[Indexed for MEDLINE]
Free PMC Article