Neurosci Res. 2012 Dec;74(3-4):177-83. doi: 10.1016/j.neures.2012.09.007. Epub 2012 Oct 13.

Learning to represent reward structure: a key to adapting to complex environments.

Author information

1. Laboratory for Integrated Theoretical Neuroscience, RIKEN Brain Science Institute, Wako, Saitama 351-0198, Japan. hiro@brain.riken.jp

Abstract

Predicting outcomes is a critical ability of humans and animals. The dopamine reward prediction error hypothesis, the driving force behind recent progress in neural "value-based" decision making, states that dopamine activity encodes the signal for learning to predict reward, namely the difference between the actual and predicted reward, called the reward prediction error. However, this hypothesis and its underlying assumptions restrict the prediction and its error to quantities reactively triggered by momentary environmental events. Reviewing those assumptions and some of the latest findings, we suggest that the internal state representation is itself learned to reflect the reward structure of the environment, and we propose a new hypothesis - the dopamine reward structural learning hypothesis - in which dopamine activity encodes multiplex signals for learning to represent reward structure in the internal state, leading to better reward prediction.
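
The reward prediction error described above is the quantity at the heart of temporal-difference (TD) learning: the difference between what actually happened (reward plus the discounted value of the next state) and what was predicted (the value of the current state). The following minimal Python sketch illustrates that computation under generic assumptions; it is not the authors' model, and the state names, learning rate alpha, and discount factor gamma are illustrative choices only.

    # Minimal TD(0) sketch of a reward prediction error (illustrative, not the authors' model).
    alpha, gamma = 0.1, 0.9                     # learning rate and temporal discount (assumed values)
    values = {"cue": 0.0, "delay": 0.0, "reward_port": 0.0}   # predicted values per state

    def td_update(state, next_state, reward):
        # Prediction error: actual outcome (reward + discounted next-state value)
        # minus the current prediction for this state.
        delta = reward + gamma * values[next_state] - values[state]
        values[state] += alpha * delta          # learn toward better reward prediction
        return delta

    # One simulated trial: cue -> delay -> reward delivery at the end.
    episode = [("cue", "delay", 0.0), ("delay", "reward_port", 1.0)]
    for state, next_state, reward in episode:
        print(state, "prediction error:", round(td_update(state, next_state, reward), 3))

In this reactive picture the error is triggered by momentary events (the cue, the reward); the abstract's structural learning hypothesis concerns, in addition, how the state representation itself (the keys of the value table above) comes to reflect the environment's reward structure.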

Copyright © 2012 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

PMID: 23069349 [PubMed - indexed for MEDLINE]
PMCID: PMC3513573
Free PMC Article


Full text links: Elsevier Science; PubMed Central.