Cogn Affect Behav Neurosci. 2014 Jun;14(2):473-92. doi: 10.3758/s13415-014-0277-8.

Model-based and model-free Pavlovian reward learning: revaluation, revision, and revelation.

Author information

Gatsby Computational Neuroscience Unit, University College London, London, UK. dayan@gatsby.ucl.ac.uk

Abstract

Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations, and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction have typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response, and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation.
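The distinction the abstract draws can be illustrated with a minimal sketch (not the paper's model; all names and parameters here are illustrative assumptions). A model-free learner caches a value via a Rescorla-Wagner-style prediction-error update, so it only reflects past experience; a model-based learner looks up the outcome's current utility prospectively, so a revaluation (e.g., devaluation by satiety) changes its prediction immediately while the cached value lags.

```python
def model_free_value(rewards, alpha=0.1):
    """Cached value learned from retrospective experience
    (Rescorla-Wagner / TD(0)-style prediction-error update)."""
    v = 0.0
    for r in rewards:
        v += alpha * (r - v)  # move estimate toward each observed reward
    return v

def model_based_value(outcome, utility):
    """Prospective evaluation: combine the learned identity of the
    predicted outcome with its *current* utility."""
    return utility[outcome]

# Training: a cue reliably predicts food, worth 1.0 while hungry.
history = [1.0] * 50
v_cached = model_free_value(history)          # close to 1.0 after training

# Revaluation: the animal is sated, so food's utility drops to 0.
utility = {"food": 0.0}
v_prospective = model_based_value("food", utility)  # 0.0 immediately

# The cached (model-free) value still reflects stale experience,
# while the model-based value tracks the revalued outcome at once.
```

This mirrors the revaluation logic in the abstract: only the model-based prediction is sensitive to prevailing bodily state without further learning.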

PMID: 24647659
PMCID: PMC4074442
DOI: 10.3758/s13415-014-0277-8
[Indexed for MEDLINE]
