Neuroimage. 2016 Jul 1;134:180-191. doi: 10.1016/j.neuroimage.2016.04.019. Epub 2016 Apr 13.

Chasing probabilities - Signaling negative and positive prediction errors across domains.

Author information

1. Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, Hvidovre 2650, Denmark. Electronic address: davidm@drcmr.dk.
2. Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, Hvidovre 2650, Denmark.
3. Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, Hvidovre 2650, Denmark; Department of Neurology, Copenhagen University Hospital Bispebjerg, Copenhagen 2400, Denmark.

Abstract

Adaptive actions build on internal probabilistic models of possible outcomes that are tuned according to the errors of their predictions when an actual outcome is experienced. Prediction errors (PEs) inform choice behavior across a diversity of outcome domains and dimensions, yet neuroimaging studies have so far only investigated such signals in singular experimental contexts. It is thus unclear whether the neuroanatomical distribution of PE encoding reported previously pertains to computational features that are invariant with respect to outcome valence, sensory domain, or some combination of the two. We acquired functional MRI data while volunteers performed four probabilistic reversal learning tasks that differed in the valence (reward-seeking versus punishment-avoidance) and sensory domain (abstract symbols versus facial expressions) of outcomes. We found that ventral striatum and frontopolar cortex coded increasingly positive PEs, whereas dorsal anterior cingulate cortex (dACC) traced increasingly negative PEs, irrespective of the outcome dimension. Individual reversal behavior was unaffected by context manipulations and was predicted by activity in dACC and right inferior frontal gyrus (IFG): the stronger the response to negative PEs in these areas, the weaker the tendency to reverse choice behavior in response to negative events, suggesting that these regions enforce a rule-based strategy across outcome dimensions. Outcome valence influenced PE-related activity in left amygdala, IFG, and dorsomedial prefrontal cortex, where activity selectively scaled with increasingly positive PEs in the reward-seeking but not the punishment-avoidance context, irrespective of sensory domain. Left amygdala displayed an additional influence of sensory domain: in the context of avoiding punishment, amygdala activity increased with increasingly negative PEs, but only for facial stimuli, indicating an integration of outcome valence and sensory domain during probabilistic choices.
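The prediction-error signal at the heart of the study can be illustrated with a minimal Rescorla-Wagner value update, a standard reinforcement-learning sketch rather than the authors' actual computational model: the PE is the difference between the received outcome and the current expectation, and it is positive when the outcome is better than expected and negative when it is worse.

```python
def rescorla_wagner(outcomes, alpha=0.5):
    """Track the expected value of one option across trials.

    outcomes : sequence of 0/1 outcomes (e.g. reward delivered or omitted)
    alpha    : learning rate (illustrative value, not from the paper)

    Returns the per-trial prediction errors: PE > 0 means the outcome
    was better than expected, PE < 0 means it was worse.
    """
    value, pes = 0.0, []
    for r in outcomes:
        pe = r - value        # prediction error
        value += alpha * pe   # update expectation toward the outcome
        pes.append(pe)
    return pes

# Three rewarded trials followed by three omissions, as after a reversal:
pes = rescorla_wagner([1, 1, 1, 0, 0, 0], alpha=0.5)
```

After the simulated reversal the PEs flip sign, which is the quantity the fMRI analyses regress against activity in ventral striatum (positive PEs) and dACC (negative PEs).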

KEYWORDS:

Domain; Probabilistic reversal learning; Reinforcement learning; Valence

[Indexed for MEDLINE]