Curr Biol. 2019 Jun 17;29(12):2066-2074.e5. doi: 10.1016/j.cub.2019.05.013. Epub 2019 May 30.

An Analysis of Decision under Risk in Rats.

Author information

1. Princeton Neuroscience Institute, Princeton University, Washington Road, Princeton, NJ 08544, USA. Electronic address: constantinople@nyu.edu.
2. Princeton Neuroscience Institute, Princeton University, Washington Road, Princeton, NJ 08544, USA.
3. Princeton Neuroscience Institute, Princeton University, Washington Road, Princeton, NJ 08544, USA; Department of Molecular Biology, Princeton University, Washington Road, Princeton, NJ 08544, USA; Howard Hughes Medical Institute, Princeton University, Washington Road, Princeton, NJ 08544, USA.

Abstract

In 1979, Daniel Kahneman and Amos Tversky published a ground-breaking paper titled "Prospect Theory: An Analysis of Decision under Risk," which presented a behavioral economic theory that accounted for the ways in which humans deviate from economists' normative workhorse model, Expected Utility Theory [1, 2]. For example, people exhibit probability distortion (they overweight low probabilities), loss aversion (losses loom larger than gains), and reference dependence (outcomes are evaluated as gains or losses relative to an internal reference point). We found that rats exhibited many of these same biases, using a task in which rats chose between guaranteed and probabilistic rewards. However, prospect theory assumes stable preferences in the absence of learning, an assumption at odds with alternative frameworks such as animal learning theory and reinforcement learning [3-7]. Rats also exhibited trial history effects, consistent with ongoing learning. A reinforcement learning model in which state-action values were updated by the subjective value of outcomes according to prospect theory reproduced rats' nonlinear utility and probability weighting functions and also captured trial-by-trial learning dynamics.
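The model described above can be illustrated with a minimal sketch. This is not the authors' implementation; it is a generic delta-rule learner in which outcomes are valued through standard one-parameter prospect-theory functions (a concave power utility and the Tversky-Kahneman probability weighting function), with all parameter values (`alpha`, `gamma`, `lr`) chosen arbitrarily for illustration.

```python
def utility(x, alpha=0.8):
    """Concave utility for gains: u(x) = x**alpha with alpha < 1,
    giving diminishing sensitivity to larger rewards."""
    return x ** alpha

def prob_weight(p, gamma=0.6):
    """One-parameter probability weighting function (Tversky & Kahneman, 1992):
    overweights low probabilities and underweights high ones."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def subjective_value(reward, p):
    """Subjective value of a risky option: w(p) * u(reward)."""
    return prob_weight(p) * utility(reward)

def update_value(q, outcome, lr=0.1):
    """Delta-rule update: move the stored action value toward the
    subjective utility of the realized outcome, so preferences drift
    with trial history rather than staying fixed."""
    return q + lr * (utility(outcome) - q)
```

For example, with these parameters a 5% chance is weighted as if it were roughly 13% (`prob_weight(0.05)` ≈ 0.13), while a 90% chance is weighted as only about 70%, reproducing the probability-distortion pattern described in the abstract; repeated calls to `update_value` after each trial give the history dependence that pure prospect theory lacks.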

KEYWORDS:

computational model; decision-making; prospect theory; rat behavior; reinforcement learning; reward; subjective value

PMID: 31155352
DOI: 10.1016/j.cub.2019.05.013
