eLife. 2019 Nov 11;8:e47463. doi: 10.7554/eLife.47463.

One-shot learning and behavioral eligibility traces in sequential decision making.

Author information

1. Brain-Mind-Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
2. School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
3. Laboratory of Psychophysics, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
4. Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland.
# Contributed equally

Abstract

In many daily tasks, we make multiple decisions before reaching a goal. Learning such sequences of decisions requires a mechanism that links earlier actions to later reward. Reinforcement learning (RL) theory suggests two classes of algorithms that solve this credit-assignment problem: in classic temporal-difference learning, earlier actions receive reward information only after multiple repetitions of the task, whereas models with eligibility traces reinforce entire sequences of actions from a single experience (one-shot). Here, we show one-shot learning of sequences. We developed a novel paradigm to directly observe which actions and states along a multi-step sequence are reinforced after a single reward. By focusing our analysis on those states for which RL with and without eligibility traces make qualitatively distinct predictions, we find direct behavioral (choice probability) and physiological (pupil dilation) signatures of reinforcement learning with eligibility traces across multiple sensory modalities.
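To illustrate the algorithmic contrast described above, the following minimal sketch (not the authors' model or task; chain length and the hyperparameters alpha, gamma, and lambda are arbitrary assumptions) compares a tabular TD(0) update with a TD(lambda)-style eligibility-trace update on a simple state chain rewarded only at its end. After a single rewarded episode, the plain TD rule has updated only the state immediately preceding the reward, whereas the traced rule has propagated credit to every state visited along the sequence.

import numpy as np

n_states, alpha, gamma, lam = 5, 0.5, 0.9, 0.9  # assumed hyperparameters

def run_episode(use_trace):
    q = np.zeros(n_states)      # value of the single "advance" action in each state
    e = np.zeros(n_states)      # eligibility trace
    for s in range(n_states):   # one pass along the chain, reward only at the end
        r = 1.0 if s == n_states - 1 else 0.0
        next_q = q[s + 1] if s + 1 < n_states else 0.0
        delta = r + gamma * next_q - q[s]   # TD error
        if use_trace:
            e[s] += 1.0                     # mark the visited state as eligible
            q += alpha * delta * e          # all eligible states share the update
            e *= gamma * lam                # traces decay over time
        else:
            q[s] += alpha * delta           # only the current state is updated
    return q

print("TD(0) after one episode:     ", np.round(run_episode(False), 3))
print("TD(lambda) after one episode:", np.round(run_episode(True), 3))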

KEYWORDS:

eligibility trace; human; human learning; neuroscience; pupillometry; reinforcement learning; reward prediction error; sequential decision making

PMID: 31709980
DOI: 10.7554/eLife.47463