Nat Protoc. 2019 Jul;14(7):2152-2176. doi: 10.1038/s41596-019-0176-0. Epub 2019 Jun 21.

Using DeepLabCut for 3D markerless pose estimation across species and behaviors.

Author information

1. Rowland Institute at Harvard, Harvard University, Cambridge, MA, USA.
2. Department of Molecular & Cellular Biology, Harvard University, Cambridge, MA, USA.
3. Department of Electrical Engineering, University of Cape Town, Cape Town, South Africa.
4. Tübingen AI Center & Centre for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany.
5. Rowland Institute at Harvard, Harvard University, Cambridge, MA, USA. mackenzie@post.harvard.edu.

Abstract

Noninvasive behavioral tracking of animals during experiments is critical to many scientific pursuits. Extracting the poses of animals without using markers is often essential to measuring behavioral effects in biomechanics, genetics, ethology, and neuroscience. However, extracting detailed poses without markers in dynamically changing backgrounds has been challenging. We recently introduced an open-source toolbox called DeepLabCut that builds on a state-of-the-art human pose-estimation algorithm to allow a user to train a deep neural network with limited training data to precisely track user-defined features, with accuracy matching human labeling. Here, we provide an updated toolbox, developed as a Python package, that includes new features such as graphical user interfaces (GUIs), performance improvements, and active-learning-based network refinement. We provide a step-by-step procedure for using DeepLabCut that guides the user in creating a tailored, reusable analysis pipeline with a graphics processing unit (GPU) in 1-12 h (depending on frame size). Additionally, we provide Docker environments and Jupyter Notebooks that can be run on cloud resources such as Google Colaboratory.

PMID: 31227823
DOI: 10.1038/s41596-019-0176-0
[Indexed for MEDLINE]
