Learning Actionlet Ensemble for 3D Human Action Recognition

IEEE Trans Pattern Anal Mach Intell. 2014 May;36(5):914-27. doi: 10.1109/TPAMI.2013.198.

Abstract

Human action recognition is an important yet challenging task. Human actions usually involve human-object interactions, highly articulated motions, high intra-class variations, and complicated temporal structures. Recently developed commodity depth sensors open up new possibilities for dealing with this problem by providing 3D depth data of the scene. This information not only enables powerful human motion capture, but also makes it possible to efficiently model human-object interactions and intra-class variations. In this paper, we propose to characterize human actions with a novel actionlet ensemble model, in which each actionlet represents the interactions of a subset of human joints. The proposed model is robust to noise, invariant to translational and temporal misalignment, and capable of characterizing both human motion and human-object interactions. We evaluate the proposed approach on three challenging action recognition datasets captured by Kinect devices, a multiview action recognition dataset captured with Kinect devices, and a dataset captured by a motion capture system. The experimental evaluations show that the proposed approach outperforms state-of-the-art algorithms.
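To make the actionlet idea concrete, the sketch below illustrates one plausible per-frame feature for a subset of skeleton joints: pairwise relative 3D joint positions within the subset. This is a minimal illustrative sketch under stated assumptions, not the authors' implementation; the `actionlet_features` helper, the joint indices, and the random placeholder skeleton are all hypothetical.

    import numpy as np

    # Illustrative sketch only (not the paper's code): an "actionlet" is
    # treated here as a subset of skeleton joints, and the per-frame feature
    # is the set of pairwise relative 3D positions within that subset --
    # one common way to encode interactions among a group of joints.

    def actionlet_features(skeleton, joint_subset):
        """skeleton: (T, J, 3) array of 3D joint positions over T frames.
        joint_subset: indices of the joints forming the actionlet.
        Returns a (T, P, 3) array of pairwise relative positions,
        where P = K*(K-1)/2 for K joints in the subset."""
        sub = skeleton[:, joint_subset, :]                 # (T, K, 3)
        feats = []
        for i in range(len(joint_subset)):
            for j in range(i + 1, len(joint_subset)):
                feats.append(sub[:, i, :] - sub[:, j, :])  # relative position
        return np.stack(feats, axis=1)                     # (T, P, 3)

    # Example: a hypothetical right-arm actionlet (shoulder, elbow, wrist, hand)
    T, J = 60, 20                        # 60 frames, 20 Kinect skeleton joints
    skeleton = np.random.randn(T, J, 3)  # placeholder motion data
    arm = [4, 5, 6, 7]                   # assumed joint indices, for illustration
    f = actionlet_features(skeleton, arm)
    print(f.shape)                       # (60, 6, 3)

Under this reading, an ensemble would score an action by combining such features across several discriminative joint subsets rather than over the full skeleton, which is what makes the representation tolerant of noisy or irrelevant joints.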

Publication types

  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Actigraphy / methods
  • Algorithms
  • Humans
  • Image Enhancement / methods
  • Image Interpretation, Computer-Assisted / methods
  • Imaging, Three-Dimensional / methods*
  • Movement / physiology*
  • Pattern Recognition, Automated / methods*
  • Photography / methods*
  • Reproducibility of Results
  • Sensitivity and Specificity
  • Subtraction Technique
  • Video Recording / methods*
  • Whole Body Imaging / methods*