Silhouette Orientation Volumes for Efficient Fall Detection in Depth Videos

IEEE J Biomed Health Inform. 2017 May;21(3):756-763. doi: 10.1109/JBHI.2016.2570300. Epub 2016 May 18.

Abstract

A novel method to detect human falls in depth videos is presented in this paper. A fast and robust shape sequence descriptor, namely the Silhouette Orientation Volume (SOV), is used to represent actions and classify falls. The SOV descriptor provides high classification accuracy even when combined with simple models, such as Bag-of-Words and the Naïve Bayes classifier. Experiments on the public SDU-Fall dataset show that this new approach achieves up to 91.89% fall detection accuracy with a single-view depth camera. This classification rate is about 5% higher than the results reported in the literature. An overall accuracy of 89.63% was obtained for six-class action recognition, which is about 25% higher than the state of the art. Moreover, a 100% silhouette-based action recognition rate is achieved on the Weizmann action dataset.
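For illustration only, the sketch below shows how frame-level shape descriptors (stand-ins for SOV features, which the abstract does not specify in detail) might be quantized into a Bag-of-Words histogram and classified with a Naïve Bayes model. The use of scikit-learn, the synthetic data, and all parameter choices are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: Bag-of-Words + Naive Bayes classification of
# per-frame shape descriptors (stand-ins for SOV features).
# Library choices (NumPy, scikit-learn) and all names are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import MultinomialNB

rng = np.random.default_rng(0)

def make_sequence(label, n_frames=40, dim=32):
    """Synthetic stand-in for a sequence of frame-level shape descriptors."""
    return rng.normal(loc=label, scale=1.0, size=(n_frames, dim))

# Synthetic dataset: 6 action classes, 20 sequences each.
labels = np.repeat(np.arange(6), 20)
sequences = [make_sequence(y) for y in labels]

# 1) Learn a visual codebook over all frame descriptors.
codebook = KMeans(n_clusters=64, n_init=10, random_state=0)
codebook.fit(np.vstack(sequences))

# 2) Encode each sequence as a Bag-of-Words histogram of codeword counts.
def encode(seq):
    words = codebook.predict(seq)
    return np.bincount(words, minlength=codebook.n_clusters)

X = np.array([encode(s) for s in sequences])

# 3) Classify the histograms with a Naive Bayes model.
clf = MultinomialNB()
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

In practice the codebook would be learned on training sequences only and the classifier evaluated on held-out data; the point here is simply the pipeline shape (descriptor quantization, histogram encoding, Naïve Bayes classification) described in the abstract.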

MeSH terms

  • Accidental Falls / prevention & control
  • Accidental Falls / statistics & numerical data*
  • Algorithms
  • Bayes Theorem
  • Databases, Factual
  • Female
  • Humans
  • Image Processing, Computer-Assisted / methods*
  • Male
  • Monitoring, Ambulatory / methods*
  • Pattern Recognition, Automated / methods*
  • Video Recording