Saliency Prediction on Omnidirectional Image With Generative Adversarial Imitation Learning

IEEE Trans Image Process. 2021;30:2087-2102. doi: 10.1109/TIP.2021.3050861. Epub 2021 Jan 21.

Abstract

When watching omnidirectional images (ODIs), subjects can access different viewports by moving their heads. Therefore, it is necessary to predict subjects' head fixations on ODIs. Inspired by generative adversarial imitation learning (GAIL), this paper proposes a novel approach to predicting the saliency of head fixations on ODIs, named SalGAIL. First, we establish a dataset for attention on ODIs (AOI). In contrast to traditional datasets, our AOI dataset is large-scale, containing the head fixations of 30 subjects viewing 600 ODIs. Next, we mine our AOI dataset and arrive at three findings: (1) head fixations are consistent among subjects, and this consistency grows as the number of subjects increases; (2) head fixations exhibit a front center bias (FCB); and (3) the magnitude of head movement is similar across subjects. Guided by these findings, our SalGAIL approach applies deep reinforcement learning (DRL) to predict the head fixations of one subject, in which GAIL learns the reward of DRL rather than relying on a traditional human-designed reward. A multi-stream DRL is then developed to yield the head fixations of different subjects, and the saliency map of an ODI is generated by convolving the predicted head fixations. Finally, experiments validate the effectiveness of our approach in predicting saliency maps of ODIs, significantly outperforming 11 state-of-the-art approaches. Our AOI dataset and the code of SalGAIL are available online at https://github.com/yanglixiaoshen/SalGAIL.
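
To illustrate the final step described above, the sketch below shows one common way to turn predicted head fixations into a saliency map: accumulating fixation points on the equirectangular image plane and convolving them with a Gaussian kernel. This is a minimal sketch under assumed parameters, not the authors' released code; the helper name `saliency_from_fixations`, the image size, and the kernel width `sigma` are all hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_from_fixations(fixations, height=512, width=1024, sigma=30.0):
    """Hypothetical helper: build a saliency map from predicted head
    fixations given as (row, col) pixel coordinates on an
    equirectangular ODI, by Gaussian convolution and normalization."""
    fixation_map = np.zeros((height, width), dtype=np.float64)
    for r, c in fixations:
        # Accumulate one count per predicted fixation; the column index
        # wraps because longitude is periodic on a 360-degree image.
        fixation_map[int(r) % height, int(c) % width] += 1.0
    # Smooth with a Gaussian kernel; wrap horizontally (longitude),
    # clamp vertically (latitude does not wrap).
    saliency = gaussian_filter(fixation_map, sigma=sigma,
                               mode=("nearest", "wrap"))
    if saliency.max() > 0:
        saliency /= saliency.max()  # normalize to [0, 1]
    return saliency

# Usage example with made-up fixations pooled over several subjects:
fixations = [(256, 512), (250, 530), (260, 500), (300, 900)]
sal_map = saliency_from_fixations(fixations)
```

In this sketch, pooling the fixations predicted by each per-subject DRL stream before the convolution mirrors the multi-stream design sketched in the abstract; the choice of `sigma` controls how widely each fixation spreads its saliency mass.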

MeSH terms

  • Adolescent
  • Adult
  • Databases, Factual
  • Deep Learning*
  • Eye-Tracking Technology
  • Female
  • Fixation, Ocular / physiology*
  • Head Movements / physiology*
  • Humans
  • Image Processing, Computer-Assisted / methods*
  • Male
  • Young Adult