A High-Density EEG Study Investigating VR Film Editing and Cognitive Event Segmentation Theory

Sensors (Basel). 2021 Oct 28;21(21):7176. doi: 10.3390/s21217176.

Abstract

This paper presents a cognitive psychological experiment conducted to analyze how traditional film editing methods, and the application of cognitive event segmentation theory, perform in virtual reality (VR). Thirty volunteers were recruited and asked to watch a series of short VR videos designed along three dimensions: time, action (characters), and space. Electroencephalograms (EEG) were recorded simultaneously during viewing. Subjective results show that all of the editing methods used led to increased cognitive load and reduced immersion. Furthermore, cognitive event segmentation theory also plays an instructive role in VR editing, with differences concentrated mainly in the frontal, parietal, and central regions. On this basis, visual evoked potential (VEP) analysis was performed, and the standardized low-resolution brain electromagnetic tomography (sLORETA) source-localization method was used to analyze the data. The VEP analysis suggests that a cut usually elicits a late event-related potential component, and that the sources of the VEP lie mainly in the frontal and parietal lobes. The insights derived from this work can serve as guidance for VR content creation, allowing VR film editing to achieve greater richness and a unique aesthetic.

Keywords: EEG; VR film; cognitive event segmentation theory; visual evoked potential.

MeSH terms

  • Cognition
  • Electroencephalography
  • Evoked Potentials
  • Evoked Potentials, Visual
  • Humans
  • Virtual Reality*