Director’s Cut – Visual Attention Analysis in Cinematic VR Content

19th September 2018

On this page, we share the Director's Cut database, which we hope will help in creating more immersive virtual reality experiences for 360° film. The page contains the materials and related papers for the Director's Cut research.

Abstract

Methods of storytelling in cinema have well-established conventions that have been built over the course of its history and the development of the format. In 360° film, many of the techniques that form part of this cinematic language or visual narrative are not easily applied or are not applicable due to the nature of the format, i.e., the content is not confined within the borders of a screen. In this paper, we analyze how end-users view 360° video in the presence of directional cues and evaluate whether they are able to follow the actual story of narrative 360° films. We first let filmmakers create an intended scan-path, the so-called director's cut, by setting position markers in the equirectangular representation of the omnidirectional content for eight short 360° films. Alongside this, the filmmakers provided additional information regarding directional cues and plot points. Then, we performed a subjective test with 20 participants watching the films with a head-mounted display and recorded the center position of their viewports. The resulting scan-paths of the participants were then compared against the director's cut using different scan-path similarity measures. To better visualize the similarity between the scan-paths, we introduce a new metric which measures and visualizes the viewport overlap between the participants' scan-paths and the director's cut. Finally, the entire dataset, i.e., the director's cuts including the directional cues and plot points as well as the scan-paths of the test subjects, is made publicly available with this paper.
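To illustrate the kind of analysis the dataset supports, the Python sketch below shows one plausible way to approximate a viewport-overlap measure between a participant's scan-path and the director's cut. It assumes both scan-paths are given as per-frame, normalized equirectangular coordinates of the viewport center, and it uses an assumed field of view of 100°; the function names, input format, and threshold are illustrative assumptions and do not reproduce the exact metric defined in the paper.

    import numpy as np

    def eqr_to_unit_vector(x_norm, y_norm):
        # Map normalized equirectangular coordinates (x, y in [0, 1])
        # of a viewport center to 3D unit vectors on the viewing sphere.
        lon = (x_norm - 0.5) * 2.0 * np.pi   # longitude in [-pi, pi]
        lat = (0.5 - y_norm) * np.pi          # latitude in [-pi/2, pi/2]
        return np.stack([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)], axis=-1)

    def viewport_overlap_ratio(user_xy, director_xy, fov_deg=100.0):
        # Fraction of frames in which the participant's viewport center lies
        # within half the assumed HMD field of view of the director's cut
        # center. This is an illustrative approximation only.
        u = eqr_to_unit_vector(user_xy[:, 0], user_xy[:, 1])
        d = eqr_to_unit_vector(director_xy[:, 0], director_xy[:, 1])
        # Great-circle angle between the two viewport centers per frame.
        dots = np.clip(np.sum(u * d, axis=-1), -1.0, 1.0)
        ang = np.degrees(np.arccos(dots))
        return float(np.mean(ang <= fov_deg / 2.0))

Under these assumptions, a score close to 1 for a film would indicate that the participant's viewport stayed centered near the director's intended region for most frames, while a score near 0 would indicate that the participant rarely looked where the director's cut pointed.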

Downloads

Dataset

CVMP Paper

Please cite our papers in your publications if they help your research: