
Overview:
360-degree video, also called live-action virtual reality (VR), is one of the latest and most powerful trends in immersive media, with growing potential for the decades to come. In particular, head-mounted display (HMD) technology such as the HTC Vive, Oculus Rift and Samsung Gear VR is maturing and entering the professional and consumer markets. On the capture side, devices such as Facebook's Surround 360 camera, the Nokia Ozo and the Google Odyssey are among the latest systems for recording 360-degree video in stereoscopic 3D (S3D).
However, capturing 360-degree video is not an easy task, as many physical limitations need to be overcome, especially for capture and post-processing in S3D. In general, these limitations result in artifacts that cause visual discomfort when the content is watched with an HMD. The artifacts can be divided into three categories: binocular rivalry issues, conflicts of depth cues, and artifacts that occur in both monocular and stereoscopic 360-degree content production. Issues in the first two categories have been investigated for standard S3D content, e.g. for cinema screens and 3D-TV. The third category consists of typical artifacts that only occur in the multi-camera systems used for panorama capturing. As native S3D 360-degree video production is still very error-prone, especially with respect to binocular rivalry issues, many high-end S3D productions are shot in monoscopic 360-degree and post-converted to S3D.
Within the project QualityVR, our group is working on video analysis tools to detect, assess and partly correct artifacts that occur in stereoscopic 360-degree video production, in particular conflicts of depth cues and binocular rivalry issues.
Publications:
2017
Croci, Simone; Knorr, Sebastian; Smolic, Aljosa: Saliency-Based Sharpness Mismatch Detection For Stereoscopic Omnidirectional Images. In: 14th European Conference on Visual Media Production, London, UK, forthcoming.
@inproceedings{Croci2017a,
title = {Saliency-Based Sharpness Mismatch Detection For Stereoscopic Omnidirectional Images},
author = {Simone Croci and Sebastian Knorr and Aljosa Smolic},
url = {https://v-sense.scss.tcd.ie/wp-content/uploads/2017/10/2017_CVMP_Saliency-Based-Sharpness-Mismatch-Detection-For-Stereoscopic-Omnidirectional-Images.pdf},
year = {2017},
date = {2017-12-11},
booktitle = {14th European Conference on Visual Media Production},
address = {London, UK},
abstract = {In this paper, we present a novel sharpness mismatch detection (SMD) approach for stereoscopic omnidirectional images (ODI) for quality control within the post-production workflow, which is the main contribution. In particular, we applied a state-of-the-art SMD approach, which was originally developed for traditional HD images, and extended it to stereoscopic ODIs. A new efficient method for patch extraction from ODIs was developed based on the spherical Voronoi diagram of equidistant points evenly distributed on the sphere. The subdivision of the ODI into patches allows an accurate detection and localization of regions with sharpness mismatch. A second contribution of the paper is the integration of saliency into our SMD approach. In this context, we introduce a novel method for the estimation of saliency maps from viewport data of head-mounted displays (HMD). Finally, we demonstrate the performance of our SMD approach with data collected from a subjective test with 17 participants.},
keywords = {},
pubstate = {forthcoming},
tppubtype = {inproceedings}
}
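The patch extraction described in the abstract above can be sketched with SciPy's SphericalVoronoi. The paper's exact point distribution is not reproduced here; a Fibonacci lattice is assumed as one common way to place approximately equidistant points on the unit sphere:

```python
import numpy as np
from scipy.spatial import SphericalVoronoi

def fibonacci_sphere(n):
    """Approximately evenly distributed points on the unit sphere
    (Fibonacci lattice)."""
    i = np.arange(n)
    golden = np.pi * (3.0 - np.sqrt(5.0))        # golden angle
    z = 1.0 - 2.0 * (i + 0.5) / n                # uniform in z
    r = np.sqrt(1.0 - z * z)
    theta = golden * i
    return np.column_stack((r * np.cos(theta), r * np.sin(theta), z))

# 64 patch centres -> spherical Voronoi diagram partitioning the sphere
centres = fibonacci_sphere(64)
sv = SphericalVoronoi(centres, radius=1.0, center=np.zeros(3))
sv.sort_vertices_of_regions()

# Each region is a list of vertex indices delimiting one spherical patch;
# pixels of the ODI can then be assigned to the patch of their nearest centre.
print(len(sv.regions))
```

Working on the sphere rather than on the equirectangular image avoids the strong distortion near the poles, which is presumably why the authors subdivide the ODI this way.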
Croci, Simone; Knorr, Sebastian; Goldmann, Lutz; Smolic, Aljosa: A Framework for Quality Control in Cinematic VR Based on Voronoi Patches and Saliency. In: International Conference on 3D Immersion, Brussels, Belgium, forthcoming.
@inproceedings{Croci2017b,
title = {A Framework for Quality Control in Cinematic VR Based on Voronoi Patches and Saliency},
author = {Simone Croci and Sebastian Knorr and Lutz Goldmann and Aljosa Smolic},
url = {https://v-sense.scss.tcd.ie/wp-content/uploads/2017/10/2017_IC3D_A-FRAMEWORK-FOR-QUALITY-CONTROL-IN-CINEMATIC-VR-BASED-ON-VORONOI-PATCHES-AND-SALIENCY.pdf},
year = {2017},
date = {2017-12-11},
booktitle = {International Conference on 3D Immersion},
address = {Brussels, Belgium},
abstract = {In this paper, we present a novel framework for quality control in cinematic VR (360-video) based on Voronoi patches and saliency which can be used in post-production workflows. Our approach first extracts patches in stereoscopic omnidirectional images (ODI) using the spherical Voronoi diagram. The subdivision of the ODI into patches allows an accurate detection and localization of regions with artifacts. Further, we introduce saliency in order to weight detected artifacts according to the visual attention of end-users. Then, we propose different artifact detection and analysis methods for sharpness mismatch detection (SMD), color mismatch detection (CMD) and disparity distribution analysis. In particular, we took two state-of-the-art approaches for SMD and CMD, which were originally developed for conventional planar images, and extended them to stereoscopic ODIs. Finally, we evaluated the performance of our framework with a dataset of 18 ODIs for which saliency maps were obtained from a subjective test with 17 participants.},
keywords = {},
pubstate = {forthcoming},
tppubtype = {inproceedings}
}
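The saliency weighting mentioned in the abstract above can be illustrated with a toy pooling step. The paper's actual pooling function is not reproduced here; a saliency-weighted mean over per-patch artifact scores is assumed as a minimal sketch:

```python
import numpy as np

def weighted_artifact_score(patch_scores, patch_saliency):
    """Combine per-patch artifact scores into one global score,
    weighting each patch by its normalised visual saliency."""
    patch_scores = np.asarray(patch_scores, dtype=float)
    w = np.asarray(patch_saliency, dtype=float)
    w = w / w.sum()                       # normalise saliency weights
    return float(np.dot(w, patch_scores))

# Artifacts in salient patches dominate the global score
scores = [0.1, 0.9, 0.2]                  # per-patch artifact severity
saliency = [0.1, 0.8, 0.1]                # per-patch mean saliency
print(weighted_artifact_score(scores, saliency))   # 0.75
```

The idea is that an artifact in a region viewers rarely look at should matter less than the same artifact in a highly salient region, which matches the abstract's stated goal of weighting artifacts by end-user attention.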
Knorr, Sebastian; Croci, Simone; Smolic, Aljosa: A Modular Scheme for Artifact Detection in Stereoscopic Omni-Directional Images. In: Irish Machine Vision and Image Processing Conference, Maynooth, Ireland, 2017.
@inproceedings{Knorr2017,
title = {A Modular Scheme for Artifact Detection in Stereoscopic Omni-Directional Images},
author = {Sebastian Knorr and Simone Croci and Aljosa Smolic},
url = {https://v-sense.scss.tcd.ie/wp-content/uploads/2017/07/imvip2017_knorr_final.pdf},
year = {2017},
date = {2017-08-30},
booktitle = {Irish Machine Vision and Image Processing Conference},
address = {Maynooth, Ireland},
abstract = {With the release of new head-mounted displays (HMDs) and new omni-directional capture systems, 360-degree video is one of the latest and most powerful trends in immersive media, with an increasing potential for the next decades. However, especially creating 360-degree content in 3D is still an error-prone task with many limitations to overcome. This paper describes the critical aspects of 3D content creation for 360-degree video. In particular, conflicts of depth cues and binocular rivalry are reviewed in detail, as these cause eye fatigue, headache, and even nausea. Both the reasons for the appearance of the conflicts and how to detect some of these conflicts by objective image analysis methods are detailed in this paper. The latter is the main contribution of this paper and part of the long-term research roadmap of the authors towards a comprehensive framework for artifact detection and correction in 360-degree videos. Experimental results then demonstrate the performance of the proposed approaches in terms of objective measures and visual feedback. Finally, the paper concludes with a discussion and future work.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
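The binocular rivalry detection discussed above compares the left and right views for mismatches. The paper's actual sharpness measure is not reproduced here; variance of the Laplacian is assumed as a common sharpness proxy for a per-patch toy comparison:

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def sharpness(patch):
    """Local sharpness proxy: variance of the Laplacian response."""
    return float(np.var(laplace(patch.astype(float))))

def sharpness_mismatch(left_patch, right_patch, eps=1e-8):
    """Relative sharpness difference between corresponding left/right
    patches; 0 means no mismatch, values near 1 a strong mismatch."""
    s_l, s_r = sharpness(left_patch), sharpness(right_patch)
    return abs(s_l - s_r) / (max(s_l, s_r) + eps)

# Synthetic example: blur one view to create a mismatch
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = uniform_filter(sharp, size=5)       # low-pass -> less sharp
print(sharpness_mismatch(sharp, sharp))       # 0.0: identical views
print(sharpness_mismatch(sharp, blurred))     # near 1: strong mismatch
```

In a full pipeline, such a score would be computed per Voronoi patch of the stereoscopic ODI, so that a mismatch can be localized rather than only detected globally.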