Light field imaging technologies

Light fields capture all light rays passing through a given volume of space.
Compared to traditional 2D imaging systems, which capture only the spatial intensity of light rays, 4D light fields also contain the angular direction of the light rays.
This additional information enables many applications across research areas such as image processing, computer vision, and computer graphics, including (but not limited to) reconstructing the 3D geometry of a scene, creating new images from virtual points of view, and changing the focus of an image after it has been captured.
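Post-capture refocusing, for example, is often implemented with the classic shift-and-sum algorithm: each sub-aperture view is shifted in proportion to its angular offset from the central view, and the shifted views are averaged. A minimal sketch (illustrative only, not a V-SENSE implementation; the `(U, V, H, W)` storage layout is an assumption):

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-sum refocusing of a 4D light field.

    light_field: array of shape (U, V, H, W) holding one grayscale
    sub-aperture image per angular position (u, v) -- an assumed layout.
    alpha: relative refocusing depth; alpha = 0 simply averages the views.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its angular offset from
            # the central view; np.roll keeps the sketch dependency-free.
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

Varying alpha sweeps the focal plane through the scene; practical implementations use sub-pixel (interpolated) shifts rather than integer np.roll shifts.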
Light fields are also a growing topic of interest in the VR/AR community.

In V-SENSE, we are currently investigating novel methods for light field denoising, scene reconstruction from light fields, and light field rendering.

Here you can find examples of light fields captured with a Lytro Illum camera, which allows for refocusing and changing the perspective.

2017

Alain, Martin; Smolic, Aljosa

Light Field Denoising by Sparse 5D Transform Domain Collaborative Filtering Inproceedings

IEEE International Workshop on Multimedia Signal Processing (MMSP 2017), 2017.


Chen, Yang; Alain, Martin; Smolic, Aljosa

Fast and Accurate Optical Flow based Depth Map Estimation from Light Fields Inproceedings

Irish Machine Vision and Image Processing Conference (Received the Best Paper Award), 2017.


ODIs Saliency Maps: Testbed and Dataset

Introduction:

In V-SENSE we are studying saliency for omnidirectional images (ODIs) in VR applications. We have conducted subjective experiments to collect the viewport center trajectories (VCTs) of 32 participants for 21 ODIs, and we propose a method to transform the gathered data into saliency maps. More details in: Ana De Abreu, Cagri Ozcinar, Aljosa Smolic, Look Around You: Saliency Maps for Omnidirectional Images in VR Applications, QoMEX, 2017.

We believe that the collected data and the testbed facilitate a better understanding of human viewing behavior and will serve the development of saliency models for omnidirectional images (ODIs). Thus, on this website we share both the collected data used to build the saliency maps and the testbed itself.

If you decide to use our testbed and/or dataset, please provide a reference to our paper: Ana De Abreu, Cagri Ozcinar, Aljosa Smolic, Look Around You: Saliency Maps for Omnidirectional Images in VR Applications, QoMEX, 2017.

Dataset

We considered 22 indoor and outdoor ODIs in equirectangular format (1 ODI for training and 21 ODIs for the test). These ODIs were downloaded from the social photography site Flickr; we only considered images under a Creative Commons (CC) license. A metadata file containing the attributions for each ODI is available from the downloads section below. A total of 32 participants took part in our subjective test, and an identifier was associated with each participant to preserve their anonymity. Participants were split into two groups of 16, and each ODI was presented for 10 s to one group and 20 s to the other.

Dataset structure

The dataset contains 32 CSV files, one per participant. The name of each file specifies the exposure time, 10000 or 20000 (in ms), and the participant ID. For each ODI, the 2D coordinates of the viewport center and of the points defining the viewport limits are given at each time stamp. Note that these points have been projected from the sphere used for rendering onto the planar representation. The figure below illustrates the center of the viewport as well as the six points we used to define the viewport limits.

Equirectangular ODI and user viewport. ODI: Maasboulevard festival 2006, author: Aldo Hoeben
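For reference, the standard mapping from a viewing direction on the sphere to equirectangular pixel coordinates can be sketched as below; the exact conventions used in the dataset (yaw origin, axis orientation) are assumptions here:

```python
def sphere_to_equirect(yaw_deg, pitch_deg, width, height):
    """Map a viewing direction (yaw in [-180, 180], pitch in [-90, 90],
    both in degrees) to pixel coordinates in a width x height
    equirectangular image. Assumes yaw -180 maps to the left edge
    and pitch +90 maps to the top row."""
    x = (yaw_deg + 180.0) / 360.0 * width
    y = (90.0 - pitch_deg) / 180.0 * height
    return x, y
```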

In particular, each line of the CSV file has the following structure: counter per ODI – ODI name – time stamp – viewport center X – viewport center Y – Point 1 X – Point 1 Y – Point 2 X – Point 2 Y – Point 3 X – Point 3 Y – Point 4 X – Point 4 Y – Point 5 X – Point 5 Y – Point 6 X – Point 6 Y

Note that the ODIs named Background are used to give instructions to the participants: they appear at the beginning and end of the test, as well as between the training and test sessions.
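A minimal sketch for loading one participant's file, assuming comma-separated values, the field order documented above, and hypothetical field names of our own choosing:

```python
import csv
from io import StringIO

# Field order as documented above: counter, ODI name, time stamp,
# viewport center X/Y, then six boundary points as X/Y pairs.
FIELDS = (["counter", "odi_name", "timestamp", "center_x", "center_y"]
          + [f"p{i}_{axis}" for i in range(1, 7) for axis in ("x", "y")])

def parse_vct_file(text):
    """Parse one participant's viewport trajectory CSV into a list of
    dicts, skipping the 'Background' instruction ODIs."""
    rows = []
    for raw in csv.reader(StringIO(text)):
        if len(raw) != len(FIELDS):
            continue  # skip malformed or empty lines
        row = dict(zip(FIELDS, raw))
        if row["odi_name"].startswith("Background"):
            continue
        rows.append(row)
    return rows
```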

Testbed

The testbed is a software application that displays ODIs in an HMD while collecting viewport data from the test participants. It has been implemented using the WebVR API and the three.js library.

Please note that, as we used this testbed with the Oculus DK2 HMD, it relies on the person conducting the subjective experiment to press the right-arrow key on the keyboard to start the training and test sessions whenever the participant is ready.

Testbed folder structure

  • index.html — file to be opened with a compatible browser (please refer to Testbed Requirements below).
  • panos.json — file listing the ODIs in the order in which they are displayed. During our subjective tests we generated a panos.json file for each participant by randomly shuffling the order of the ODIs in the file.
  • js — folder holding the JavaScript files.
    • main.js — file to modify if you want your testbed to behave differently, for instance the exposure time for each ODI (showingTime), the participant ID, and the number of training ODIs.
  • images — folder containing the ODIs used for instructions during the test, with the messages we use as overlays.
    • Flickr — folder where you will find the ODI used for training and the indoor and outdoor sets of ODIs.

Testbed Requirements

You only need an HMD and a compatible browser; in particular, we used Firefox Nightly. On the WebVR website you will find the browser that works best for each HMD.

Downloads

  1. ODIs metadata: metadata.json
  2. Dataset: dataset.zip
  3. Testbed: testbed.zip

Contact

If you have any problems with the resources provided on this website, you can contact us at: deabreua@scss.tcd.ie

QualityVR


Overview:

360-degree video, also called live-action virtual reality (VR), is one of the latest and most powerful trends in immersive media, with increasing potential for the next decades. In particular, head-mounted display (HMD) technology such as the HTC Vive, Oculus Rift and Samsung Gear VR is maturing and entering professional and consumer markets. On the other side, capture devices such as Facebook’s Surround 360 camera, the Nokia OZO and the Google Odyssey are some of the latest technologies to capture 360-degree video in stereoscopic 3D (S3D).

However, capturing 360-degree video is not an easy task, as there are many physical limitations to overcome, especially for capturing and post-processing in S3D. In general, such limitations result in artifacts which cause visual discomfort when the content is watched with an HMD. These artifacts can be divided into three categories: binocular rivalry issues, conflicts of depth cues, and artifacts which occur in both monocular and stereoscopic 360-degree content production. Issues of the first two categories have been investigated for standard S3D content, e.g. for cinema screens and 3D-TV. The third category consists of typical artifacts which only occur in the multi-camera systems used for panorama capturing. As native S3D 360-degree video production is still very error-prone, especially with respect to binocular rivalry issues, many high-end S3D productions are shot in 2D 360-degree and post-converted to S3D.

Within the QualityVR project, our group is working on video analysis tools to detect, assess and partly correct artifacts which occur in stereoscopic 360-degree video production, in particular conflicts of depth cues and binocular rivalry issues.
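As a rough illustration of one binocular rivalry issue, a sharpness mismatch between the two views can be flagged by comparing a simple no-reference sharpness measure per view. This is a deliberately simplified proxy, not the method from the publications below:

```python
import numpy as np

def laplacian_variance(img):
    """Variance of a discrete Laplacian response, a common
    no-reference proxy for image sharpness."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def sharpness_mismatch(left, right, ratio_threshold=2.0):
    """Flag a potential sharpness mismatch when one view's Laplacian
    variance exceeds the other's by more than ratio_threshold."""
    sl, sr = laplacian_variance(left), laplacian_variance(right)
    lo, hi = min(sl, sr), max(sl, sr)
    return hi > ratio_threshold * max(lo, 1e-12)
```

A production tool would of course operate on rectified, saliency-weighted regions rather than whole frames, but the ratio test conveys the basic idea.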

Publications:

2017

Croci, Simone; Knorr, Sebastian; Smolic, Aljosa

Saliency-Based Sharpness Mismatch Detection For Stereoscopic Omnidirectional Images Inproceedings Forthcoming

14th European Conference on Visual Media Production, London, UK, Forthcoming.


Croci, Simone; Knorr, Sebastian; Goldmann, Lutz; Smolic, Aljosa

A Framework for Quality Control in Cinematic VR Based on Voronoi Patches and Saliency Inproceedings Forthcoming

International Conference on 3D Immersion, Brussels, Belgium, Forthcoming.


Knorr, Sebastian; Croci, Simone; Smolic, Aljosa

A Modular Scheme for Artifact Detection in Stereoscopic Omni-Directional Images Inproceedings

Irish Machine Vision and Image Processing Conference, Maynooth, Ireland, 2017.


Colour Transfer using the L2 Metric

Overview

Colour transfer is an important pre-processing step in many applications, including stereo vision, surface reconstruction and image stitching. It can also be applied to images and videos as a post-processing step to create interesting special effects and change their tone or feel. While many software tools are available to professionals for editing the colours and tone of an image, bringing this type of technology into the hands of everyday users, with an interface that is intuitive and easy to use, has generated a lot of interest in recent years.

One approach often used for colour transfer is to let the user provide a palette image which has the desired colour distribution, and use it to transfer that colour feel to the original target image. This allows the user to easily generate the desired colour transfer result without any further manual interaction.

Demo: Colour Transfer Using the L2 metric

It has recently been shown that the L2 metric can be used to create good colour transfer results when the user provides a palette image for recolouring [1]. This technique models the colour distributions of the target and palette images using Gaussian Mixture Models (GMMs) and registers these GMMs to compute the colour transfer function that maps the colours of the palette image to the target image. It has been shown to outperform other state-of-the-art colour transfer techniques, and it can easily be extended to video content. A demo of this colour transfer technique is available here:

https://www.scss.tcd.ie/~groganma/colourTransferDemo.html
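The key property exploited by the registration in [1] is that the L2 distance between two Gaussian mixtures has a closed form, because the integral of a product of Gaussians is itself a Gaussian density evaluation. A minimal sketch for isotropic components (illustrative only, not the authors' code):

```python
import numpy as np

def gauss_product_integral(m1, s1, m2, s2):
    """Closed-form integral of the product of two isotropic Gaussians
    N(x; m, s^2 I) in d dimensions:
    integral(N1 * N2) = N(m1; m2, (s1^2 + s2^2) I)."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    d = m1.size
    var = s1 ** 2 + s2 ** 2
    diff = m1 - m2
    norm = (2.0 * np.pi * var) ** (-d / 2.0)
    return norm * np.exp(-0.5 * (diff @ diff) / var)

def l2_distance_sq(w1, mu1, s1, w2, mu2, s2):
    """Squared L2 distance between two isotropic GMMs with weights w,
    means mu and scales s:  ||f - g||^2 = int f^2 - 2 int f*g + int g^2."""
    def cross(wa, ma, sa, wb, mb, sb):
        return sum(wi * wj * gauss_product_integral(mi, si, mj, sj)
                   for wi, mi, si in zip(wa, ma, sa)
                   for wj, mj, sj in zip(wb, mb, sb))
    return (cross(w1, mu1, s1, w1, mu1, s1)
            - 2.0 * cross(w1, mu1, s1, w2, mu2, s2)
            + cross(w2, mu2, s2, w2, mu2, s2))
```

Minimising this quantity over a parametric warp of the target GMM's means is what yields the colour transfer function in the registration framework.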

In the V-SENSE project we are investigating ways to extend this L2-based colour transfer approach to other applications, finding areas in which this robust metric could prove advantageous.

References

[1] Robust Registration of Gaussian Mixtures for Colour Transfer, Mairéad Grogan and Rozenn Dahyot, arXiv, May 2017.

Demo: Visual Attention for Omnidirectional Images in VR Applications

Overview:

Understanding visual attention has always been a topic of great interest in different research communities. It is particularly important for omnidirectional images (ODIs) viewed with a head-mounted display (HMD), where only a fraction of the captured scene, namely the viewport, is displayed at a time.

Here we share a demo that displays a set of ODIs (provided by the user, or using the ones available) while collecting the viewport’s center position at every animation frame for each ODI. The collected data is automatically downloaded at the end of the session.

https://www.scss.tcd.ie/~deabreua/visualAttentionVR/

Publications:

2017

Croci, Simone; Knorr, Sebastian; Smolic, Aljosa

Saliency-Based Sharpness Mismatch Detection For Stereoscopic Omnidirectional Images Inproceedings Forthcoming

14th European Conference on Visual Media Production, London, UK, Forthcoming.


Croci, Simone; Knorr, Sebastian; Goldmann, Lutz; Smolic, Aljosa

A Framework for Quality Control in Cinematic VR Based on Voronoi Patches and Saliency Inproceedings Forthcoming

International Conference on 3D Immersion, Brussels, Belgium, Forthcoming.


Monroy, Rafael; Lutz, Sebastian; Chalasani, Tejo; Smolic, Aljosa

SalNet360: Saliency Maps for omni-directional images with CNN Unpublished

2017.


Abreu, Ana De; Ozcinar, Cagri; Smolic, Aljosa

Look around you: saliency maps for omnidirectional images in VR applications Inproceedings

9th International Conference on Quality of Multimedia Experience (QoMEX), 2017.
