Creation of a synthetic light field dataset for deep learning applications

18th October 2018

Proposed by: Martin Alain – alainm at scss.tcd.ie
David Hardman – hardmand at scss.tcd.ie
Yang Chen – cheny5 at scss.tcd.ie

The goal of this project is to create a novel synthetic light field dataset using Blender, specifically targeted at deep learning applications.

Light fields capture all light rays passing through a given volume of space. Compared to traditional 2D imaging systems, which only capture the spatial intensity of light rays, 4D light fields also record the angular direction of each ray. This additional information enables multiple applications, such as reconstructing the 3D geometry of a scene, rendering new images from virtual viewpoints, or changing the focus of an image after it has been captured. Light fields are also a growing topic of interest in the VR/AR community.
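As an illustration, refocusing after capture reduces to a shift-and-sum over the angular dimensions. Below is a minimal sketch in Python/NumPy, assuming (as an illustrative layout, not one mandated by the project) that the light field is stored as a 5D array of sub-aperture views:

    import numpy as np

    def refocus(lf, alpha):
        """Synthetic refocusing of a light field by shift-and-sum.

        lf    : array of shape (U, V, S, T, C) holding the sub-aperture
                views, with (u, v) angular and (s, t) spatial coordinates.
        alpha : shift in pixels per angular step; the plane with this
                disparity is brought into focus (the sign convention
                depends on the orientation of the camera grid).
        """
        U, V = lf.shape[:2]
        cu, cv = (U - 1) / 2.0, (V - 1) / 2.0  # centre of the angular grid
        out = np.zeros(lf.shape[2:], dtype=np.float64)
        for u in range(U):
            for v in range(V):
                # Shift each view in proportion to its angular offset from
                # the centre view, then average over all views.
                shift = (round(alpha * (u - cu)), round(alpha * (v - cv)))
                out += np.roll(lf[u, v], shift, axis=(0, 1))
        return out / (U * V)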

However, light field capture is a complex process, and thus few datasets are available. At the same time, deep learning applications are of growing interest in the light field community [1], but the lack of large labelled light field datasets limits their development. Synthetic light fields have therefore been widely used in the community for their convenience compared to physical capture. One recent example of such a dataset is the HCI dataset [2] (see the related links below), which uses Blender to create light fields along with their ground-truth depth maps, thereby providing a benchmark for depth map estimation from light fields.
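As a starting point, such a light field can be rendered in Blender by translating the camera over a regular grid and rendering one image per sub-aperture view. The following is a minimal sketch using Blender's Python API (bpy), to be run inside Blender; the grid size, baseline and output path are illustrative assumptions:

    import bpy

    # Illustrative parameters, not specified in the proposal.
    GRID = 9          # 9x9 angular resolution, as in the HCI dataset [2]
    BASELINE = 0.05   # camera spacing in Blender units
    OUT_DIR = "/tmp/lightfield"  # hypothetical output directory

    scene = bpy.context.scene
    cam = scene.camera
    cx, cy, cz = cam.location  # remember the centre view position

    for u in range(GRID):
        for v in range(GRID):
            # Offset the camera on a regular grid in the world X-Y plane;
            # this assumes the camera looks along -Z, otherwise the grid
            # should lie in the plane perpendicular to the view direction.
            cam.location = (cx + (v - GRID // 2) * BASELINE,
                            cy + (u - GRID // 2) * BASELINE,
                            cz)
            scene.render.filepath = f"{OUT_DIR}/view_{u:02d}_{v:02d}.png"
            bpy.ops.render.render(write_still=True)

    cam.location = (cx, cy, cz)  # restore the centre view

Ground-truth data such as depth maps can be exported alongside each view, e.g. via a Z pass in Blender's compositor; the HCI Blender add-on linked below automates a similar pipeline.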
The goal of this project will be to produce a light field dataset dedicated to light field applications that go beyond simple depth estimation, e.g. intrinsic decomposition [3] or alpha matting [4]. In order to get closer to a real capture setup, photo-realistic rendering could also be explored [5,6]. Extra care is expected in choosing/designing the virtual scenes to be rendered, from both a scientific and an artistic point of view.
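Since [5,6] rely on physically based rendering, Blender's Cycles path tracer would be the natural engine to explore here; a minimal configuration sketch, where the sample count is an illustrative choice:

    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'  # switch to physically based path tracing
    scene.cycles.samples = 256      # illustrative samples-per-pixel count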
The dataset produced will also have to be tested using existing deep learning approaches, such as [1,3].
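For such tests, the rendered views first have to be assembled into the input layout the chosen network expects. Below is a minimal sketch, assuming the file naming of the rendering sketch above; networks such as EPINET [1] consume stacks of views taken along angular lines of the light field:

    import numpy as np
    import imageio

    GRID = 9  # must match the rendering script above

    # Load all sub-aperture views into one (U, V, H, W, 3) array,
    # dropping a possible alpha channel from the rendered PNGs.
    views = np.stack([
        np.stack([imageio.imread(f"/tmp/lightfield/view_{u:02d}_{v:02d}.png")[..., :3]
                  for v in range(GRID)])
        for u in range(GRID)
    ]).astype(np.float32) / 255.0

    c = GRID // 2
    horizontal = views[c, :]  # centre row of views    -> (V, H, W, 3)
    vertical = views[:, c]    # centre column of views -> (U, H, W, 3)
    # EPINET [1] feeds several such stacks (horizontal, vertical and the
    # two diagonals) into parallel network streams.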

[1] Changha Shin, Hae-Gon Jeon, Youngjin Yoon, In So Kweon, Seon Joo Kim; EPINET: A Fully-Convolutional Neural Network Using Epipolar Geometry for Depth from Light Field Images, CVPR 2018
[2] Katrin Honauer, Ole Johannsen, Daniel Kondermann, Bastian Goldlücke; A Dataset and Evaluation Methodology for Depth Estimation on 4D Light Fields, ACCV 2016
[3] Anna Alperovich, Ole Johannsen, Michael Strecke, Bastian Goldlücke; Light Field Intrinsics with a Deep Encoder-Decoder Network, CVPR 2018
[4] Sebastian Lutz, Konstantinos Amplianitis, Aljosa Smolic; AlphaGAN: Generative Adversarial Networks for Natural Image Matting, BMVC 2018
[5] Andreas Ley, Ronny Hänsch, Olaf Hellwich; SyB3R: A Realistic Synthetic Benchmark for 3D Reconstruction from Images, ECCV 2016
[6] Wenbin Li, Sajad Saeedi, John McCormac, Ronald Clark, Dimos Tzoumanikas, Qing Ye, Yuzhong Huang, Rui Tang, Stefan Leutenegger; InteriorNet: Mega-scale Multi-sensor Photo-realistic Indoor Scenes Dataset, BMVC 2018

Related links:
http://hci-lightfield.iwr.uni-heidelberg.de/
https://github.com/lightfield-analysis/blender-addon
http://andreas-ley.com/projects/SyB3R/
https://interiornet.org/
https://github.com/chshin10/epinet