Light fields capture all light rays passing through a given volume of space.
Compared to traditional 2D imaging systems, which capture only the spatial intensity of light rays, 4D light fields also record the angular direction of those rays.
This additional information enables multiple applications across research areas such as image processing, computer vision, and computer graphics, including (but not limited to) reconstructing the 3D geometry of a scene, rendering new images from virtual points of view, and refocusing an image after it has been captured.
Light fields are also a growing topic of interest in the VR/AR community.
In V-SENSE, we are currently investigating novel methods for light field denoising, scene reconstruction from light fields, and light field rendering.
Below you can find examples of light fields captured with a Lytro Illum camera, which allow for refocusing and changing the perspective.
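Refocusing after capture, mentioned above, is commonly done by shift-and-add: each sub-aperture view is shifted in proportion to its angular offset and the results are averaged. The sketch below is a minimal pure-Python illustration under that standard model; the light-field layout, the `refocus` function, and the `alpha` parameter are illustrative assumptions, not V-SENSE code.

```python
# Minimal shift-and-add refocusing sketch, assuming the light field is given
# as a dict mapping angular coordinates (u, v) to 2D grayscale sub-aperture
# images (lists of rows). Names here are illustrative, not from any library.

def refocus(light_field, height, width, alpha):
    """Synthetically refocus by shifting each sub-aperture image in
    proportion to its angular offset (u, v), then averaging.
    alpha selects the depth of the synthetic focal plane (0 = no shift).
    Integer shifts only, and border pixels simply receive fewer
    contributions -- acceptable for a sketch."""
    out = [[0.0] * width for _ in range(height)]
    n = len(light_field)
    for (u, v), img in light_field.items():
        du, dv = round(alpha * u), round(alpha * v)
        for y in range(height):
            for x in range(width):
                ys, xs = y + dv, x + du
                if 0 <= ys < height and 0 <= xs < width:
                    out[y][x] += img[ys][xs] / n
    return out
```

With `alpha = 0` this reduces to a plain average of all views; sweeping `alpha` moves the focal plane through the scene.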
2017
Alain, Martin; Smolic, Aljosa: Light Field Denoising by Sparse 5D Transform Domain Collaborative Filtering. In: IEEE International Workshop on Multimedia Signal Processing (MMSP 2017), 2017.
@inproceedings{Alain2017,
title = {Light Field Denoising by Sparse 5D Transform Domain Collaborative Filtering},
author = {Martin Alain and Aljosa Smolic},
url = {https://v-sense.scss.tcd.ie/wp-content/uploads/2017/08/LFBM5D_MMSP_camera_ready-1.pdf},
year = {2017},
date = {2017-10-16},
booktitle = {IEEE International Workshop on Multimedia Signal Processing (MMSP 2017)},
abstract = {In this paper, we propose to extend the state-of-the-art BM3D image denoising filter to light fields, and we denote our method LFBM5D.
We take full advantage of the 4D nature of light fields by creating disparity compensated 4D patches which are then stacked together with similar 4D patches along a 5th dimension.
We then filter these 5D patches in the 5D transform domain, obtained by cascading a 2D spatial transform, a 2D angular transform, and a 1D transform applied along the similarities.
Furthermore, we propose to use the shape-adaptive DCT as the 2D angular transform to be robust to occlusions.
Results show a significant improvement in synthetic noise removal compared to state-of-the-art methods, for light fields captured with either a lenslet camera or a gantry.
Experiments on Lytro Illum camera noise removal also demonstrate a clear improvement of the light field quality.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
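The core of the collaborative filtering described in this abstract is transforming a stack of similar patches and hard-thresholding the coefficients. The sketch below isolates just the 1D transform applied "along the similarities", using an orthonormal DCT pair; it is a minimal pure-Python illustration on scalar values, not the paper's implementation, which cascades 2D spatial, 2D angular, and 1D transforms over full 4D patches.

```python
import math

def dct(x):
    """Orthonormal DCT-II of a 1D sequence."""
    N = len(x)
    def s(k): return math.sqrt((1 if k == 0 else 2) / N)
    return [s(k) * sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k)
                       for n in range(N)) for k in range(N)]

def idct(X):
    """Inverse of the above (orthonormal DCT-III)."""
    N = len(X)
    def s(k): return math.sqrt((1 if k == 0 else 2) / N)
    return [sum(s(k) * X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                for k in range(N)) for n in range(N)]

def collaborative_filter_1d(stack, threshold):
    """Hard-threshold transform-domain coefficients along the stacking
    dimension: similar patches share structure, so their signal
    concentrates in few large coefficients while noise spreads thinly
    and is zeroed out."""
    coeffs = dct(stack)
    kept = [c if abs(c) > threshold else 0.0 for c in coeffs]
    return idct(kept)
```

For a stack of identical values, only the DC coefficient survives thresholding and the stack is reconstructed exactly; small independent perturbations across the stack are attenuated.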
Chen, Yang; Alain, Martin; Smolic, Aljosa: Fast and Accurate Optical Flow based Depth Map Estimation from Light Fields. In: Irish Machine Vision and Image Processing Conference (received the Best Paper Award), 2017.
@inproceedings{Yang2017,
title = {Fast and Accurate Optical Flow based Depth Map Estimation from Light Fields},
author = {Yang Chen and Martin Alain and Aljosa Smolic},
url = {https://v-sense.scss.tcd.ie/wp-content/uploads/2017/07/Fast-and-Accurate-Optical-Flow-based-Depth-Map-Estimation-from-Light-Fields-5.pdf},
year = {2017},
date = {2017-08-30},
booktitle = {Irish Machine Vision and Image Processing Conference (Received the Best Paper Award)},
abstract = {Depth map estimation is a crucial task in computer vision, and new approaches have recently emerged taking advantage of light fields, as this new imaging modality captures much more information about the angular direction of light rays compared to common approaches based on stereoscopic images or multi-view.
In this paper, we propose a novel depth estimation method from light fields based on existing optical flow estimation methods.
The optical flow estimator is applied on a sequence of images taken along an angular dimension of the light field, which produces several disparity map estimates.
Considering both accuracy and efficiency, we choose the feature flow method as our optical flow estimator.
Thanks to its spatio-temporal edge-aware filtering properties, the different disparity map estimates that we obtain are very consistent, which allows a fast and simple aggregation step to create a single disparity map, which can then be converted into a depth map.
Since the disparity map estimates are consistent, we can also create a depth map from each disparity estimate, and then aggregate the different depth maps in the 3D space to create a single dense depth map.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
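The aggregation and disparity-to-depth steps this abstract describes can be sketched simply. The fragment below assumes the standard pinhole relation depth = f·B/d and fuses per-view disparity estimates with a per-pixel median; the function names and the choice of median are illustrative assumptions, not the paper's exact procedure.

```python
import statistics

def disparity_to_depth(disparity, focal_length, baseline):
    """Standard pinhole relation: depth = f * B / d (requires d != 0)."""
    return focal_length * baseline / disparity

def aggregate_disparities(estimates):
    """Fuse several per-view disparity maps (lists of rows) into one map
    by taking the per-pixel median -- a simple aggregation that works
    when the individual estimates are already consistent, as the
    edge-aware flow estimates in the paper are argued to be."""
    h, w = len(estimates[0]), len(estimates[0][0])
    return [[statistics.median(e[y][x] for e in estimates)
             for x in range(w)] for y in range(h)]
```

The median makes the fused map robust to a minority of outlier estimates at any pixel, at the cost of ignoring sub-estimate confidence weights.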