Super-resolution for omnidirectional images

18th October 2018

Proposed by Aakanksha Rana and Cagri Ozcinar
Email: ranaa at or ozcinarc at

From gaming to robotics, omnidirectional imaging is attracting tremendous interest in both academia and industry. This immersive scene representation can be captured with omnidirectional multi-camera rigs and rendered through virtual reality headsets, allowing viewers to look around a scene from a central viewpoint. This project addresses the problem of super-resolution for such omnidirectional content. In simple terms, super-resolution is the task of estimating a high-resolution image from its low-resolution counterpart.

Super-resolution is a well-studied problem for traditional images. However, generating a high-resolution omnidirectional image from a captured very low-resolution one has been largely overlooked in the literature. Producing high-resolution content for virtual reality is particularly crucial both for achieving reliable performance in computer vision tasks and for delivering an enhanced immersive experience. In this project, generating a high-resolution omnidirectional image from a captured low-resolution one will be studied using deep learning techniques. Using adversarial learning, the idea is to learn a structured loss in which each predicted output pixel is conditionally dependent on one or more neighboring pixels of the input image, yielding a high-resolution, high-quality result.
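As a rough illustration of the task setup (not the project's method), the sketch below simulates a low-resolution input by block-averaging an equirectangular panorama, and evaluates a reconstruction with a latitude-weighted PSNR, reflecting that pixels near the poles are over-represented in the equirectangular projection. Both helper functions are our own naming for this example.

```python
import numpy as np

def downsample(hr, factor):
    """Simulate low-resolution capture by block-averaging a 2D
    equirectangular image (a simple stand-in for the real degradation)."""
    h, w = hr.shape
    hr = hr[:h - h % factor, :w - w % factor]  # crop to a multiple of factor
    return hr.reshape(h // factor, factor,
                      w // factor, factor).mean(axis=(1, 3))

def ws_psnr(ref, test):
    """Latitude-weighted PSNR for equirectangular images in [0, 1]:
    each row is weighted by cos(latitude), so the over-sampled polar
    regions contribute less to the error than the equator."""
    h, w = ref.shape
    lat = (np.arange(h) + 0.5) / h * np.pi - np.pi / 2  # row latitudes
    weights = np.cos(lat)[:, None] * np.ones((1, w))
    weights /= weights.sum()                            # normalize to 1
    mse = np.sum(weights * (ref - test) ** 2)
    return 10.0 * np.log10(1.0 / mse)

# Tiny demo: a flat panorama, its simulated low-res version, and a
# naive nearest-neighbor "super-resolved" reconstruction.
hr = np.full((8, 16), 0.5)
lr = downsample(hr, 2)                     # shape (4, 8)
sr = np.repeat(np.repeat(lr, 2, 0), 2, 1)  # trivial upsampling baseline
```

A learned model would replace the trivial upsampling step; the weighted metric matters because plain PSNR over-rewards accuracy at the distorted poles.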


360 Image Analysis using Deep Learning

  1. SphereNet: Learning Spherical Representations for Detection and Classification in Omnidirectional Images

Classical 360 Image Analysis

  1. Super-resolution from unregistered omnidirectional images

  2. Joint Registration and Super-Resolution With Omnidirectional Images