Extending improvements in visual saliency estimation to omnidirectional images
18th October 2018
Proposed by: Sebastian Lutz, Tejo Chalasani, Koustav Ghosal
The goal of visual saliency estimation is to predict which parts of an image humans are most likely to look at. This problem has been studied extensively for traditional 2D images, but visual saliency estimation for omnidirectional images remains a new research direction. Recently, we introduced a deep-learning architecture [1] that can be attached to any Base CNN trained on traditional 2D images and improves its results on omnidirectional images. However, the Base CNN used in that paper is relatively basic and cannot compete with the state of the art in traditional 2D saliency estimation [2].
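As a rough illustration of this idea, and not the exact implementation from [1], the sketch below shows in PyTorch how a refinement stage could be wrapped around an arbitrary 2D Base CNN. The use of per-pixel spherical coordinates as extra input channels follows the description in [1]; the module name, layer sizes, and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Refinement360(nn.Module):
    """Hypothetical sketch: refine the saliency prediction of an
    arbitrary 2D Base CNN for an omnidirectional image patch, using
    per-pixel spherical coordinates as extra input (loosely following
    the idea in [1]). All layer sizes are illustrative assumptions."""

    def __init__(self, base_cnn: nn.Module):
        super().__init__()
        self.base_cnn = base_cnn  # any 2D saliency network: (B,3,H,W) -> (B,1,H,W)
        # Refinement input: 1 saliency channel + 2 spherical-coordinate channels
        self.refine = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=1),
        )

    def forward(self, patch: torch.Tensor, sphere_coords: torch.Tensor) -> torch.Tensor:
        # patch: (B, 3, H, W) undistorted image patch
        # sphere_coords: (B, 2, H, W) per-pixel (theta, phi) of each patch pixel
        saliency_2d = self.base_cnn(patch)                  # plain 2D saliency
        x = torch.cat([saliency_2d, sphere_coords], dim=1)  # append coordinates
        return self.refine(x)                               # refined saliency
```

Because the Base CNN is only called through its forward pass, it could in principle be swapped for any state-of-the-art 2D saliency network without changing the refinement stage.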
The goal of this project is to extend our approach to work with a state-of-the-art CNN as the base network, and to adapt pre- and post-processing steps from traditional 2D saliency estimation for use with omnidirectional images.
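A minimal sketch of one such post-processing step, commonly used in 2D saliency pipelines, is Gaussian smoothing followed by normalisation. The sigma value below is an assumption; in practice it is tuned on validation data, and for equirectangular images the blur would additionally need to account for latitude-dependent distortion.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def postprocess_saliency(raw: np.ndarray, sigma: float = 8.0) -> np.ndarray:
    """Illustrative post-processing for a predicted saliency map:
    Gaussian smoothing, then min-max normalisation to [0, 1].
    sigma is a placeholder; it is normally tuned per dataset."""
    smoothed = gaussian_filter(raw.astype(np.float64), sigma=sigma)
    smoothed -= smoothed.min()
    denom = smoothed.max()
    return smoothed / denom if denom > 0 else smoothed
```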
References:
[1] Rafael Monroy, Sebastian Lutz, Tejo Chalasani and Aljosa Smolic. "SalNet360: Saliency Maps for omni-directional images with CNN." arXiv preprint arXiv:1709.06505 (2017).
[2] Zoya Bylinskii et al. "MIT Saliency Benchmark." (2015). http://saliency.mit.edu/results_mit300.html