Bitrate-adaptive quantisation for omnidirectional video with 3DoF+ capability

18th October 2018

Proposed by Cagri Ozcinar
Email: ozcinarc at scss.tcd.ie

Due to its large bandwidth requirement, delivery of omnidirectional video with 3DoF+ (3 Degrees of Freedom Plus) capability requires efficient compression techniques. This emerging representation responds to user head rotation along three axes (pitch, yaw, and roll) and also supports limited interactivity based on head movement, such as motion parallax and lighting changes.
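As a rough illustration only (not part of the proposal), a 3DoF+ viewing pose can be modelled as a full rotation plus a small translation that enables motion parallax. The function below is a hypothetical sketch using yaw-pitch-roll Euler angles; the name `head_pose` and the axis conventions are assumptions, not from the references.

```python
import numpy as np

def head_pose(yaw, pitch, roll, translation=(0.0, 0.0, 0.0)):
    """Sketch of a 3DoF+ head pose: a full 3-DoF rotation
    (yaw, pitch, roll) plus a small translation vector that
    models the limited positional freedom of 3DoF+."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    # Rotation about the vertical axis (yaw)
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    # Rotation about the lateral axis (pitch)
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    # Rotation about the viewing axis (roll)
    Rz = np.array([[cr, -sr, 0.0], [sr, cr, 0.0], [0.0, 0.0, 1.0]])
    R = Ry @ Rx @ Rz
    t = np.asarray(translation, dtype=float)
    return R, t
```

The translation vector is what distinguishes 3DoF+ from plain 3DoF, where only the rotation would apply.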

In this project, texture and depth maps for omnidirectional video are required to support 3DoF+ capability. The project aims to enhance the quality of this emerging format for virtual reality applications. In response to a user's head movement, the nearest views will be synthesised from existing views, and optimal quantisation parameters will be estimated for each synthesised view.
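The two steps above can be sketched in miniature: pick the source views closest to the target viewpoint, then choose a quantisation parameter (QP) for the synthesised view. Both functions below are hypothetical illustrations, assuming camera positions are known 3-D points; the distance-based QP heuristic is a placeholder, not the estimation method the project would actually develop.

```python
import numpy as np

def nearest_views(target_pos, view_positions, k=2):
    """Return indices of the k source views closest to the
    target viewpoint (simple Euclidean distance)."""
    d = np.linalg.norm(
        np.asarray(view_positions, dtype=float) - np.asarray(target_pos, dtype=float),
        axis=1,
    )
    return np.argsort(d)[:k].tolist()

def estimate_qp(baseline_dist, qp_min=22, qp_max=42, max_dist=1.0):
    """Toy heuristic (assumption, for illustration only): views
    synthesised from distant sources already contain synthesis
    error, so spend fewer bits on them, i.e. use a higher QP."""
    ratio = min(baseline_dist / max_dist, 1.0)
    return int(round(qp_min + ratio * (qp_max - qp_min)))
```

For example, with source cameras at `[0,0,0]`, `[1,0,0]`, and `[0,1,0]`, a target viewpoint near `[0.9, 0.1, 0]` selects the second and first cameras as synthesis references.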

In this project, the MSc student will study this emerging media format and develop a pipeline that improves the quality of the synthesised views of the omnidirectional video.

References

[1] Bart Kroon, Miska M. Hannuksela, Renaud Doré, Mary-Luc Champel, Jill Boyce, Common Test Conditions on 3DoF+ Visual, ISO/IEC JTC1/SC29/WG11 MPEG2018/N17467, Jan 2018, Gwangju, Korea.

[2] Renaud Doré, Bart Kroon, Miska Hannuksela, Mary-Luc Champel, Jill Boyce, Call for Test Materials for 3DoF+ Visual, ISO/IEC JTC1/SC29/WG11 MPEG2018/N17471, Jan 2018, Gwangju, Korea.

[3] N17559: Reference View Synthesizer (RVS) 2.0 manual, MPEG123, Ljubljana.

[4] N17761: 3DoF+ Software Platform description, MPEG123, Ljubljana.