Deep-learning based music localization in VR using audio-visual cues

4th June 2019

Proposed by Aakanksha Rana and Cagri Ozcinar. Email: ranaa at scss.tcd.ie or ozcinarc at scss.tcd.ie


Are you interested in developing solutions for automatic sound source localization in VR musical scenarios (jam sessions, chamber music, etc.)? This project aims to develop multi-modal deep-learning solutions for automatically detecting sound source locations in a musical VR environment using audio-visual information.


An example 360-degree music video: https://www.facebook.com/ImmersiveAudioGroup/videos/1849985398597654/
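As a rough illustration of the kind of model the project could start from, the sketch below shows a minimal two-branch audio-visual network in PyTorch that fuses a log-mel spectrogram with a video frame and regresses a sound-source direction (azimuth/elevation) on the 360-degree sphere. All layer sizes, input shapes, and the regression target are illustrative assumptions, not a prescribed architecture.

```python
# Minimal sketch (illustrative only): two-branch audio-visual fusion network
# that regresses a sound-source direction (azimuth, elevation).
import torch
import torch.nn as nn

class AudioVisualLocalizer(nn.Module):
    def __init__(self):
        super().__init__()
        # Audio branch: small CNN over a log-mel spectrogram patch.
        self.audio_net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Visual branch: small CNN over an equirectangular video frame.
        self.video_net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(4),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fusion head: concatenate both embeddings, regress (azimuth, elevation).
        self.head = nn.Sequential(
            nn.Linear(32 + 32, 64), nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, audio, frame):
        # audio: (B, 1, n_mels, n_frames) log-mel spectrogram
        # frame: (B, 3, H, W) RGB frame from the 360 video
        a = self.audio_net(audio)
        v = self.video_net(frame)
        return self.head(torch.cat([a, v], dim=1))

if __name__ == "__main__":
    model = AudioVisualLocalizer()
    audio = torch.randn(2, 1, 64, 96)    # dummy spectrogram batch
    frame = torch.randn(2, 3, 128, 256)  # dummy equirectangular frames
    print(model(audio, frame).shape)     # torch.Size([2, 2])
```

In practice the branches would be replaced by stronger pretrained audio and video encoders, and the output could instead be a per-pixel localization map over the equirectangular frame; the sketch only shows the audio-visual fusion idea.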


Requirements:

Basic understanding of deep learning

Strong Python programming skills with knowledge of PyTorch/TensorFlow

The ideal candidate should have an interest in music and VR in general, and must be able to learn new tools and skills.