V-SENSE is awarded NEM Summit Art and Design Competition Award for Virtual Play: after Samuel Beckett!


We are delighted to announce that V-SENSE was awarded the NEM Art and Design Competition Award for our work on Virtual Play: after Samuel Beckett. Congratulations to our great team on their excellent work and this great achievement!


Rafael Pagés accepting the award on behalf of the
V-SENSE Team!


Rafael Pagés presents Virtual Play: after Samuel Beckett
to the NEM Summit 2017 audience.

This project was made possible by the hard work of our team in collaboration with Neill O’Dwyer, V-SENSE Creative Collaborator, with Nicholas Johnson, Assistant Professor in the Department of Drama, The School of Creative Arts, and with Enda Bates, Assistant Professor in the Department of Electronic & Electrical Engineering, Trinity College Dublin. Congratulations to all involved!

Our team:

Konstantinos Amplianitis is a Postdoctoral Research Fellow at Trinity College Dublin and a member of the V-SENSE group. Previously, he was a software engineer at Siemens Mobility and Transportation Group in Berlin, Germany. He obtained his Ph.D. degree in 2016 from the Humboldt University of Berlin in the field of Computer Vision and Machine Learning, focusing on 3D Object Recognition. Currently, he is conducting research on segmentation/matting of humans in video sequences using Deep Learning and 3D content creation for VR/AR applications.

Jan Ondrej is Senior Research Fellow at Trinity College Dublin and a member of V-SENSE group. Previously, he was a postdoctoral researcher at Disney Research Los Angeles and Trinity College Dublin. He obtained his Ph.D. in 2011 from INRIA Rennes in France. His current research topics include Free Viewpoint Video and Virtual and Augmented Reality. Other research interests are in Crowd Simulation and Navigation, Human Behaviour Simulation and Character Animation.

Rafael Pagés is a Postdoctoral Research Fellow at Trinity College Dublin. He obtained the Ingeniero de Telecomunicación degree (integrated B.Sc. and M.Sc. Accredited by ABET) and the Doctor Ingeniero de Telecomunicación degree (Ph.D. in Communications) from the Universidad Politécnica de Madrid (UPM), Spain, in 2010 and 2016, respectively. Since 2016, he has been member of V-SENSE group at Trinity College Dublin. His research interests include 3D reconstruction, free-viewpoint video, VR/AR, computer vision, and image processing.

Congratulations also to Soundcool, Tim Murray Browne and Virtual White Cube for their entries!

Dr. Amplianitis presents at 3DCamp!


On 23rd November 2017, Dr. Amplianitis presented at 3DCamp. In his talk, Konstantinos Amplianitis presented his team’s research on dynamic content creation for VR/AR applications, developed within the V-SENSE project at Trinity College Dublin. The technology can generate a volumetric hologram of a performance from a set of video sequences recorded with a multi-camera setup. The content can then be imported and visualised on different VR/AR devices.

The system is highly scalable: it works with different camera configurations and in both indoor and outdoor scenarios, and can even generate content from videos captured with handheld consumer-quality devices.
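To give a flavour of the idea behind multi-view volumetric capture, the toy sketch below carves a visual hull: a voxel is kept only if every camera sees it inside the subject’s silhouette. Everything here is an illustrative invention (the grid size, the three orthographic axis-aligned cameras and the spherical subject), not the actual V-SENSE pipeline:

```python
# Toy silhouette-based volumetric reconstruction ("visual hull").
# Illustrative only: three orthographic cameras look along the x, y
# and z axes at a spherical subject on a small voxel grid.

N = 16  # voxel grid resolution

def sphere_silhouette(u, v, c=N // 2, r=5):
    """Orthographic silhouette of a sphere: a filled circle in the image."""
    return (u - c) ** 2 + (v - c) ** 2 <= r * r

def carve():
    """Keep a voxel only if every camera sees it inside the silhouette."""
    hull = set()
    for i in range(N):
        for j in range(N):
            for k in range(N):
                # Each view drops one coordinate (orthographic projection).
                if (sphere_silhouette(j, k) and
                        sphere_silhouette(i, k) and
                        sphere_silhouette(i, j)):
                    hull.add((i, j, k))
    return hull

hull = carve()
print(len(hull), "voxels survive carving")
```

With more (calibrated, perspective) cameras the carved volume tightens around the true shape, which is why multi-camera setups matter for this kind of content.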


For more information on the event programme, please see here.

Professor Smolic is invited to talk at the NEM Summit 2017 in Madrid!


Professor Smolic presents his talk on Content Creation for
VR, AR and Free Viewpoint Video at NEM Summit 2017!


On Wednesday, 29th November 2017, Professor Smolic was an invited speaker at the NEM Summit 2017 in Madrid. The Summit took place at the beautiful Museo Reina Sofía, Madrid. Professor Smolic was joined by Andreas Ehret, Vice President of Broadcast Technology, Dolby, Harini Hariharan, Saarland Informatics Campus, Saarland University, and many more inspiring speakers. Further information on the NEM Summit can be viewed here.

V-SENSE was selected as a finalist in the NEM Art and Design Competition for our work on Virtual Play: after Samuel Beckett. Congratulations to the team on their excellent work in reaching the finals! The competition is an opportunity for innovative technologists and artists from across Europe to submit their own creations, utilising new technologies such as immersive media (Augmented Reality / Virtual Reality), Artificial Intelligence and Machine Learning, Digital Manufacturing (e.g. 3D printing), the Internet of Things and Blockchain / Smart Contracts, to illustrate how life and creative businesses can benefit from emerging technologies, inspiring new applications for tomorrow and developing new opportunities for imaginative creativity.


Virtual Play: after Samuel Beckett

The NEM Initiative (New European Media Initiative) was established as one of the European Technology Platforms under the Seventh Framework Programme, aiming to foster the convergence between consumer electronics, broadcasting and telecoms in order to develop the emerging business sector of networked and electronic media. To respond to the new needs and requirements of the Horizon 2020 programme, the NEM Initiative enlarged its focus towards the creative industries and changed its name from Networked and Electronic Media Initiative to New European Media, dealing with Connected, Converging and Interactive Media & Creative Industries, driving the future of digital experience.

The NEM constituency includes all major European organisations working in the networked and electronic media area, including content providers, creative industries, broadcasters, network equipment manufacturers, network operators and service providers, academia, standardisation bodies and government institutions. Those actors share a common Vision and have been producing a Strategic Research and Innovation Agenda (SRIA) as well as position papers, in order to accelerate the innovative development of the new sector in a harmonised and fruitful way and to place European industry at the forefront of the information era.

NEM focuses on an innovative mix of various media and creative content forms, delivered seamlessly over technologically transparent networks, to improve the quality, enjoyment and value of life. NEM represents the convergence of existing and new technologies, including broadband, mobile and new media, together with creativity across all sectors, to create a new and exciting era of advanced personalised services.

V-SENSE participates at the Huawei Ireland Research Summit 2017!

On Tuesday, 21st November, Professor Smolic presented on Closing the content gap for VR/AR at the Huawei Ireland Research Summit 2017.

Other inspiring speakers included Simon Robinson (Co Founder & Chief Scientist – The Foundry), Dr. Andrea Cavallaro (Queen Mary University of London), Dr. Tao Xiang (Queen Mary University of London), Dr. Josephine Sullivan (KTH Royal Institute Of Technology).

Huawei is a leading global information and communications technology (ICT) solutions provider. Driven by a commitment to sound operations, ongoing innovation, and open collaboration, Huawei has established a competitive ICT portfolio of end-to-end solutions in telecom and enterprise networks, devices, and cloud technology and services. Its ICT solutions, products, and services are used in more than 170 countries and regions, serving over one-third of the world’s population. With 180,000 employees, Huawei is committed to enabling the future information society and building a Better Connected World.

Professor Smolic presents at The Science Foundation Ireland Science Summit 2017!

The Science Foundation Ireland Science Summit 2017 took place on Monday 13th November.

Professor Smolic presents on research excellence at the SFI Science Summit 2017 in Croke Park, Dublin. Picture: Jason Clarke

This year the summit focused on the theme #BelieveInScience. The Summit provided a unique opportunity for stakeholders to gain awareness of the collective output and excellent scientific research funded by Science Foundation Ireland.

SFI Showcasing Research Excellence – Professor Smolic addressed the summit and participated in a panel discussion with UNESCO-L’Oréal International Women in Science Awardees Prof Lydia Lynch (TCD) and Dr Annie Curtis (RCSI), and with Michael Zaworotko (UL), chaired by Dr Darrin Morrissey (SFI).

SFI Science Summit 2017 in Croke Park, Dublin. Picture: Jason Clarke

This year Science Foundation Ireland hosted keynote speakers Marcus du Sautoy, the Charles Simonyi Professor for the Public Understanding of Science at Oxford University, and Adrian Raftery, Professor of Statistics and Sociology at the University of Washington in Seattle and winner of the 2017 Science Foundation Ireland St. Patrick’s Day Science Medal.


V-SENSE participates in The International Conference & Exhibition on Visual Entertainment (ICEVE) 2017, Beijing, China!

Our colleague, Sebastian Knorr (pictured), spoke at The International Conference & Exhibition on Visual Entertainment (ICEVE).

The ICEVE comprised two days of exploration, discussion and demonstration of cutting-edge production tools, techniques and technology spanning film, animation, visual effects and immersive media.

Sebastian presented an overview of our work here at V-SENSE, titled Extending Visual Sensation Through Image-Based Visual Computing, and participated in the panel discussion on The Future of Virtual Reality.

For more information on the conference agenda and conference speakers, please see here.

We are hiring! Research Fellow (Postdoc) in Creative Technologies

Post Summary


The post-holder will research, develop, pilot and demonstrate a set of professional tools and techniques for making content ‘smarter’, so that it is fully adaptive in a broad, unprecedented manner: adaptive to context (which facilitates re-use), to purpose (among or within industries), to the user (improving the viewing experience), and to the production environment (so that it is ‘future-proof’). The approach is based on research into computer animation; automated classification and tagging using deep learning and semantic labelling to describe and draw inferences; and the development of tools for automated asset transformation, smart animation, storage and retrieval. These new technologies and tools will show that a vast reduction of costs and increases in efficiency are possible, facilitating the production of more content, of higher quality and creativity, for the benefit of the competitiveness of the European creative industries.

Specifically, the position will help to research and develop:

  • A framework and tools for automatically classifying, validating and finding smart assets, using deep learning and semantic labelling techniques
  • A framework and tools for the automatic transformation and adaptation of smart assets to new contexts, purposes, users and environments, and for the synthesis of new smart assets from existing ones
  • Real-time control systems for authoring animated content using smart assets, automatically synthesizing new scenes from existing ones and integrating smart assets into virtual production
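As a rough illustration of the first bullet, finding a smart asset reduces to matching its semantic labels against a query context. The sketch below is a hypothetical stand-in: the catalogue, the tag names and the Jaccard-overlap matching are invented for illustration, and the project itself envisages labels produced by deep learning rather than hand-written tags:

```python
# Hypothetical sketch of "smart asset" retrieval by semantic labels.
# The catalogue and tags are invented; real labels would come from
# automatic classification (deep learning + semantic labelling).

def jaccard(a, b):
    """Overlap between two label sets, in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def find_assets(catalogue, query, threshold=0.25):
    """Return asset names whose labels sufficiently match the query context."""
    scored = [(name, jaccard(labels, query)) for name, labels in catalogue.items()]
    return [name for name, s in sorted(scored, key=lambda x: -x[1]) if s >= threshold]

catalogue = {
    "crowd_walk_cycle": {"human", "locomotion", "crowd", "loopable"},
    "medieval_market":  {"environment", "crowd", "period"},
    "dragon_flight":    {"creature", "flight"},
}
print(find_assets(catalogue, {"crowd", "human", "street"}))
```

The same label space then supports the second and third bullets: an asset whose context labels do not match can be handed to a transformation step, rather than rejected outright.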


Standard Duties and Responsibilities of the Post

The role of the candidate will be to:

  • Develop algorithms for populating crowded scenes based on previously created examples. This will involve close collaboration with other consortium partners on developing asset descriptors using Deep Learning, and on using those to populate new scenes with naturally behaving characters.
  • Fundamental and/or applied research in Computer Animation at the intersection of Crowd Simulation, Character Animation and Deep Learning
  • Scientific publications
  • Contribution to prototype and demonstrator development
  • Overall contribution to SAUCE and teamwork
  • Supervision of PhD and other students
  • Outreach & dissemination

Funding Information

This project is funded as part of the Horizon 2020 – the Framework Programme for Research and Innovation (2014-2020).
Person Specification

Qualifications

The candidate should have a Ph.D. degree in Computer Science, Engineering, or a related field in the area of ICT with a focus on computer animation. Experience in the area of crowd simulation or character animation is highly desirable.

Knowledge & Experience (Essential & Desirable)

  • An established track record of publication in leading journals and/or conferences, in one or more sub-areas of Computer Animation.
  • Excellent knowledge of and integration in the related scientific communities.
  • The ability to work well in a group, and the ability to mentor junior researchers, such as Ph.D. students.
  • Knowledge of Machine Learning and Deep Learning is highly desirable.
  • Experience with motion capture is welcome.

Skills & Competencies

  • Good written and oral proficiency in English (essential).
  • Effective communication and interpersonal skills both written and verbal.
  • Proven aptitude for Programming, System Analysis and Design.
  • Proven ability to prioritise workload and work to exacting deadlines.
  • Proven track record of publication in high-quality venues.
  • Flexible and adaptable in responding to stakeholder needs.
  • Dedicated team player who can take responsibility to contribute to the overall success of the team.
  • Enthusiastic and structured approach to research and development.
  • Excellent problem-solving abilities.
  • Desire to learn about new products and technologies, and to keep abreast of new product, technical and research developments.


Deadline: 30th November, 2017

Further Information for Applicants
Informal enquiries to Professor Aljosa Smolic: smolica@scss.tcd.ie

To Apply:
To apply, please email a brief cover letter describing relevant experience and a PDF copy of your CV, along with names and contact information for two referees, to smolica@scss.tcd.ie.


URL Link to V-SENSE https://v-sense.scss.tcd.ie/
URL Link to The School of Computer Science www.scss.tcd.ie
URL Link to Human Resources https://www.tcd.ie/hr/


Join our team – we are hiring! New Research Fellow in Creative Technologies!


All important information is here!

Seminar presentation by Professor Julián Cabrera Quesada, Universidad Politécnica de Madrid


Title:

Stochastic optimal control of HTTP Adaptive Streaming


Abstract:

HTTP Adaptive Streaming (HAS) is becoming a key technology for audiovisual broadcasting over IP networks. This technology has been adopted and developed by important vendors such as Microsoft, Apple and Adobe, and the creation of an MPEG standard (MPEG-DASH) has also contributed to its success for multimedia broadcasting. Important IP content providers such as Netflix, Amazon and HBO are using this technology for their video-on-demand services, and traditional IPTV providers such as Movistar TV are also moving to this technology for their live broadcasting services.

One of the key elements in HAS technology is the player at the client side, which has to make decisions in order to provide the best possible video quality. These decisions have to consider the dynamic network conditions, the device features, and the user’s profile and preferences. In this talk, the behaviour of the player will be described and formulated as a Markov decision problem, and solutions based on Stochastic Dynamic Programming and Reinforcement Learning will be presented.
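To make the formulation concrete, here is a deliberately tiny sketch of such a decision problem, solved by value iteration (one form of stochastic dynamic programming). The buffer levels, network model, rewards and penalties below are invented for illustration and are not taken from the talk:

```python
# Toy HAS bitrate adaptation as a Markov decision problem, solved by
# value iteration. All numbers (probabilities, rewards, buffer sizes)
# are illustrative inventions.

P_GOOD = 0.7          # probability the network sustains the high bitrate
BUFFERS = range(4)    # buffer level in segments: 0 (empty) .. 3 (full)
ACTIONS = (0, 1)      # 0 = low bitrate, 1 = high bitrate
QUALITY = (1.0, 2.0)  # per-segment quality reward for each bitrate
STALL = -5.0          # penalty when the buffer runs dry (rebuffering)
GAMMA = 0.9           # discount factor

def step(b, a, good):
    """Next buffer level and reward for one segment download."""
    keeps_up = (a == 0) or good          # low bitrate always downloads in time
    nb = min(3, b + 1) if keeps_up else max(0, b - 1)
    return nb, QUALITY[a] + (STALL if nb == 0 else 0.0)

def outcomes(b, a):
    """The two stochastic outcomes (good / bad network) with probabilities."""
    return ((P_GOOD, step(b, a, True)), (1 - P_GOOD, step(b, a, False)))

def q_value(b, a, V):
    return sum(p * (r + GAMMA * V[nb]) for p, (nb, r) in outcomes(b, a))

def value_iteration(iters=200):
    V = [0.0] * len(BUFFERS)
    for _ in range(iters):
        V = [max(q_value(b, a, V) for a in ACTIONS) for b in BUFFERS]
    return V

def policy(V):
    """Greedy bitrate choice per buffer level."""
    return [max(ACTIONS, key=lambda a: q_value(b, a, V)) for b in BUFFERS]

pol = policy(value_iteration())
print(pol)  # bitrate chosen at each buffer level, empty -> full
```

The resulting policy is conservative when the buffer is low (low bitrate avoids the stall penalty) and aggressive when it is full, which is qualitatively how real HAS players behave; the talk replaces this toy model with learned solutions such as reinforcement learning.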

Short Bio:

V-SENSE is delighted to welcome Professor Julián Cabrera Quesada as Visiting Professor until July 2018. Professor Julián Cabrera Quesada is Associate Professor at the Department of Signals, Systems and Radiocommunications of the Telecommunication School of the Universidad Politécnica de Madrid (UPM) and a researcher in the Image Processing Group (Grupo de Tratamiento de Imágenes). He lectures in Digital Image Processing, Transmission Systems, Digital Television, Video Coding, Audiovisual Communications, and Reinforcement Learning. He has participated in more than 25 research projects funded by European programmes, Spanish national programmes and private companies. His current research interests cover several topics related to audio-visual communications: advanced video coding for UHD, 3D and multiview scenarios; depth estimation and coding; video subjective quality assessment for multiview and VR360 video; and optimisation of adaptive streaming techniques. He is also working on the application of deep learning approaches to depth estimation and 3D reconstruction.

Event information:

12-1pm, Tuesday, 24th Oct 2017
Large Conference Room, O’Reilly Institute