Seminar presentation by Professor Julián Cabrera Quesada, Universidad Politécnica de Madrid

Title:

Stochastic Optimal Control of HTTP Adaptive Streaming

Abstract:

HTTP Adaptive Streaming (HAS) is becoming a key technology for audiovisual broadcasting over IP networks. It has been adopted and developed by major vendors such as Microsoft, Apple, and Adobe, and the creation of an MPEG standard (MPEG-DASH) has also contributed to its success for multimedia broadcasting. Important IP content providers such as Netflix, Amazon, and HBO are using this technology for their video-on-demand services, and traditional IPTV providers such as Movistar TV are also moving to it for their live broadcasting services.

One of the key elements in HAS technology is the player at the client side, which has to make decisions in order to provide the best possible video quality. These decisions have to consider the dynamic network conditions, the device features, and the user's profile and preferences. In this talk, the behaviour of the player will be described and formulated as a Markov decision problem, and solutions based on stochastic dynamic programming and reinforcement learning will be presented.
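To make the formulation concrete, the sketch below casts bitrate selection as a small Markov decision problem and solves it by value iteration, one classical form of stochastic dynamic programming. All specifics here (the bitrate ladder, the discretised bandwidth states, the transition matrix, and the reward weights) are illustrative assumptions for this example, not the model presented in the talk:

```python
import numpy as np

# Illustrative HAS bitrate-selection MDP (all numbers are made-up assumptions).
BITRATES = [1.0, 2.5, 5.0]    # available representations (Mbps)
BANDWIDTH = [1.5, 3.0, 6.0]   # discretised throughput states (Mbps)

# Markov model of throughput: P[i, j] = P(next bandwidth = j | current = i)
P = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])

def reward(action, bw_state):
    """Reward video quality; penalise requesting a bitrate above the
    channel rate (a crude proxy for rebuffering risk)."""
    quality = np.log(1 + BITRATES[action])   # diminishing returns on quality
    stall = max(0.0, BITRATES[action] - BANDWIDTH[bw_state])
    return quality - 2.0 * stall

def value_iteration(gamma=0.9, tol=1e-6):
    """Solve the MDP: iterate the Bellman optimality update to convergence."""
    V = np.zeros(len(BANDWIDTH))
    while True:
        Q = np.array([[reward(a, s) + gamma * P[s] @ V
                       for a in range(len(BITRATES))]
                      for s in range(len(BANDWIDTH))])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmax(axis=1), V_new   # greedy policy, state values
        V = V_new

policy, V = value_iteration()
for s, bw in enumerate(BANDWIDTH):
    print(f"bandwidth {bw} Mbps -> request {BITRATES[policy[s]]} Mbps")
```

In a realistic player the state would also include buffer occupancy and the previously selected quality, and a reinforcement learning approach would replace the known transition matrix with estimates learned online from observed throughput.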

Short Bio:

V-SENSE is delighted to welcome Professor Julián Cabrera Quesada as Visiting Professor until July 2018. Professor Cabrera Quesada is an Associate Professor in the Department of Signals, Systems and Radiocommunications at the Telecommunication School of the Universidad Politécnica de Madrid (UPM) and a researcher in the Image Processing Group (Grupo de Tratamiento de Imágenes). He lectures in Digital Image Processing, Transmission Systems, Digital Television, Video Coding, Audiovisual Communications, and Reinforcement Learning. He has participated in more than 25 research projects funded by European programmes, Spanish national programmes, and private companies. His current research interests cover several topics related to audiovisual communications: advanced video coding for UHD, 3D, and multiview scenarios; depth estimation and coding; subjective video quality assessment for multiview and VR360 video; and optimization of adaptive streaming techniques. He is also working on the application of deep learning approaches to depth estimation and 3D reconstruction.

Event information:

12-1pm, Tuesday, 24th Oct 2017
Large Conference Room, O'Reilly Institute

Closing the Content Gap for AR and VR presented by Professor Smolic, SFI Research Professor of Creative Technologies at Trinity College

Technology Ireland Innovation Forum/Ibec with Trinity College presents Closing the Content Gap for AR and VR on Wednesday, 18th October 2017

Virtual Reality and Augmented Reality are already impacting global business. Historically associated with the entertainment industry, business is now embracing Virtual, Augmented, and Mixed Reality to gain 360-degree insights into various aspects of their organisations. This, coupled with applications addressing everyday societal needs, is driving new opportunities and markets globally.

Professor Aljosa Smolic, the SFI Research Professor of Creative Technologies at Trinity College, is a recognised leader in his field of visual computing, with particular expertise in computer graphics, computer vision, and video signal processing. His work has had a significant practical impact on the creative industries in the areas of 3D video technology, forming the basis for 3D Blu-ray and stereo 3D production technology.

So why attend?
Aljosa will talk about his research project, V-SENSE – Extending Visual Sensation through Image-Based Visual Computing, which is funded by SFI over 5 years. V-SENSE is a team of 20+ researchers in Visual Computing at the intersection of Computer Vision, Computer Graphics, and Media Signal Processing. It is building a dynamic environment where enthusiastic young scientists with different backgrounds get together to shape the future in fundamental as well as applied research projects. Topics covered include, but are not limited to:
Augmented, Virtual and Mixed Reality (AR/VR/MR)
Free-Viewpoint Video (FVV)
360-Video
Light-Field Technologies
Visual Effects and Animation

Date: 18 October 2017
Time(s): 08:00-10:30
Venue: Ibec, 84-86 Lower Baggot Street, Dublin 2

V-SENSE at the Intermedial Beckett Symposium, Saturday 14 October 2017, 11am – 7pm

Samuel Beckett changed the theatre forever by using the new media of his time. Since his death in 1989, the analogue stage and screen technologies of the 20th century have given way to various forms of digital telepresence, and experiments in translating Beckett across media abound. In partnership with the Dublin Theatre Festival and the Trinity Long Room Hub, the Trinity Centre for Beckett Studies will curate a day of presentations, conversations, and lectures by leading experts and artists to discuss the impact of intermedial performance, contemporary art, and Beckett's legacy.

Schedule:
11:00-1:00: Beckett and Theories of Intermediality
Anna McMullan — “Samuel Beckett: Intermedial Legacies”
David Houston Jones — “Samuel Beckett: Face, Installation, Embodiment”
Panel Discussants: Matthew Causey, Derval Tubridy, Catherine Laws
Chair: Nicholas Johnson

1:00-2:00: Lunch (provided)
Virtual Play installation in the Hoey Ideas Space

2:00-3:30: Beckett in Virtual Reality
Panel discussion with the members of the V-SENSE research project
Aljosa Smolic, Néill O’Dwyer, Enda Bates, Nicholas Johnson

3:30-4:00: Coffee (provided)

4:00-6:00: Beckett and Practices of Intermediality
Derval Tubridy — “Intermediality, Agency, Diversity”
Catherine Laws — “Beckett, Music, and the Intermedial”
Discussants: Ciaran Clarke, Angela Butler, Anna McMullan
Chair: Julie Bates

6:00 PM: Launch of the Trinity Centre for Beckett Studies
Jane Ohlmeyer, director of the Trinity Long Room Hub
Sam Slote, director of the Trinity Centre for Beckett Studies

The symposium is hosted by Dr Nicholas Johnson, Prof David Houston Jones, Dr Catherine Laws, Prof Anna McMullan, Dr Sam Slote, and Dr Derval Tubridy. It is kindly supported by interdisciplinary seed funding from the Trinity Long Room Hub, in partnership with the Dublin Theatre Festival, the School of Creative Arts, and V-SENSE (an SFI-funded project held by Prof. Aljosa Smolic).

This event is free, but registration is required. You can register for this event here.

Campus Location: Trinity Long Room Hub
Accessibility: Yes
Room: Neill Lecture Theatre
Event Type: Alumni, Arts and Culture, Conferences, Lectures and Seminars, Public
Type of Event: One-time event
Audience: Undergrad, Postgrad, Alumni, Faculty & Staff, Public
Cost: Free (but registration is required)

Join us at Probe with Samuel Beckett’s Play in Virtual Reality!

Samuel Beckett’s Play in Virtual Reality @ Probe (Trinity Research Night).

V-SENSE presents their inaugural creative-cultural experiment Virtual Play at Trinity Research Night (aka Probe), in the Maker Marquee at Front Square. Visitors can use a virtual reality headset to interact with Beckett’s story, re-imagined for digital culture. The project demos cutting-edge free-viewpoint video technology. Participation from the departments of Drama and Electrical and Electronic Engineering represents a significant interdepartmental collaborative effort.

Where: Trinity Research Night (aka Probe), in the Maker Marquee at Front Square.

When: 5pm – 8pm, Friday 29 September 2017.

Admission: Free all over campus.

Congratulations to Yang Chen on receiving the Best Paper Award at the Irish Machine Vision and Image Processing Conference 2017.

We are delighted to announce that our paper “Fast and Accurate Optical Flow Based Depth Map Estimation from Light Fields” by Yang Chen, Martin Alain, and Aljosa Smolic received the Jonathan Campbell Best Paper Award at the Irish Machine Vision and Image Processing Conference 2017. Huge congratulations to our PhD student Yang and all involved in this successful outcome! https://v-sense.scss.tcd.ie/?page_id=349


PhD student Yang presenting at the conference.

Intermedial Play/Virtual Play, in Association with Samuel Beckett Summer School

A public presentation of Virtual Play, the inaugural V-SENSE Creative Technologies project.

LOCATION:

Arts Technology Research Laboratory (ATRL)

DATE AND TIME:

Tue 1 August 2017, 18:30 – 20:00 IST

RESERVE TICKETS ON EVENTBRITE

DESCRIPTION:

“Are you listening to me? Is anyone listening to me? Is anyone looking at me? Is anyone bothering about me at all?”

Intermedial Play/Virtual Play is an exploration of Samuel Beckett’s Play (1963), re-imagined for digital culture. This event will present cutting-edge interdisciplinary research into Beckett in performance underway at Trinity College Dublin, showcasing a screened live-stream of Play (sent via a PTZ robotic camera into ATRL) and the initial 3D renderings from Play in virtual reality (VR). A practice-as-research collaboration between the School of Creative Arts/Trinity Centre for Beckett Studies, the School of Computer Science and Statistics, and the Department of Electrical and Electronic Engineering, this project is led by Néill O’Dwyer (V-SENSE), Nicholas Johnson (Creative Arts), and Enda Bates (Music Media Technologies).

In Beckett’s script, three heads that are “lost to age and aspect” are provoked into speech by a light Beckett described as an “interrogator,” and then forced to repeat the sorry tale of their love triangle (apparently into infinity). The play inaugurated a new approach in Beckett’s theatre and opened the period of his later, image-based works. In its structure, Play seems to question the very essence of the act of performance, but it does so via the technologies of Beckett’s day (remotely controlled theatrical lighting). Responding to Beckett’s deep engagement with the problems of performance and its interaction with technology, this project investigates how narrative, perception, communication, and embodiment have all been altered through contemporary media, and asks how they might operate in the media of the future.

This is the inaugural artistic-cultural experiment under the creative technologies remit of the V-SENSE research project (funded by Science Foundation Ireland), held by Prof. Aljosa Smolic. It features virtuoso performances by Colm Gleeson, Maeve O’Mahony, and Caitlin Scott. Following the viewing of the live-stream of Intermedial Play, the team will demonstrate the virtual reality technology for Virtual Play, and after questions, participants may experience the virtual realm themselves.

V-SENSE talk: Dr. Ronny Hänsch, Post-Doctoral researcher, TU Berlin

Speaker: Dr. Ronny Hänsch, Post-Doctoral researcher, TU Berlin

Venue: Maxwell Theatre in the Hamilton Building

Date & Time: Thursday 1st June at 11am

Title: From 2D to 3D – and back

Abstract: The work of the Computer Vision and Remote Sensing Department of the Technical University Berlin spans a wide range of research areas, including traditional and more exotic methods of 3D reconstruction (e.g. TomoSAR – the estimation of 3D information from radar images), single-image depth estimation, the use of prior knowledge for shape completion and correction, shape abstraction, and object detection in 3D data.

After a short overview of the department’s recent work, a small selection of methods is discussed in detail. The first part introduces a multi-view stereo approach with slightly changed data acquisition and pre-processing, which leads to astonishing results over weakly textured areas. The second part addresses two examples of further processing point cloud data: near-real-time object detection and shape abstraction. The last part of the talk covers an approach to evaluating structure-from-motion and multi-view stereo methods with synthetic images that – besides being photo-realistic – contain many image characteristics of real cameras.

Speaker Bio: Ronny Hänsch received the Diploma degree in computer science and the Ph.D. degree from the Technische Universität Berlin, Germany, in 2007 and 2014, respectively. His research interests include computer vision, machine learning, object detection, neural networks, and random forests. He has worked in the field of object detection and classification from remote sensing images, with a focus on polarimetric synthetic aperture radar images. His recent research focuses on the development of probabilistic methods for 3D reconstruction by structure from motion, as well as ensemble methods for image analysis.

Dr Laura Toni Talk Invitation: Wednesday 26th April at 2pm

Speaker: Dr Laura Toni, Lecturer, University College London

Venue: Large Conference Room, O’Reilly Institute

Date & Time: Wednesday 26th April at 2pm

Title: Navigation-Aware Communications for Interactive Multiview Video Systems

Abstract: Recent advances in video technology have moved research toward novel interactive multiview services, such as 360-degree video and virtual reality, where users can actively interact with the scene. Because of this interactivity, users are no longer seen as mere terminals interconnected by links but as active players in the communication, with all possible levels of interactivity: from passively consuming content to actively crafting one’s own media stream and social experience. This user-centric paradigm calls for the adaptation of streaming policies to both the nature of the content and the social dynamics among users, in order to face the astonishing diversity of novel networks.

In this talk, we will provide an overview of the adaptive communication paradigms that need to be designed to cope with both the massive traffic of multiview data and the interactivity level of the users. Then, we will describe in more detail novel navigation-aware frameworks for optimal coding, streaming, and multiview processing in adaptive streaming. We conclude with a perspective on open challenges in the field of 360-degree video, stressing in particular the need to learn users’ behaviours in real time in order to optimally design future interactive streaming systems.

Speaker Bio: Laura Toni received the M.S. and Ph.D. degrees in electrical engineering from the University of Bologna, Italy, in 2005 and 2009, respectively. In 2007, she was a visiting scholar at the University of California at San Diego (UCSD), CA, and since 2009 she has been a frequent visitor to UCSD, working on media coding and streaming technologies.

Between 2009 and 2011, she worked in the Tele-Robotics and Application (TERA) department at the Italian Institute of Technology (IIT), investigating wireless sensor networks for robotics applications. In 2012, she was a post-doctoral fellow at UCSD, and between 2013 and 2016 she was a post-doctoral fellow in the Signal Processing Laboratory (LTS4) at the Swiss Federal Institute of Technology (EPFL), Switzerland.

Since July 2016, she has been a Lecturer in the Electronic and Electrical Engineering Department of University College London (UCL), UK.

Her research mainly involves interactive multimedia systems, decision-making strategies under uncertainty, large-scale signal processing and communications.

Launch of the Trinity Centre for Creative Technologies & Media Engineering (CHIME) Thursday 23rd February

Provost Patrick Prendergast opened the Trinity Centre for Creative Technologies & Media Engineering (CHIME) on Thursday 23rd February. The event took place in the newly refurbished Stack B in the IFSC.

CHIME hosts 14 academic staff and 50 full-time researchers from two internationally renowned research groups: Graphics, Vision and Visualization (GV2) in the School of Computer Science and Statistics (SCSS); and Media Signal Processing (Sigmedia) in the School of Engineering.

SCSS recently recruited SFI Research Professor of Creative Technologies Aljosa Smolic, who co-directs the Centre along with EE Professor Anil Kokaram, Head of Sigmedia. Prof. Smolic’s research project, V-SENSE, aims to extend the dimensions of visual sensation through novel algorithms and workflows for image-based visual computing, expanding the classical 2D video viewing experience common today and enabling and supporting new consumer behaviours and preferences in the consumption and creation of such content. SCSS Professor of Visual Computing Carol O’Sullivan has also recently returned from a career break at Seoul National University and Disney Research Los Angeles to resume her role as Head of GV2 and to continue her research on animation and perception. Together they lead a team of world-class researchers who are continuing to build on the considerable achievements of both research groups, which include high-impact publications and successful spin-outs, including Havok, SureWash, and Green Parrot Pictures (acquired by Google).

Commenting at the event, Provost Patrick Prendergast said: “The Centre’s ambition is to establish Trinity as a world-class centre of excellence in Creative Technologies & Media Engineering, and to push the boundaries of visual and auditory processing. That’s a high ambition but look what’s already been achieved! With the talent, leadership and commitment here, I have full confidence.”