Extended Reality Ulysses (XR Ulysses)
28th April 2020

Description
In these extended reality (XR) applications, users are invited to enter the world of James Joyce’s Ulysses through the cutting-edge technologies of augmented reality (AR) and virtual reality (VR). The VR application allows audiences anywhere in the world to experience the sites and associated scenes from the story via a VR headset, while the AR application allows audiences to physically visit those locations and witness dramatic recreations of the scenes using a mobile phone, tablet or head-mounted display. Both are prototype mixed reality (MR) storytelling applications within a broader project that aims to depict multiple scenes of the book using cutting-edge live-action 3D video capture, known as Volumetric Video (VV), and photographic 3D scenic reconstruction, known as photogrammetry [explained below].
These prototypes recreate the opening scene of the book, a dialogue between Buck Mulligan and Stephen Dedalus (Joyce’s pseudonymous doppelgänger) on the roof of the Martello Tower at Sandycove. The mode of audience interaction is embodiment: the user takes on the character of Stephen Dedalus, with Mulligan positioned as the interlocutor, so Mulligan is the only character visible in the scene. The user is not required to articulate Dedalus’s words; these are delivered by the app itself and heard as if spoken in the first person, whereas Mulligan’s voice is perceived as coming from the other character in the dialogue. The excerpt reinterprets the section in which Stephen accuses Mulligan of speaking insultingly about the death of Stephen’s mother. Mulligan, a medical student, scrambles to defend his position, but his attempt to justify himself in a speech about the normality of death only causes further offence to his sensitive listener. Despite being only three minutes long, the speech and the dialogue around it are deeply expressive and provide an insight into Joyce’s dexterous handling of language.
Background
In 1922 James Joyce published Ulysses, one of the most widely acclaimed masterpieces of twentieth-century modernist literature. Nearly 100 years on, it remains among the most celebrated works of fiction in the world. The book depicts a day in the life of various Dubliners, including Leopold Bloom, an advertising canvasser; his wife, Molly; and Stephen Dedalus, a young man who aspires to be a writer, rather like Joyce himself. The novel was innovative in its use of a range of literary techniques, encompassing realism, interior monologue, dramatic expressionism and stream-of-consciousness writing.
One notable aspect of the book is its geographic specificity. Set in Dublin on 16 June 1904, the story follows the movements of Bloom and his acquaintances around the city and parts of its surrounding suburbs. Their journeys reference several famous landmarks, but as the book’s fame has grown it has itself transformed many otherwise unremarkable sites into locations of note.
An annual literary pilgrimage tracing the locations in the book has grown up around the novel. Named Bloomsday, it began in Dublin in 1954, when a group of writers including Flann O’Brien and Patrick Kavanagh traversed the city in a horse-drawn carriage. The tradition involves dressing in the fashion of the day and retracing Bloom’s steps: visiting many of the sites mentioned in Ulysses, eating and drinking, and re-enacting scenes from the book. Every year this celebration attracts Joycean devotees, scholars and enthusiasts from all over the world. Our MR applications aim to tap into the rich tradition of re-enactment surrounding this annual event by recreating several of Joyce’s scenes in both VR and AR.
Technical Explanation
The techniques we used for creating the scene and characters are based on volumetric photography and video. They vary between the AR and VR versions because, in AR, there is no need to recreate the scene: the environment the user physically occupies serves as the setting. This means the staging (mise-en-scène) must be very carefully considered, because the viewing environment must give meaning and context to the action taking place. VR, by contrast, is a three-dimensional (3D) simulation of the real world that can be navigated virtually by an avatar, so the scenic elements must be assembled using 3D software that facilitates world-building: a game engine.
Photogrammetry
In the VR version, the static elements (i.e. the scene and architecture) are reproductions of the rooftop of the real Martello Tower at Sandycove. We used the volumetric photographic process called photogrammetry to recreate the tower rooftop. This process involves taking several hundred photographs of the subject from as many angles as possible and then feeding them into software, which estimates the camera positions and calculates the volume of the structure. Once the volume has been calculated, the software stitches the photos together and textures them onto the 3D volume, producing a photorealistic 3D simulation of the object. Many commercial and open-source software packages are available for this process; we used an open-source one called Meshroom by AliceVision.
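As a concrete illustration, the sketch below automates this pipeline by driving Meshroom’s command-line interface from Python. It is a minimal sketch only: it assumes Meshroom’s batch executable (named meshroom_batch in recent releases) is on the system PATH, and the folder names are placeholders, not part of our project.

```python
# Minimal sketch: run Meshroom's default photogrammetry pipeline over a
# folder of photographs. Assumes the `meshroom_batch` CLI is installed and
# on the PATH; exact flags may vary between Meshroom releases.
import subprocess
from pathlib import Path

PHOTO_DIR = Path("tower_photos")   # input: overlapping photos of the rooftop
OUTPUT_DIR = Path("tower_model")   # output: textured 3D mesh and textures


def run_photogrammetry(photos: Path, output: Path) -> None:
    """Run camera pose estimation, dense reconstruction, meshing and
    texturing via Meshroom's batch tool."""
    output.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["meshroom_batch", "--input", str(photos), "--output", str(output)],
        check=True,
    )


if __name__ == "__main__":
    run_photogrammetry(PHOTO_DIR, OUTPUT_DIR)
    # The resulting textured mesh can then be imported into a game engine
    # as the static VR scene.
```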
Volumetric Video
The dynamic elements (i.e. the characters/actors) are common to both the AR and VR versions of the app, and they are the more cutting-edge element of the project. The characters are fabricated using the process of volumetric video (VV), a recent and growing software innovation that converts live-action footage into dynamic three-dimensional objects (similar to 3D animated characters). These are compatible with AR and VR environments because they can be viewed from any desired perspective. There is currently no widely available, off-the-shelf software for this process, because it is highly complicated and involves several production and post-production stages that have not yet been consolidated into a single package. The intellectual property remains largely within the domain of computer science research groups and innovative start-up companies. We, at V-SENSE, are championing a method that relies solely on photographic data, whereas other solutions involve depth sensors. This makes our capture rig and data set more lightweight, but post-production relies more heavily on complicated, processor-intensive computer vision algorithms.
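Although the reconstruction pipeline itself is not consolidated into one tool, its output typically takes a simple form: a sequence of per-frame textured meshes played back like film frames. The sketch below is a minimal standard-library illustration of that playback logic; the file-naming scheme and the 30 fps rate are assumptions for the example, not details of our pipeline.

```python
# Minimal sketch: select which per-frame mesh of a volumetric video clip
# (e.g. mulligan_0001.obj, mulligan_0002.obj, ...) should be displayed at a
# given playback time. A game-engine script would swap the visible mesh
# accordingly on every rendered frame.
from pathlib import Path

FPS = 30  # assumed capture/playback rate for this example


def load_sequence(folder: str) -> list[Path]:
    """Collect the per-frame mesh files in playback order."""
    return sorted(Path(folder).glob("mulligan_*.obj"))


def frame_for_time(sequence: list[Path], elapsed_seconds: float) -> Path:
    """Return the mesh to display at a given time, looping at the clip end
    (the scene runs about three minutes)."""
    index = int(elapsed_seconds * FPS) % len(sequence)
    return sequence[index]


if __name__ == "__main__":
    frames = load_sequence("volumetric_capture")
    if frames:
        print("Mesh at t=1.5s:", frame_for_time(frames, 1.5))
```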
VV technology is disrupting the MR/XR sector because it affords the participation of actors, performers, directors and theatre/film-makers in the VR discourse, which has been conventionally dominated by computer scientists and animators.
Spatial Audio
We use dynamic spatial audio to give the user the perception of embodying Dedalus. The user hears Stephen’s voice as a deep, resonant tone, in the way a speaker perceives their own voice, whereas Mulligan’s voice is perceived as coming from another source outside the body; that is, it is subject to 1) direction and 2) distance. The spatial audio toolkits for VR and AR can be programmed so that 1) when the user looks away from the character, the voice is perceived directionally, i.e. it sounds louder in one ear than in the other, and 2) when the user moves closer to the audio source it grows louder, and when they move further away it grows quieter. These spatial audio toolkits are themselves relatively cutting-edge technology, and they have become deeply significant in MR applications because they are crucial to the user’s sense of immersion in the story world.
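The geometry behind these two cues is straightforward. The sketch below, a minimal standard-library example rather than the toolkit we used, approximates 1) direction as constant-power stereo panning from the angle between the listener’s facing direction and the source, and 2) distance as inverse-distance attenuation; real XR audio toolkits add full HRTF filtering on top of this.

```python
# Minimal sketch of the two spatialisation cues: direction (panning) and
# distance (attenuation), on the horizontal plane.
import math


def spatial_gains(listener_pos, listener_yaw, source_pos):
    """Return (left_gain, right_gain) for a mono source.

    Positions are (x, z) tuples; listener_yaw is the heading in radians,
    with yaw 0 meaning the listener faces +z.
    """
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    distance = max(math.hypot(dx, dz), 1.0)  # clamp to avoid blow-up
    attenuation = 1.0 / distance             # cue 2: farther == quieter

    # Angle of the source relative to where the listener is facing.
    relative = math.atan2(dx, dz) - listener_yaw
    pan = math.sin(relative)                 # -1 = hard left, +1 = hard right

    # Constant-power panning: cue 1, louder in the nearer ear.
    theta = (pan + 1) * math.pi / 4
    return attenuation * math.cos(theta), attenuation * math.sin(theta)


if __name__ == "__main__":
    # Mulligan stands 2 m ahead and 1 m to the listener's right:
    # the right-ear gain comes out higher than the left.
    print(spatial_gains((0.0, 0.0), 0.0, (1.0, 2.0)))
```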
Conclusion
Not everyone can travel to Dublin on Bloomsday, and we imagine that by using this virtual reality technology to recreate locations and re-enactments from Ulysses, we can help bring this quintessential work of modern literature to a far wider global audience. In light of the restrictions on world travel arising from the coronavirus pandemic in 2020, the need to experience literature and intangible cultural heritage in innovative, virtual ways will only continue to grow. We believe that with projects such as Extended Reality Ulysses we are meeting this need and are well placed to meet future demand.
OUTPUTS / PUBLICATIONS
O’Dwyer, Néill, Gareth W. Young, and Aljosa Smolic. “Extended Reality Ulysses Demo.” In ACM International Conference on Interactive Media Experiences, 237–40. Aveiro, Portugal: ACM, 2022. https://doi.org/10.1145/3505284.3532816.
O’Dwyer, Néill, Gareth W. Young, and Aljosa Smolic. “XR Ulysses: Addressing the Disappointment of Cancelled Site-Specific Re-Enactments of Joycean Literary Cultural Heritage on Bloomsday.” International Journal of Performance Arts and Digital Media 18, no. 1 (January 2, 2022): 29–47. https://doi.org/10.1080/14794713.2022.2031801.
O’Dwyer, Néill, Gareth W. Young, Aljosa Smolic, Matthew Moynihan, and Paul O’Hanrahan. “Mixed Reality Ulysses.” In SIGGRAPH Asia 2021 Art Gallery, 1. SA ’21. New York, NY, USA: Association for Computing Machinery, 2021. https://doi.org/10.1145/3476123.3487880.