Los Angeles, CA (March 23, 2015)—Virtual reality is hardly a new concept, having existed in the imagination, if not in a physical manifestation, for decades. But over recent years, VR has started to enter the popular consciousness via events such as Facebook’s purchase of hardware maker and content creator Oculus for $2 billion and the launch of Google’s inexpensive Cardboard smartphone-driven headset.
Vangelis Lympouridis looks on as musician and humanitarian activist Peter Gabriel experiences Project Syria at the World Economic Forum in Davos, Switzerland.

This year’s Sundance Film Festival in January generated plenty of media buzz, with six of the films in the New Frontier category, plus various art installations, presented in VR. Those ranged from CGI and live-action shorts produced in collaboration with VR headset makers and entities such as the Stan Winston School of Character Arts, to a short spinoff from Twentieth Century Fox’s Wild movie.
But as noted by Vangelis Lympouridis, Ph.D., visiting scholar at the School of Cinematic Arts, University of Southern California, “The first buzz for VR at Sundance was actually back in 2012.” That was the year Lympouridis visited the festival with Nonny de la Peña, a former print journalist, research fellow in immersive journalism at USC, and CEO and co-founder of VR producer Emblematic Group, who was presenting her VR short, Hunger in Los Angeles. The six-minute-plus immersive journalism piece recreates the plight of the food-poor through the use of CGI visuals (machinima) and a live soundtrack.
Lympouridis met de la Peña through MxR, an immersive research lab that is part of USC’s Institute for Creative Technologies, signing up to work with the director on Project Syria. “My contribution was because I have a Master’s in sound design from the University of Edinburgh in Scotland,” explains Lympouridis, a native of Greece.
“The founder of the World Economic Forum came to the lab; he’d seen Hunger in Los Angeles and commissioned us to do a work about Syria to be presented at the WEF in January 2014,” he recalls. Having settled on an event on which to focus, a mortar attack on a refugee camp, they began searching for first-hand audio.
“What attracted our attention was a video with a little girl singing on camera when a mortar hits next to her—a very intense story. But the video stopped just after the explosion. We didn’t have enough audio material to continue the experience.
“So I searched the internet, and found two videos that were recorded right after the mortar hit. It gave us two things: the sound of the aftermath, and images from the surrounding environment, which we stitched together to create panoramas that we then used to make an exact model of the neighborhood where the incident happened.”
One challenge in VR currently is the need to produce a project on a game engine such as Unity, which unfortunately does not yet handle audio to the satisfaction of a sound designer such as Lympouridis. “As you record surround, you capture a space. If you put that in a game engine, it will try to put it again in the environment of the existing space. That creates all kinds of problems,” he explains. Consequently, “It’s better to use dry sounds that get re-spatialized on the game engine side.”
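The re-spatialization approach Lympouridis describes can be illustrated with a minimal sketch. This is not Unity’s actual audio pipeline (a real engine applies HRTFs or speaker panning per audio frame); the function below is a hypothetical example showing the basic idea of placing a dry mono source in a scene with distance attenuation and constant-power panning.

```python
import numpy as np

def spatialize(dry, source_pos, listener_pos):
    """Place a dry mono signal in a 2-D scene using inverse-distance
    attenuation and constant-power stereo panning.
    Illustrative sketch only, not an engine-accurate spatializer."""
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    dist = max(np.hypot(dx, dz), 1.0)          # clamp to avoid blow-up near 0
    gain = 1.0 / dist                           # inverse-distance rolloff
    azimuth = np.arctan2(dx, dz)                # angle to the source, radians
    pan = np.clip(azimuth / (np.pi / 2), -1.0, 1.0)
    left = gain * np.cos((pan + 1) * np.pi / 4) * dry   # constant-power law
    right = gain * np.sin((pan + 1) * np.pi / 4) * dry
    return np.stack([left, right], axis=1)

# a dry 440 Hz tone placed ahead and to the right of the listener
t = np.linspace(0, 1, 44100, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
stereo = spatialize(tone, source_pos=(2.0, 2.0), listener_pos=(0.0, 0.0))
```

Because the dry source carries no baked-in room acoustics, the engine is free to reposition it as the listener moves, which is exactly the problem pre-recorded surround material creates.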
As Lympouridis teaches his students, “You have to pre-design, pre-compose, pre-mix and do all these cumbersome things that require human care and attention. Machines are not this unbelievable system that can do everything for you. We make the decisions and take care to deliver these things. Machines just represent our intentions.”
The VR environment at USC in which Project Syria was developed enables the observer to walk around a 20-by-20-foot space. “You walk in as the experience begins, then around this neighborhood, where all the sound is localized. Project Syria is a merge of the actual sound from the original videos that we found, then different elements pre-composed and arranged in space in order to increase the sense of presence,” such as cars passing, people talking and birds chirping.
“The singing girl is spatialized, but when the explosion happens, I decided to deliver it in stereo. I took the sound of the original explosion and pasted another explosion on top, and some earthquake sounds that I pitch-shifted lower for a sense of rumble and tension. Then I added a tone, like you hear after an explosion. Everything was carefully pre-mixed and mastered.”
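The layering-and-pitch-shifting technique he describes can be sketched in a few lines. The functions below are hypothetical illustrations, not the tools used on Project Syria; a crude resample both lowers the pitch and lengthens the sound, which suits a rumble layer.

```python
import numpy as np

def pitch_shift_down(signal, semitones):
    """Crude combined pitch-and-speed shift by resampling: playing
    samples back more slowly lowers the pitch and stretches the tail."""
    factor = 2 ** (semitones / 12.0)            # factor < 1 lowers the pitch
    old_idx = np.arange(len(signal))
    new_idx = np.linspace(0, len(signal) - 1, int(len(signal) / factor))
    return np.interp(new_idx, old_idx, signal)

def layer(*tracks):
    """Sum tracks of different lengths, padding the short ones with silence."""
    n = max(len(t) for t in tracks)
    mix = np.zeros(n)
    for t in tracks:
        mix[:len(t)] += t
    return mix / len(tracks)                    # simple headroom scaling

# layer the original impact with a copy shifted an octave down for rumble
explosion = np.sin(2 * np.pi * 60 * np.linspace(0, 1, 44100, endpoint=False))
rumble = pitch_shift_down(explosion, -12)
mix = layer(explosion, rumble)
```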
At the recent Sundance festival, he reports, producers Felix & Paul talked about using binaural sound on their short for Fox’s Wild. “They said, ‘Now is the era of the real.’ I could not disagree more. Binaural and microphone arrays are interesting to use in context, but you have to know how to use them. And at the end of the day, the display does use headphones,” rather than a larger, immersive playback environment.
This article appeared in the March 2015 issue of Pro Sound News as “Virtual Reality: A Trending Frontier.”
In live action, binaural may be appropriate if the context is right, he continues, and there are now assets available, such as libraries of binaural environments and effects. “There are interesting tools, too—one is coming from Two Big Ears. They do smart algorithms that can do binaural representations of sound,” he says.
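The binaural cues those tools reproduce can be approximated in a very rough sketch: an interaural time difference (a small delay at the far ear) plus an interaural level difference (head shadowing). This is a hypothetical illustration, not the algorithm used by Two Big Ears or any production renderer, which convolve with measured HRTFs instead.

```python
import numpy as np

def binaural_pan(mono, azimuth_deg, sample_rate=44100):
    """Rough binaural cue sketch: interaural time difference (ITD)
    as a sample delay plus a small interaural level difference (ILD)."""
    head_radius = 0.0875                          # metres, average head
    c = 343.0                                     # speed of sound, m/s
    az = np.radians(azimuth_deg)
    itd = head_radius / c * (az + np.sin(az))     # Woodworth ITD model
    delay = int(round(abs(itd) * sample_rate))
    ild = 10 ** (-3.0 * abs(np.sin(az)) / 20)     # up to ~3 dB shadowing
    near = np.concatenate([mono, np.zeros(delay)])
    far = np.concatenate([np.zeros(delay), mono * ild])
    if azimuth_deg >= 0:                          # source to the right
        left, right = far, near
    else:
        left, right = near, far
    return np.stack([left, right], axis=1)

# impulse placed 90 degrees to the right: right ear leads, left is delayed and quieter
click = np.zeros(256)
click[0] = 1.0
out = binaural_pan(click, 90)
```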
“But when it comes to synthesized experiences, like we were doing, sound design is the way to go. The important lesson I took from Sundance is that it’s not about the technology; it’s about how you carefully design the sound the same way you design the image.”