GameSoundCon Ponders Realities of VR

LOS ANGELES, CA—The twelfth annual GameSoundCon video game music and sound design conference returned to Los Angeles at the end of September, presenting five concurrent session tracks, including a new full day on Virtual and Augmented Reality plus a day focusing on academic and research topics. This year, the two-day event, organized by industry veteran Brian Schmidt, executive director of GameSoundCon, featured more than 40 top names from the world of game music and sound, and added new lunchtime roundtables: Women in Game Audio and Game Audio Education.

Christopher Hegstrom, formerly with LucasArts, Electronic Arts, Sony Computer Entertainment and Microsoft, got the Virtual Reality Audio track underway, talking about past, present and future VR audio workflows. “Anybody who works in VR audio right now will tell you that it’s the Wild West,” he said, adding that a lot of hardware, middleware, software and players are currently trying to solve some really hard problems, and that “not everybody has the right answers.”

VR experiences are not necessarily linear or interactive; they may instead be multilayered. “The user is going to miss something, because you can’t force them to look at stuff,” he said. But in any interactive experience, “sound is a huge tool. It can guide the user.”

Stereo is a crowded field, while 3D audio remains sparsely populated, he continued. Yet sound design for VR must typically still be done in 2D, referencing a flat screen, which is not ideal.

“We are borrowing from academia for VR. That’s the best way we have of understanding these concepts. We’re borrowing spatial integration from games,” from engines such as Unity and Unreal, he said.

Debugging 3D audio in games is difficult, Hegstrom observed. “Everything is alpha or beta; everything.” Also beware of headphones that color the audio, such as Beats. “And don’t ever review with noise-canceling headphones,” he added.

Looking to the future, Hegstrom offered a long list of predictions: “I think 3D audio can transcend VR with head-tracking speakers. VR is going to get social at some point; sharing VR is the only way it’s going to hit critical mass. Once we’re working in VR, the keyboard and mouse are going to seem antiquated; it’s going to be really cool once we’re free of these tethers. I think VR is going to be a Trojan horse for 3D music. It could potentially jumpstart home listening again.”

Scott Gershin and Viktor Phoenix from Technicolor presented some advanced audio techniques during their “Beyond 360” session. “We can create some incredibly realistic and immersive environments, but it doesn’t mean that you should,” said Phoenix. “It’s important to define the rules of your environment, but it’s almost as important to know when to break those rules.”

The whole point is to make the audience feel that the illusion is real, said Gershin. “Sometimes reality is incredibly boring. But a movie is not the way you hear—it’s exaggerated. We have choices to make. VR is not TV, movies or games. It’s a new way to tell a story. I think VR is closer to doing a play than doing TV.”

Because VR is such a new experience, the first thing that a viewer typically does after donning the headset is to twirl around. The equivalent in television is the establishing shot, which shows the viewer where the action is taking place. “Audio is going to give you that style. It’s going to give you information as to where you are,” Gershin predicted.

Concluding, Gershin commented, “VR right now is a lot of gimmick. But now we get to say, let’s tell a story. I think that’s exciting.”

Composer Michael Bross shared his experiences writing three hours of music, of which two and a half hours were used, for the Insomniac Games VR title, Edge of Nowhere. The score blends what Bross calls “designed atmospherics”—heavily processed and pitch-shifted guitar feedback, piano, cymbal, violin and cello—with a 52-piece orchestra, which was recorded at Ocean Way in Nashville.

With regard to working in the VR medium, he said, “One thing I would have liked to have done was 3D, spatialized mixes. The stereo was beautiful, but being able to use not just the horizontal mix space but also being able to use the vertical space would have been much more immersive. It also leaves a lot more room for the sound design and dialog.”

Secondly, said Bross, “I would have liked to move away from head-locked mixes. I would have liked to have created a four-channel mix where, when the head moves, the player hears different aspects of the mix.”
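
Bross offered no implementation details, but the concept is straightforward to sketch: anchor four music stems at fixed world angles and re-pan them from the listener’s head yaw, so that turning the head shifts the balance of the mix. The Python/NumPy snippet below is a minimal illustration of that idea; the stem angles, the equal-power panning law and every name in it are assumptions for the sketch, not anything from the Edge of Nowhere production.

```python
import numpy as np

# Sketch (hypothetical): four music stems anchored at fixed world-space
# azimuths. As the listener's head yaws, each stem is re-panned, so
# turning the head exposes "different aspects of the mix."
STEM_ANGLES = np.radians([0.0, 90.0, 180.0, 270.0])  # front, right, back, left

def quad_mix(stems, head_yaw):
    """Fold four mono stems into a stereo pair for a given head yaw (radians)."""
    left = np.zeros_like(stems[0])
    right = np.zeros_like(stems[0])
    for stem, angle in zip(stems, STEM_ANGLES):
        rel = angle - head_yaw                # stem azimuth relative to the head
        pan = np.sin(rel)                     # -1 = hard left, +1 = hard right
        theta = (pan + 1.0) * np.pi / 4.0     # equal-power pan angle, 0..pi/2
        left += stem * np.cos(theta)
        right += stem * np.sin(theta)
    # Note: plain stereo panning folds front and back together; a real
    # engine would binauralize each stem with HRTFs instead.
    return left, right

# Example: one second of four test-tone stems, head turned 45 degrees right.
sr = 48000
t = np.arange(sr) / sr
stems = [0.25 * np.sin(2 * np.pi * f * t) for f in (220, 330, 440, 550)]
l, r = quad_mix(stems, np.radians(45))
```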

Lastly, he said, “These days, we tend to mix with more emphasis on the front. The player in VR can move in any direction.” In the future, it would be interesting to break with that tradition, much as the music in the Oculus game Farlands does, he said.

The final panel of the day offered some discussion on the utility of Ambisonics. It can be really useful for backgrounds and any kind of ambience, said Two Big Ears founder Varun Nair. “It can optimize your workflow and free up your voices. You can always bring in single objects to add detail.”
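
Nair’s “free up your voices” point follows from how Ambisonics works: any number of ambience sources sum into one fixed-size bed (four channels at first order), and head-tracking then costs a single rotation of that bed rather than an update per source. The rough Python/NumPy sketch below uses the standard first-order B-format encoding equations; normalization and channel-ordering conventions differ between formats (FuMa vs. ambiX), and the function names here are illustrative, not Two Big Ears’ API.

```python
import numpy as np

# Sketch (illustrative): encode mono sources into a first-order
# Ambisonic bed, then rotate the whole bed for head-tracking.
# Uses FuMa-style W/X/Y/Z; other formats order and scale differently.

def encode_foa(mono, azimuth, elevation):
    """Encode a mono signal at (azimuth, elevation), in radians,
    into first-order B-format channels W, X, Y, Z."""
    w = mono / np.sqrt(2.0)                          # omnidirectional
    x = mono * np.cos(azimuth) * np.cos(elevation)   # front-back
    y = mono * np.sin(azimuth) * np.cos(elevation)   # left-right
    z = mono * np.sin(elevation)                     # up-down
    return np.stack([w, x, y, z])

def rotate_yaw(bformat, yaw):
    """Rotate the entire sound field around the vertical axis: one cheap
    operation no matter how many sources were encoded into the bed."""
    w, x, y, z = bformat
    c, s = np.cos(yaw), np.sin(yaw)
    return np.stack([w, c * x - s * y, s * x + c * y, z])

# Example: three ambience sources collapse into one four-channel bed;
# a head turn touches only the bed, never the individual sources.
sr = 48000
t = np.arange(sr) / sr
sources = [  # (signal, azimuth, elevation)
    (np.sin(2 * np.pi * 200 * t), np.radians(30), 0.0),
    (np.sin(2 * np.pi * 300 * t), np.radians(150), 0.0),
    (np.sin(2 * np.pi * 400 * t), np.radians(-90), np.radians(20)),
]
bed = sum(encode_foa(sig, az, el) for sig, az, el in sources)
bed = rotate_yaw(bed, np.radians(15))  # listener turned the head 15 degrees
```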

GameSoundCon
gamesoundcon.com