Film sound is relatively straightforward; game sound is not. That was the overriding message of GameSoundCon, held for the ninth year, and attracting its biggest attendance yet, in Los Angeles immediately prior to the Game Developers Conference at the beginning of November.
“It’s full of little gotchas and little tics and things that will trip you up,” elaborated Brian Schmidt in his Introduction to Game Audio session. Schmidt, the founder of GameSoundCon, began creating game audio at Williams Electronics in 1987 before spending 10 years at Microsoft on the nascent Xbox project; he is a founding member of the Game Audio Network Guild (GANG) and, in 1999, helped successfully lobby NARAS to make game soundtracks eligible for the Grammy Awards.
“Working on software is a different mindset, a different process, a different flow, than working on music, film, commercials or television,” he added.
In games, there is a big caveat: “Adding something cool necessarily takes away something from some other aspect of the game. It literally is a zero-sum game.”
Game consoles and mobile devices have a finite amount of memory and, whatever the storage medium, can only support so many data streams of a certain bandwidth. These restrictions mean that a memory budget is allocated for audio (and the other game components) during the early technical design meetings.
“It’s always too small,” said Schmidt. Consequently, the audio team must decide upfront which sounds must be in RAM for instant access (footsteps and gunshots, for example) and which, typically music, ambiences and off-screen dialog, can stream from the storage medium with some amount of latency.
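That upfront decision amounts to partitioning the sound list by latency tolerance. A minimal sketch of how a build pipeline might encode it; the sound names, the latency flag, and the function are illustrative assumptions, not an engine API from the talk:

```python
# Illustrative sketch: split a list of sounds into RAM-resident and
# streamed assets, based on whether each one must trigger instantly.
# All names here are hypothetical examples.

def plan_audio_budget(sounds):
    """Partition sounds: latency-critical ones are loaded into RAM,
    the rest stream from the storage medium."""
    resident, streamed = [], []
    for name, latency_critical in sounds:
        (resident if latency_critical else streamed).append(name)
    return resident, streamed

sounds = [
    ("footstep", True),       # must play the instant the foot lands
    ("gunshot", True),        # must play the instant the trigger is pulled
    ("battle_music", False),  # a little startup latency is acceptable
    ("wind_ambience", False),
    ("radio_chatter", False), # off-screen dialog can stream
]

resident, streamed = plan_audio_budget(sounds)
print(resident)  # ['footstep', 'gunshot']
print(streamed)  # ['battle_music', 'wind_ambience', 'radio_chatter']
```

In a real engine the partition would also weigh each asset's size against the RAM budget, but the latency-driven split is the core of the decision Schmidt describes.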
“Your goal should be to maximize the audio quality while minimizing the impact on other elements of the game,” Schmidt said. Again, unlike film or TV, that typically involves calculating the minimum tolerable sample rate for every single piece of audio as well as the maximum acceptable data compression. If all the sound effects account for 40 MB, but there is only space for 4 MB, you’ll need to apply 10:1 data compression.
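The arithmetic behind that example is simple division of raw size by allocated budget. A minimal sketch; the helper name is ours, not a tool mentioned in the session:

```python
def required_compression_ratio(raw_mb: float, budget_mb: float) -> float:
    """Ratio needed to squeeze raw audio into the allocated memory budget."""
    if budget_mb <= 0:
        raise ValueError("audio budget must be positive")
    return raw_mb / budget_mb

# Schmidt's example: 40 MB of sound effects, 4 MB of space -> 10:1.
print(required_compression_ratio(40, 4))  # 10.0
```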
“Halo and Halo 2 both shipped their dialog at 22k,” Schmidt reported. “Virtually every sound you hear in a video game is significantly compressed.”
Game duration is a hurdle for composers: console games can be 20 to 40 hours long, and some people play that many hours weekly. “So there’s a lot of content to do,” he observed, “and it’s beyond the stamina of composers and the budgets of the games to have 40 hours of original scoring.”
During the Composer’s Roundtable session, Jason Hayes, who wrote for World of Warcraft, noted that he and the team at Blizzard ended up creating over 50 hours of music. “Open world” games such as WoW are a particular challenge, as players can move freely within the environment without time limits.
For AAA game titles, budgets can be significant, not least because sales revenue now rivals and can even exceed film box office takings. “We had the same production values as a major blockbuster film,” reported composer Jack Wall, who recorded two Call of Duty scores at Abbey Road.
Audio has uses that are unique to games, according to Schmidt. “We need to confirm when a player has done something. We provide game-play hints. We use sound to help people understand the game better. We use sound to extend the playability of the game. We also expand the play area” through surround sound, he said.
“Sound plays, I believe, a larger role in the game media experience than it does in traditional linear media.”