The 11th annual GameSoundCon, held in Los Angeles in November, included several presentations by composer Paul Lipson, audio director at Microsoft Game Studios, among them sessions on the creation and implementation of an interactive music score for a major console game.

Los Angeles, CA—“Film sound is relatively straightforward and game sound is not,” remarked Brian Schmidt, executive director and creator of GameSoundCon, in his introduction to the recent two-day conference. Now in its 11th year, GameSoundCon was held November 3 and 4 at the Millennium Biltmore Hotel in Los Angeles and offered sessions, panel discussions and hands-on workshops led by more than 20 of the game industry’s leading composers, sound designers and audio directors.
In a comprehensive global study of the economic value of the cultural and creative industries commissioned by CISAC and published in December 2015, gaming is credited with generating revenues of $99 billion in 2013, versus $77 billion from movies. The gaming figure includes software and hardware; the movie figure includes production, post-production and distribution.
Although the video game sector employs a fraction of those working in movies—605,000 versus 2.48 million—it’s an attractive career, as evidenced by the turnout for GameSoundCon. For those prepared to navigate the unique challenges of the industry, Schmidt, whose 28 years in the business include 10 years in charge of the audio technology group for Xbox, had put together a schedule that dug deep into the technical aspects. In addition to separate Game Audio Essentials and Pro tracks, the conference included hands-on workshops extending over both days that led attendees through the intricacies of working with the FMOD or Wwise audio middleware.
Schmidt began with a brief history of game audio. In the beginning, there was no wavetable synthesis or audio files, he said. Instead, composers had to program their score, typically into less than 500 bytes of memory. PlayStation 1 heralded the switch from cartridges to compact disc, but even 600MB was not big enough to hold the game and a lot of score.
“The big turning point was the PS2 and the original Xbox, when they switched from CD to DVD,” he said. With 4.7GB available, “That finally became enough room to record 70 minutes of original score, full fidelity, and still have enough room for the game.”
Now, “We have more audio processing power in the Xbox One than in a lot of people’s Pro Tools rigs.” At the same time, he added, things are regressing: “I’ve just finished working on a tablet game where the entire game audio package had to fit into 16MB.”
The mobile market is growing. The CISAC report predicts an 86 percent growth in the consumer video games market for online and mobile video games in 2018, while global smartphone connections are forecast to nearly double to 3.85 billion by 2019. Activision Blizzard recently recognized that growth potential when it acquired King Digital Entertainment, developer of Candy Crush Saga, which generates about 80 percent of its $2.2B revenue from mobile games, in a $5.9B deal said to be the third-largest ever in the video game industry.
Addressing those audio engineers and composers working in games, Schmidt said, “You are an emotional puppeteer. You bring people into a universe.”
Yet there are aesthetic and creative challenges. There is no precognition; there is no way of knowing what will happen next in a game, or when. This requires what Schmidt referred to as “event mapping,” tying sounds to specific cues, triggers or events. Those sounds must be flexible, playing out according to context, therefore requiring layers of multiple audio files with processing. A sword hit might be hard or soft, for example, while footstep sounds will change depending on the surface or size of a space.
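Event mapping can be pictured as a lookup from (event, context) pairs to pools of asset variants. The Python sketch below is a hypothetical illustration (all file names and the map structure are invented for this example), not any particular engine’s API:

```python
import random

# Hypothetical event map: each (event, context) pair resolves to a pool of
# sound-file variants rather than a single fixed file, so the same trigger
# can play out differently depending on game context.
EVENT_MAP = {
    ("footstep", "wood"):  ["step_wood_a.wav", "step_wood_b.wav"],
    ("footstep", "grass"): ["step_grass_a.wav", "step_grass_b.wav"],
    ("sword_hit", "hard"): ["sword_clang_a.wav", "sword_clang_b.wav"],
    ("sword_hit", "soft"): ["sword_glance.wav"],
}

def resolve_event(event, context):
    """Pick one variant for an event in a given context; a random choice
    keeps rapidly repeated events from sounding identical."""
    variants = EVENT_MAP.get((event, context), [])
    return random.choice(variants) if variants else None
```

In a real title the lookup would also carry layering and processing parameters, but the core idea is the same: the sound is chosen at the moment the event fires, not fixed in advance.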
Music must be similarly flexible, written to be arranged on the fly as game play progresses. “Even the best music can get tiresome” if it continually loops, said Schmidt, recommending “the magic of silence. You don’t need music all the time.” Multiple alternate assets and processing such as pitch-shifting and filtering can add variety, too, he commented.
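Schmidt’s pitch-shifting suggestion can be sketched as a small randomization step applied per trigger. The helper below is hypothetical (engines such as FMOD and Wwise expose comparable per-voice pitch controls, though their actual APIs differ):

```python
import random

def vary_playback(base_pitch=1.0, cents_range=100):
    """Return a randomized pitch multiplier for one playback of an asset,
    so a repeated sound doesn't land on exactly the same pitch every time.
    100 cents = one semitone; the range here is an arbitrary illustration."""
    cents = random.uniform(-cents_range, cents_range)
    return base_pitch * 2 ** (cents / 1200.0)
```

Randomizing within about a semitone, combined with a pool of alternate recordings, is a common way to stretch a small set of assets across many triggers without obvious repetition.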
The technical challenges are many. Resources are shared, and it’s a zero-sum game, he said. Audio must therefore be designed to meet the constraints of memory, CPU, data transfer rates and bandwidth—and no two games are alike. “The process is really software development,” said Schmidt. “And there is no post-production phase; you’re developing as [the game] is built.” Because the game is constantly under development until the final version, “You’re working on a broken game,” he added.
Another difference between game audio and movies: “You don’t mix a game,” said Schmidt. Instead, the audio designer provides the game with instructions on when and how to play the audio assets.
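One common way to express “instructions rather than a mix” is a data table that the runtime consults when an event fires. The Python sketch below is a hypothetical illustration of that idea (the rule format, field names and decibel values are all invented), not any specific engine’s data model:

```python
# Hypothetical playback rules: instead of delivering a printed master mix,
# the audio designer ships data the engine evaluates when an event fires.
PLAYBACK_RULES = {
    "explosion": {"asset": "explosion.wav", "volume_db": -3,
                  "ducks": ["music"]},   # push the music bus down
    "music":     {"asset": "combat_loop.wav", "volume_db": -12,
                  "ducks": []},
}

def on_event(name, bus_levels, duck_db=6):
    """Look up an event's playback instruction and apply any bus ducking."""
    rule = PLAYBACK_RULES[name]
    for bus in rule["ducks"]:
        bus_levels[bus] = bus_levels.get(bus, 0) - duck_db
    return rule["asset"], rule["volume_db"]

buses = {"music": -12}
asset, vol = on_event("explosion", buses)  # buses["music"] is now -18
```

The “mix” emerges at run time from rules like these, which is why the designer’s job is to author behavior rather than a finished stereo master.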
A typical memory budget for audio in the range of 8MB to 24MB allows for anywhere from 49 seconds to over two minutes of uncompressed stereo, but that can vary, Schmidt noted. Not every sound has to use the same sample rate and bit depth as the others, which is one way to save memory. “Data compression is your friend,” he said.
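The arithmetic behind those figures can be checked directly. Assuming CD-quality audio (44.1kHz, 16-bit, stereo; the article doesn’t state the format, so this is an assumption), uncompressed data costs about 176KB per second:

```python
def seconds_of_audio(budget_bytes, sample_rate=44_100, bit_depth=16, channels=2):
    """Seconds of uncompressed audio that fit in a given memory budget."""
    bytes_per_second = sample_rate * (bit_depth // 8) * channels  # 176,400 B/s
    return budget_bytes / bytes_per_second

low = seconds_of_audio(8 * 1024**2)    # ~48 seconds for an 8MB budget
high = seconds_of_audio(24 * 1024**2)  # ~143 seconds, a bit over two minutes
```

Halving the sample rate or dropping a sound to mono doubles what fits, which is why per-asset format choices and data compression matter so much under these budgets.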