Many of the attendees at AES Conventions are drawn to the exhibition floor, but the convention is much more than a gear-fest, also offering a four-day program of themed session tracks, workshops, tutorials, master classes, paper presentations, technical tours and special events. Even a cursory glance at this year’s program schedule suggests that attention is focused on a number of topics that touch multiple segments of the audio business, including that perennial favorite, loudness, as well as mobile platforms, the cloud and immersive sound.
Over recent years, it seemed as though everyone in broadcast was talking about loudness in the run-up to the CALM Act, which requires television commercials to be no louder than the programming they accompany, and its subsequent implementation. The Broadcast and Streaming sessions program, chaired for the twenty-seventh year by the redoubtable David Bialik, will once again assemble a panel of experts to discuss the global state of TV loudness and associated metadata (Oct. 17, 9 a.m.). But this year’s program also examines loudness in radio and, as content is increasingly made available online, in streaming, weighing the efficacy of loudness controls in those environments and surveying the solutions being applied worldwide (Oct. 17, 3:45 p.m.).
The “loudness war” began in the record business 20 years ago, as clients pushed mastering engineers to crank levels so their music would stand out on the radio. With FM radio in Europe now adopting EBU R128 loudness normalization, an all-star panel (Oct. 19, 5 p.m.) will analyze the fidelity of recorded music over the past 50 years. As the program notes state: “In the new realm, it’s futile to master music louder than –16 LKFS.”
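The futility the program notes describe follows directly from how loudness normalization works: playback gain is set from the difference between a program’s measured loudness and the target, so any extra loudness baked in at mastering is simply turned back down. A minimal sketch of that arithmetic (the function names and sample values here are illustrative, not drawn from any session):

```typescript
// Loudness normalization in miniature: the playback system measures a
// program's integrated loudness (in LKFS/LUFS) and applies whatever
// gain brings it to the target -- e.g. the -16 LKFS figure quoted in
// the session notes.
function normalizationGainDb(measuredLkfs: number, targetLkfs: number): number {
  return targetLkfs - measuredLkfs;
}

// Apply a dB gain to linear sample values.
function applyGain(samples: number[], gainDb: number): number[] {
  const linear = Math.pow(10, gainDb / 20);
  return samples.map((s) => s * linear);
}

// A track mastered "hot" at -9 LKFS gets turned DOWN by 7 dB to meet a
// -16 LKFS target -- which is why extra mastering loudness buys nothing
// under normalization.
const gain = normalizationGainDb(-9, -16); // -7 dB
```

The quieter, more dynamic master and the crushed one end up at the same perceived level; only the crushed one keeps its reduced dynamic range.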
Video games are not immune from wildly varying loudness, either. Garry Taylor, audio director, Sony Computer Entertainment in the UK, will be presenting the loudness recommendations recently released by Sony’s Audio Standards Working Group (Oct. 18, 11:45 a.m.).
The game industry has made a concerted effort to bring standardization to its online products, applying new techniques and developments available in the Web Audio API. A Game Audio track session panel (Oct. 19, 11:30 a.m.) will discuss emerging standards in audio processing, event-driven playback and synthesis within the browser.
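For a flavor of the browser-native, event-driven playback the panel will discuss, here is a minimal Web Audio sketch. It is browser-only, and the helper names, note values and envelope times are illustrative assumptions, not material from the session:

```typescript
// Convert a MIDI note number to frequency in Hz (A4 = note 69 = 440 Hz).
function midiToHz(note: number): number {
  return 440 * Math.pow(2, (note - 69) / 12);
}

// Event-driven synthesis with the Web Audio API (browser-only): each
// game event spawns a short oscillator "beep," scheduled on the audio
// context's sample-accurate clock rather than with setTimeout.
// `ctx` is a browser AudioContext; typed loosely so the sketch also
// compiles outside the browser.
function playBeep(ctx: any, note: number, when: number): void {
  const osc = ctx.createOscillator();
  const env = ctx.createGain();
  osc.frequency.value = midiToHz(note);
  env.gain.setValueAtTime(0.2, when);                        // attack level
  env.gain.exponentialRampToValueAtTime(0.001, when + 0.25); // short decay
  osc.connect(env).connect(ctx.destination);
  osc.start(when);
  osc.stop(when + 0.25);
}
```

In a browser, `playBeep(ctx, 69, ctx.currentTime)` would sound an A4 immediately; synthesis and envelope shaping of this kind are the sort of in-browser capability the panel’s “emerging standards” discussion covers.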
As smartphones and tablets increasingly handle both primary and “second screen” content, and more and more car models offer internet connectivity, several panels will explore the role that established and emerging technologies will play in mobile content delivery. The Broadcast/Streaming track offers two related sessions—one on audio for mobile TV (Oct. 18, 9 a.m.) and another entitled “Content Delivery and the Mobile Initiative” (Oct. 18, 3:45 p.m.).
The MP3 codec revolutionized the interchange of music files over the internet when it was introduced, but two decades on, with bandwidth far less constrained, has it outlived its usefulness? A Broadcast/Streaming panel, “Is it Time to Retire the MP3 Protocol for Streaming?” (Oct. 17, 5:30 p.m.), will consider the challenges of introducing new codecs.
While the term has come to mean different things to different people, the cloud as a collaborative platform holds significant attraction for the audio industry. The Project Studio Expo track offers practical examples and scenarios in the session “How to Create, Produce and Distribute Your Music Completely in the Cloud” (Oct. 19, 2 p.m.), while a workshop, “The Cloud-Connected Future of Media Creation” (Oct. 19, 12:30 p.m.), will explore quantitative and qualitative trends and the obstacles to media creation in the cloud.
As the moving visual image accelerates into a 4K future, how will audio keep pace? A Broadcast/Streaming panel (Oct. 17, 10:30 a.m.) will attempt to predict the future of television sound.
One potential path for audio in a 4K landscape (with 8K on the horizon) is so-called 3D audio, featuring additional height and other channels. A workshop on height channels (Oct. 17, 12 p.m.) will tackle issues related to the vertical dimension before moving to NYU for playback sessions.
A pair of tutorials will also explore immersive audio. “Auro 3D—Discovery of the Ceiling for Stereo and Surround” (Oct. 17, 11:15 a.m.) will present recordings of classical music made in a variety of historic concert halls and ancient churches in the immersive format. “3D Audio—Experience the Sound of the Future” (Oct. 10:30 a.m.) considers the potential for immersive sound to migrate from the cinema to the home and even mobile platforms.
Tying many of the foregoing topics together, the DTV Audio Group will present a four-hour special event, “Audio Production and Distribution in an Evolving Program Delivery Landscape.” The forum, open to all attendees, poses a provocative question: will the transition to mobile platforms spawn universal delivery standards and workflow practices or simply lead to the dumbing down of TV audio for streaming? Object-oriented immersive audio schemes and the implications of shrinking white spaces will also be discussed.