Los Angeles, CA (October 16, 2014)—“This is the time to do new audio,” said David McIntyre of DTS at the start of the “Audio Issues for 4K and 8K Television” panel at the recent AES Convention in Los Angeles. The new high-resolution video formats offer a host of new features, including high dynamic range and high frame rate, “so the audio should go up in quality” also, he argued.
The jump to these latest high-resolution formats also allows audio to make a break with the past, he said, since the new video technology is not backwards compatible. But just because something is new doesn’t necessarily mean it’s better. “Height is cool,” he said of the immersive formats, but perhaps we’ve gone far enough with the number of channels.
“There really should be no lossy data reduction for what you are hearing” going forward, said Thomas Lund of TC Electronic, also advocating for higher-quality audio. Next-gen broadcast audio needs intrinsic loudness normalization and must be predictable and easy to operate, he said.
Of course, it’s not channels but objects and their associated metadata—the foundation of immersive audio—that we need to think about now. “It’s about to get a lot more complicated,” warned Tim Carroll of Telos Alliance. “It’s a gigantic step forward. I think we have a tremendous responsibility to help broadcasters through this. If we just dump it on the industry, it’s going to go nowhere.”
Program material comprising 100 audio elements and 100 tracks of metadata needs to be made manageable for the broadcast distribution chain, agreed Jeff Riedmiller of Dolby Laboratories. But mezzanine compression methods, which have been around for a while, could reduce it sufficiently to be carried over still-ubiquitous SDI, and it could be rendered as 5.1 for legacy infrastructures.