HPA Focuses On Evolving Market And Workflow

HOLLYWOOD, CA—The annual SMPTE Technical Conference and Exhibition came to Hollywood in late October, and for the first time featured a pre-event symposium hosted by the Hollywood Post Alliance. As noted by Leon Silverman, HPA’s president, “Technology, filmmaker options and consumer choices are driving us to learn how the Hollywood professional community can better understand how to evolve our current workflows to meet the demands and characteristics of next-generation content and delivery platforms.”

The keynote address, by Chris Fetner, director of global content partners operations at Netflix, put the spotlight on some of those future delivery platforms. As it turns out, the future is now: Fetner noted that Netflix was first to market with the ultra high-definition experience with House of Cards, is also moving toward HDR (high dynamic range) and higher frame rate pictures, and has adopted IMF, the Interoperable Master Format standard developed by SMPTE, as a common platform for content owners.

IMF is the answer to “versionitis,” as he described it. Netflix currently receives eight different file types, he noted, leading to a high rate of failed deliveries due to mismatched assets.

IMF enables extras such as alternate languages and subtitles to travel with the content. Netflix hopes to make the format its sole deliverable by 2016. But “IMF is an immediate thing,” said Fetner, revealing that upcoming original miniseries Marco Polo will be the first content delivered to Netflix in the IMF 2+ extended format.

Jan Eveleens of Axon in the Netherlands brought attendees up to date with the latest IEEE Ethernet AVB (Audio Video Bridging) networking developments. AVB’s benefits, in the form of its time synchronization, bandwidth reservation, traffic shaping and configuration abilities, are being touted as ideal for next-gen live broadcast infrastructures; indeed, it’s the backbone of ESPN’s new facility in Connecticut. Development of AVB’s second-generation format, also known as Time Sensitive Networking (TSN), is being driven by the more time-sensitive demands of the automotive and industrial automation markets, said Eveleens.

“The standard does not guarantee interoperability,” he noted, hence the establishment of the AVnu Alliance, which currently has 80 members and is growing 20 percent year-on-year, he said. As for the “V” in AVB, professional video certification will begin Q2 2015, said Eveleens, who is chair of AVnu’s pro video working group. The standard will reportedly be ratified in Q1 2015. The aim is also to have layer 2 and layer 3 protocols in the AVB network standard by the end of 2015 or early 2016, he said.

Object-based audio has become a hot topic of late. Charles Robinson of Dolby Laboratories presented the results of tracking object placement and dynamic moves made by various motion picture re-recording mixers, using data from the Dolby Atmos system. The results suggested that certain common panning tracks, such as from the bottom of the screen to center, or up and over the audience, are perhaps guided by the available automation tools, he observed.

We are in the early stages of a paradigm shift for the consumer experience delivered via broadcast, noted Dolby’s Jeff Riedmiller, offering a presentation on immersive and personalized next-generation audio. Dolby’s object-based Atmos immersive audio system, available in movie theaters for a couple of years, has recently started to migrate into the home and is poised to jump to mobile platforms.

Despite the Atmos bitstream carrying a tremendous amount of data associated with its potential 128 objects, there are methods for compressing or generally streamlining that load, said Riedmiller. For example, he suggested, “You can take advantage of the fact that not everything is happening on all tracks” and use lossless compression.

Substreams can carry objects or groups of objects—alternate languages, or commentary—that are then combined into a presentation. Spatial coding, too, is a way to simplify, he said, rendering those 128 objects into 15 spatial groups, or 15 outputs. The result would be a set of program building blocks that could be variously combined.

Dolby Labs is not the only player in the immersive audio field, of course, but it supports any move toward an open standard, according to Riedmiller, in order to deliver a next-gen experience to consumers. “The only way we can make it work is all together,” he said.

Fraunhofer is offering its MPEG-H encoding scheme as a potential solution to broadcast delivery of immersive audio. According to Fraunhofer’s Robert Bleidt, tests have shown that just four height speakers can create a more realistic surround sound. A high-end immersive playback system might only be within the grasp of one percent of the viewing audience, but Fraunhofer’s 3D soundbar potentially could deliver the experience to a wider audience, he said.

As for how broadcasters might make the transition to MPEG-H, Bleidt laid out a four-point roadmap. First, replace AC-3 encoders with MPEG-H encoders simultaneously with the implementation of HEVC or SHVC picture encoders. Next, add audio objects such as alternate commentary or additional dialog tracks, either in a channel-plus-objects or Higher Order Ambisonics-plus-objects format. Then, add in the four height channels; this would require additional channels through the plant, he said. Lastly, add dynamic objects, which would require the transmission of control data through the TV plant.

For those wondering how immersive audio might be monitored, he suggested several alternatives: use the existing speakers and add four height speakers; use a suitably equipped remote studio; or use headphones with a personalized HRTF (head-related transfer function) profile.