Immersive Sound: New Tools Of The Trade [CONTINUED]

The April 2015 issue of Pro Sound News features a special report, Immersive Sound: New Tools Of The Trade (which can be read HERE). Expanding on the report, we present further insight in this online continuation.

Immersive audio formats add a vertical component—the Z axis—that must be addressed by the mixing console, host DAW or third party hardware or software for panning and positioning. According to Crockett, mixers don’t necessarily need to alter their workflow and may use a variety of tools when working with Dolby Atmos. “Some prefer using console controls which can be mapped to the Dolby Atmos Panner, with the automation data captured either by the Pro Tools system or in the mixing console’s automation engine,” he says. “Others prefer to work ‘in the box,’ directly manipulating the Pro Tools software and Dolby Atmos panner via mouse control.”
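
To make the Z-axis idea concrete for readers who think in code, here is a minimal sketch of an object position on three axes captured as time-stamped automation and interpolated on playback. It is purely illustrative: the class and function names are invented, and this is not Dolby's actual panner or metadata format.

```python
# Illustrative sketch only: a minimal time-stamped 3D pan automation lane,
# not Dolby's metadata format. Positions are normalized to 0..1 on each axis
# (X = left/right, Y = front/back, Z = floor/overhead).
from dataclasses import dataclass
from bisect import bisect_right

@dataclass
class PanKeyframe:
    time_s: float   # timeline position in seconds
    x: float        # 0.0 = hard left,  1.0 = hard right
    y: float        # 0.0 = front wall, 1.0 = back wall
    z: float        # 0.0 = ear level,  1.0 = overhead

def position_at(keyframes: list[PanKeyframe], t: float) -> tuple[float, float, float]:
    """Linearly interpolate the object's position at time t from the automation lane."""
    times = [k.time_s for k in keyframes]
    i = bisect_right(times, t)
    if i == 0:
        k = keyframes[0]
        return (k.x, k.y, k.z)
    if i == len(keyframes):
        k = keyframes[-1]
        return (k.x, k.y, k.z)
    a, b = keyframes[i - 1], keyframes[i]
    f = (t - a.time_s) / (b.time_s - a.time_s)
    return (a.x + f * (b.x - a.x), a.y + f * (b.y - a.y), a.z + f * (b.z - a.z))

# A sound effect that rises overhead between 2 s and 4 s:
lane = [PanKeyframe(0.0, 0.5, 0.2, 0.0), PanKeyframe(2.0, 0.5, 0.5, 0.0),
        PanKeyframe(4.0, 0.5, 0.5, 1.0)]
print(position_at(lane, 3.0))   # halfway up the Z axis: (0.5, 0.5, 0.5)
```

Whether that data is written by a console surface or a mouse, the point is the same: the pan move is stored as automation and rendered to whatever speaker layout is present at playback.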

Not surprisingly, considering the massive budgets behind major motion pictures, console and control surface manufacturers, as well as various third parties, have quickly developed solutions for the new and emerging formats. “We have seen an extremely positive response from DAW and console manufacturers, with several key partners already implementing direct Dolby Atmos integration,” he says. “The AMS-Neve DFC, Harrison MPC5, Avid System 5 and Fairlight 3DAW system can all communicate directly with the RMU, manage audio objects and generate Dolby Atmos metadata; in addition, they all offer 9.1 channel-based panning.

“We are also seeing interest from plug-in manufacturers, such as Exponential Audio, who have developed their 3D Link technology to allow for 9.1 reverbs even in workstations that don’t currently support XYZ channel configurations. In addition, due to the Avid AAX plug-in SDK we are able to offer Pro Tools, ICON and S6 control surface integration through the Dolby Atmos Panner plug-in,” says Crockett.

The emergence of new tools, especially those not tied to the large mixing surfaces, is beginning to allow the positioning of channels and objects in the immersive soundfield to move further upstream in the post production process, he says. “We’ve seen several consumer Dolby Atmos mixes, especially re-mixes from channel-based theatrical mixes, prepped in traditional 7.1 rooms, with object tracks edited and automation roughed in, and then final mixes performed on a proper consumer Dolby Atmos dub stage.”

Indeed, there are already six Dolby Atmos titles available on Blu-ray Disc, with three more coming during the month of March: The Hunger Games: Mockingjay – Part 1 from Lionsgate, Unbroken from Universal, and Gravity from Warner Bros.

As Slack details, DTS supports a variety of panning methods with its MDA format. “MDACreator maps to any Pro Tools console, and can be mapped to a number of joystick options. As a Pro Tools plug-in, a mouse or touchpad can also be used. For those who wish to edit the automation, these parameters follow the same conventions as the built-in Pro Tools panner. This feature also makes it quite easy to copy the automation from an existing Pro Tools mix to MDACreator.”

MDACreator was designed to work within the existing structure of Pro Tools, continues Slack, providing an intuitive interface for 3D panning in a 2D environment. “The user has the choice of working in a 2D soundfield, or having the 3D pan laws automatically derived from the 2D position. The DTS MDACreator is an AAX Pro Tools plug-in for Mac and Windows, but Fairlight also supports the MDA format within their DAW. We are currently working with other developers to support MDA on a number of different platforms.”
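
DTS has not published the pan law itself, but the general idea of deriving a 3D pan from a 2D position can be sketched with one plausible (and entirely hypothetical) rule: as an object moves from the room boundary toward the center of the 2D soundfield, it is lifted toward the height layer, so a familiar 2D panner can drive a 3D mix.

```python
# Hypothetical illustration of deriving height from a 2D pan position.
# This is one plausible rule for demonstration, not DTS's actual MDA pan law.
import math

def derive_height(x: float, y: float) -> float:
    """Map a 2D position (x, y in -1..+1, walls at the boundary) to a height
    value z in 0..1, peaking directly over the listener at the room center."""
    distance_from_center = min(1.0, math.hypot(x, y))
    return 1.0 - distance_from_center   # on the walls: z = 0; dead center: z = 1

for x, y in [(1.0, 0.0), (0.5, 0.5), (0.0, 0.0)]:
    print(f"x={x:+.1f} y={y:+.1f} -> z={derive_height(x, y):.2f}")
```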

While the DTS-HD Master Audio codec is familiar to anyone who has watched a movie on Blu-ray Disc, the company’s MDA format has not yet been heard on any theatrical releases. But as Slack reports, “We have had a great deal of success implementing MDACreator on mix stages with existing immersive sound systems of different types. MDA is not a prescriptive system, so the exact speaker configuration is up to the client. This being said, both 7.1 and 11.1 (7.1+4) are popular configurations for edit bays and smaller mix stages.”

Auro has implemented panning in its software. “We currently have our tools set up in a way that we stole from AMS Neve; if you look at the DFC you’ll see they have two joysticks,” says Mevissen. “The left joystick, when working in Auro, is doing the panning on the X/Y axes. The other joystick just does the height.” The plug-in graphic displays two windows, one as if looking down on the mixer from above, for X/Y positioning, and the other showing height, as if from behind the mix position. “I do see that some 3D visualizations might look nice, but if you have 200 plug-ins you have to work with at the same time, it gets convoluted,” he also comments.
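
As a rough sketch of that two-control approach (not Auro's code), the separate height control can be thought of as a constant-power crossfade between the ear-level layer and the height layer, while the X/Y control pans within whichever layer the signal lands in.

```python
# Sketch of a two-control 3D pan: one control handles X/Y within a layer,
# a second handles height as a constant-power crossfade between layers.
# Illustrative only; Auro's actual gain laws are not published here.
import math

def layer_gains(z: float) -> tuple[float, float]:
    """z = 0.0 -> all signal in the main (ear-level) layer,
       z = 1.0 -> all signal in the height layer."""
    theta = z * math.pi / 2.0
    return (math.cos(theta), math.sin(theta))   # (main_gain, height_gain)

main, height = layer_gains(0.5)
print(f"main layer {main:.3f}, height layer {height:.3f}")   # ~0.707 each
```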

Mevissen reports that the Leap Motion Controller, which enables hand gesture input, is also implemented in Auro. But, he notes, such a device might be cumbersome if a large track count is in play: “Since we work a lot in post production on these huge stages with sessions that have hundreds and hundreds of tracks I’m not sure that’s the best way to do it. We also have the idea of using a laser pointer, for instance, and you can point to a place in the room; the pan will follow that point. But if you look at these re-recording mixers, they’re so fast they don’t want to grab a laser pointer to do something.”

As far as mixing for MPEG-H Audio, says Bleidt, there is no need for special tools or major workflow changes. For an interactive broadcast, which, as mentioned previously, enables the consumer to alter the relative levels of one or more audio elements, the mixer can continue to work as he or she does today, simply generating mix-minus feeds of the relevant elements as direct outputs.
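
In other words, the mix is delivered as a bed (minus the interactive elements) plus each element as a separate feed, and the consumer device recombines them at user-chosen levels. The sketch below illustrates that recombination step; the element names and gain values are invented for the example.

```python
# Minimal sketch of interactive-element recombination on the consumer side.
# The bed is a mix-minus of the interactive elements; each element arrives
# as a separate feed and is summed back in at the viewer's chosen level.
import numpy as np

def render_interactive(bed: np.ndarray, elements: dict[str, np.ndarray],
                       user_gains_db: dict[str, float]) -> np.ndarray:
    """bed and each element are (num_samples, num_channels) arrays."""
    out = bed.copy()
    for name, audio in elements.items():
        gain = 10.0 ** (user_gains_db.get(name, 0.0) / 20.0)
        out += gain * audio
    return out

n = 48000
bed = np.zeros((n, 6))                      # 5.1 bed, mix-minus the elements
elements = {"commentary": np.random.randn(n, 6) * 0.01,
            "effects":    np.random.randn(n, 6) * 0.01}
mix = render_interactive(bed, elements, {"commentary": -6.0})   # announcer down 6 dB
```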

But how would that workflow be applied in a broadcast remote truck? As Bleidt has noted, Fraunhofer is recommending a multi-stage path, from interactive to full immersive production, that initially makes use of existing workflows and infrastructure.

“Let’s say I have a 5.1 bed; you break out the commentary and you break out the sound effects,” says Bleidt. “If you have fiber going from the remote to the network operations center, just put it on those spare PCM channels. If you don’t and you’re on satellite, or for some reason the terminal equipment for your fiber is full, then you can encode it with a local MPEG-H encoder at contribution bitrates, say 1.5 Mb/s, and you send it over IP.”
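
That contribution decision is simple enough to write down. The back-of-the-envelope sketch below follows the quote (spare PCM channels on the fiber first, MPEG-H contribution encoding over IP as the fallback); the function itself is illustrative, not a Fraunhofer tool.

```python
# Back-of-the-envelope sketch of the contribution decision Bleidt describes.
# Channel counts and the ~1.5 Mb/s figure follow the quote above.
def contribution_plan(bed_channels: int, breakout_channels: int,
                      spare_pcm_channels: int) -> str:
    if breakout_channels <= spare_pcm_channels:
        return (f"send {bed_channels} bed + {breakout_channels} breakout "
                "channels as PCM on the fiber")
    return ("fiber is full: encode with a local MPEG-H encoder at a "
            "contribution bitrate (e.g. ~1.5 Mb/s) and send over IP")

print(contribution_plan(bed_channels=6, breakout_channels=3, spare_pcm_channels=4))
print(contribution_plan(bed_channels=6, breakout_channels=3, spare_pcm_channels=0))
```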

For broadcast, at least, it seems unlikely that consumers will be outfitting home theaters with 64 speakers any time soon, so initial upgrades will likely be constrained to something a little more practicable. “The system that most of the companies developing new audio systems are proposing for immersive sound production for TV is the 7.1 Blu-ray surround sound configuration plus four speakers in the top layer,” comments Bleidt. “That’s a popular configuration.” The first trials will likely be with 5.1+4H, since most remote trucks and broadcast consoles have a maximum buss size of 5.1, he adds.

“The MPEG-H system is quite flexible,” he says, and supports a variety of speaker layouts. “You could use 5.1+4H, or other configurations.”

Even a 5.1-capable remote truck can handle immersive projects, according to Bleidt. “For the most part you’re limited to 5.1 panners, but you can still make this work. What you need to do is put the height microphones on a separate buss and feed them out separately. I suggest that you use caution with static vertical panning, positioning sound sources between the middle and the top layers. It doesn’t work like horizontal panning; your ear hears things differently,” he cautions.
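
Concretely, the routing Bleidt suggests looks something like the sketch below: the height microphones bypass the 5.1 master buss and travel on a separate four-channel buss, giving a 5.1+4H feed out of the truck. The channel names and ordering here are illustrative assumptions, not a standardized layout.

```python
# Sketch of the routing suggested above for a 5.1-limited truck: height mics
# are kept off the 5.1 master buss and fed out on a separate 4-channel buss.
# Channel labels and output order are illustrative only.
MAIN_BUS_5_1 = ["L", "R", "C", "LFE", "Ls", "Rs"]    # handled by normal 5.1 panners
HEIGHT_BUS_4H = ["Ltf", "Rtf", "Ltr", "Rtr"]         # height mics, separate buss

def output_channel_order() -> list[str]:
    """5.1+4H output order: main layer first, then the height layer."""
    return MAIN_BUS_5_1 + HEIGHT_BUS_4H

print(output_channel_order())   # 10 discrete channels leave the truck
```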

“But you can do a very satisfactory mix by separating your microphones into a mid layer and a height layer. That’s what we’ve done for several shows and it’s a good way to get started with 3D sound. I don’t want to minimize the fact that some day you might want to use objects, some day you might want to get a new console with 3D panning built in, but you can use today’s consoles.”

Fraunhofer will show other techniques for panning audio objects in MPEG-H at its booth at the NAB Show in Las Vegas, NV, in April. The company will also reportedly be making further announcements about how to use existing consoles within the MPEG-H workflow.

As for monitoring, he says, there are three options. “You can put small speakers in the remote truck,” he suggests, noting that Fraunhofer has put 3D sound into car headliners. In fact, JBL recently introduced its new passive, compact and lightweight 7 Series for applications such as remote trucks.

Alternatively, says Bleidt, “You do the immersive mix at the NOC or some other studio, as they did at the World Cup, or you use real immersive headphones—and I mean like a Smyth Realiser. For confidence monitoring you can use binaural rendering.”
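
Binaural confidence monitoring of this kind boils down to convolving each speaker feed with a left/right head-related impulse response (HRIR) for that speaker's direction and summing to stereo. The sketch below shows the principle with placeholder HRIRs; a real renderer, such as the Smyth Realiser mentioned above, uses measured responses.

```python
# Minimal binaural confidence-monitoring sketch: each monitor feed is convolved
# with a left/right HRIR for its speaker direction and summed to stereo.
# The random HRIRs below are placeholders, not measured data.
import numpy as np

def binaural_downmix(speaker_feeds: np.ndarray, hrirs: np.ndarray) -> np.ndarray:
    """speaker_feeds: (num_speakers, num_samples)
       hrirs:         (num_speakers, 2, hrir_length)
       returns:       (2, num_samples + hrir_length - 1) stereo signal"""
    num_spk, num_samples = speaker_feeds.shape
    out = np.zeros((2, num_samples + hrirs.shape[2] - 1))
    for s in range(num_spk):
        for ear in range(2):
            out[ear] += np.convolve(speaker_feeds[s], hrirs[s, ear])
    return out

feeds = np.random.randn(10, 4800)            # e.g. a 5.1+4H monitor feed
hrirs = np.random.randn(10, 2, 256) * 0.05   # placeholder HRIRs
stereo = binaural_downmix(feeds, hrirs)      # listen on headphones to check the mix
```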

Just as the popularity of the 5.1 configuration led to the emergence of upmixing hardware and software, Auro has also made immersive upmixing available, says Mevissen, via the Auro-Matic Pro plug-in, which comes in both 2D and 3D versions. “In the beginning we were focused on the 3D upmixing algorithm. We got some nice feedback from people using it in 5.1 only, so we decided to develop a mono/stereo-to-5.1 version as well. It’s a combination of an algorithm and some three-dimensional impulse responses, very precise impulse responses of places that we recorded ourselves. It works completely differently than Penteo, for instance.”

The company has also recently announced partnerships that enable tight integration with Fairlight DAWs and Merging Technologies’ Pyramix application. “The idea is that both of them will implement their own Auro mixing. They have their own panning; we offer our tools for the authoring part. We have an encoder API and they create an export path that uses our codec technology to create the delivery format. We also have VST-based decoder plug-ins that you can use to QC and decode the content.”

In fact, Fairlight has been showing off its 3DAW (3D Audio Workspace) software for a couple of years. Available as a turnkey hardware and software solution for Fairlight, Pro Tools and Nuendo workstations, it allows multiple surround output formats to be generated from the same mix—a major cost-saver, according to the company. Fairlight also offers a device, AirPan, which permits input of panning commands via virtual reality-like hand gestures.

According to Tino Fibaek, Fairlight’s CTO, “Fairlight has deliberately positioned 3DAW as an agnostic 3D audio production platform. It provides the core immersive mixing and monitoring functions, supports ASIO and VST compliant DAWs, and codec technologies from Auro, Dolby and DTS. It is a new world, but with Fairlight’s 3DAW, you don’t have to change your existing 2D tools and workflows to produce in multiple immersive formats.”

One key to enabling more people to participate in projects produced in these new immersive formats is headphone virtualization. “Dolby Atmos is already available on the Fire HDX 8.9 over headphones,” notes Crockett. “Dolby’s overall immersive headphone philosophy is to send the entire Dolby Atmos stream all the way to the consumer device, and then the device will decode the stream in the best way for that particular device. We feel that will achieve a superior experience over a pre-baked one-size-fits-all headphone rendering solution.”

MDACreator does not include headphone virtualization, says Slack, “But it integrates nicely with DTS Headphone:X. The output from MDACreator can be bussed directly into the Headphone:X encoder for realtime monitoring on portable Pro Tools systems. This makes for a very powerful work environment for edit bays, and even laptops in the field or on the mix stage.”

Auro, too, has released a headphone plug-in, says Mevissen: Auro-Headphones. “If you work on your laptop, for instance, and you want to prepare something for mix or editorial work, you can hook up the headphone plug-in to listen to your 11.1 mix. It gives you a good spatial impression of what’s happening and you don’t have to change your routing. You can have your session set up like it is in your studio and transfer it to your laptop, plug in your headphones and listen to the same mix.

“We are working with a set of HRTFs that basically represent a broad range of people, to get something that works for everybody. It’s not a perfect solution, but it works really well for us and we’ve been getting very good feedback.”

Although Auro considers it a monitoring tool, Mevissen continues, “You can output the stereo mix to a regular track, record it and get a binaural rendered track.” Although details are still under wraps, he adds, the technology has applications in the mobile device market.
