Dolby Atmos listening

So I've got a job lined up where the request is that we can enable Dolby Vision grading and Dolby Atmos listening (not mixing).

I am pretty confused by the plethora of options as to how many speakers constitute an Atmos setup. Is anyone here dealing with this who can give me some pointers as to what constitutes a professional listening setup for Atmos? The room is already acoustically treated, and an Atmos-capable receiver is also part of it, but the main question is how many speakers I need and how to arrange them.

My guess would be this:

@digitalbanshee is X an Atmos expert?

I have two friends who are experts in this; one is the Senior Director of the Dolby Institute.


As I understand it, what makes spatial audio different is that it takes advantage of however many speakers you are using. There is no “correct” number of speakers.

1 Like

That’s permission for you @finn to go absolutely crazy

Yes, both Dolby Vision and Dolby Atmos make use of metadata and rendering engines in the final delivery to adapt the original material to the specific environment.

Traditionally, surround mixes assigned audio to specific speaker channels, whether 5.1 or any other format, and you couldn't mix and match and get good results. Dolby Atmos divides sound into beds and objects. Object channels are panned in x-y-z 3D space instead of being assigned to specific speaker channels, and that 3D data is embedded in the metadata. The playback hardware then knows how many speakers you actually have and maps these objects to the most appropriate speaker combination, thus adapting the audio to the environment. The same file works for stereo playback or a 30+ channel movie theater.
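To make that concrete, here's a toy sketch of the idea in Python. This is not Dolby's actual renderer (that's proprietary); it just shows how the same object metadata (a position in x-y-z) can be distributed across whatever speakers a room happens to have. The layouts and the proximity-weighting scheme are my own illustrative assumptions:

```python
import numpy as np

# Hypothetical speaker layouts: name -> (x, y, z); x = left/right,
# y = back/front, z = floor/height. Positions are made up for illustration.
LAYOUTS = {
    "2.0":   {"L": (-1, 1, 0), "R": (1, 1, 0)},
    "5.1.2": {"L": (-1, 1, 0), "R": (1, 1, 0), "C": (0, 1, 0),
              "Ls": (-1, -1, 0), "Rs": (1, -1, 0),
              "Ltm": (-0.7, 0, 1), "Rtm": (0.7, 0, 1)},
}

def render_object(obj_pos, layout):
    """Spread one object's gain across the available speakers,
    weighting each speaker by its proximity to the object's position."""
    weights = {}
    for name, spk_pos in layout.items():
        d = np.linalg.norm(np.array(obj_pos, float) - np.array(spk_pos, float))
        weights[name] = 1.0 / (d + 1e-6) ** 2
    total = sum(weights.values())
    return {name: round(w / total, 3) for name, w in weights.items()}

# The same object metadata renders differently per room:
obj = (0.5, 0.8, 0.9)                        # panned right, front, and up high
print(render_object(obj, LAYOUTS["2.0"]))    # height info folds into L/R
print(render_object(obj, LAYOUTS["5.1.2"]))  # the height channels pick it up
```

The real renderer is far more sophisticated, but the principle is the same: position metadata in, per-environment speaker gains out.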

While you can have height channels (adding the z axis) in traditional surround formats, they are most often associated with Dolby Atmos. There are some fascinating demo videos on how that works, including how you can use your phone as a 3D pan controller; I've experimented with that.

Dolby Vision does something similar with the metadata and trim passes, adapting the grade to the specific capabilities of the monitor you're watching on. That's Dolby's schtick - use interesting engineering to encode signals that then require licensable hardware on the playback side, which makes them tons of money. It goes all the way back to Dolby NR in cassette tape recorders. They compressed the signal during recording, and a matching expander on the playback side then pushed the noise floor down in the process. For those old enough to remember cassette tape recorders…
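As a side note, the companding trick behind Dolby NR is easy to demonstrate in a few lines. This is a deliberately crude sketch (real Dolby B/C used frequency- and level-dependent curves, not a fixed exponent), just to show why expanding after noisy storage pushes the hiss down:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 48_000)
quiet_signal = 0.01 * np.sin(2 * np.pi * 440 * t)   # a quiet passage

# "Recording": compress dynamics, boosting quiet material above the hiss
compressed = np.sign(quiet_signal) * np.abs(quiet_signal) ** 0.5
on_tape = compressed + 0.001 * rng.standard_normal(t.size)  # add tape hiss

# "Playback": the matching expander restores the dynamics, and in doing so
# pushes the hiss far below where it sat on the tape.
expanded = np.sign(on_tape) * np.abs(on_tape) ** 2.0

print("hiss level on tape:     ", np.std(on_tape - compressed))
print("hiss level after expand:", np.std(expanded - quiet_signal))
```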

Dolby Atmos has become a lot more accessible, with most DAWs now including the authoring side of the tools. The playback side is a bit more complicated; there are several file formats and renderers involved.

Hahah, Renee again, she knows everyone :smiley:

But isn't it in the end just a… normal discrete audio file with additional metadata then?

OK, seems like I want to read more about how Atmos flows through software then, and how I can get that out in real time to a receiver… For Dolby Vision I know the answer: it's just a PQ signal with metadata that we don't care about for reviewing the image.

I mean, if a sound studio makes a Dolby Atmos mix and I need to merge it with video, listen to it, and export it… what do I need?

I suspect I need a piece of software that creates the required metadata and sends it to a TV/receiver, like HDMI tunneling? Doesn't look like Flame can do this.

ELI5 :smiley:

3 Likes

It depends a bit on how they want to play back from Flame. Do they want to hear Dolby Atmos every time you hit start/stop play, or do they just want to listen to a Dolby Atmos version at some point while in the suite?

The scenario I had earlier this year was a director who wanted to listen to the Dolby Atmos mix on his Sonos home theater, and there was no good way of feeding that. In the end I made an .mp4 file for him that was copied onto a hard drive, which we plugged into his big screen TV that's connected via ARC to the Sonos receiver. The Samsung TV recognized the .mp4 as having Atmos audio and played it back properly using the decoder in the TV/receiver.

Getting there is a multi-step process though. In the DAW (Nuendo) you have to add the Dolby authoring tool on the output bus. There you can define all the beds, object channels, and details, and it maps them to the multi-channel output of the DAW studio connections for playback. From the DAW you can render the .ADM file, which is the Dolby Atmos master (a multi-channel audio file with all the metadata). But this is a master file; by itself it will not play back anywhere.
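If you're curious what that ADM master actually is: it's a Broadcast Wave file whose ADM metadata lives in dedicated RIFF chunks ('axml' for the ADM XML, 'chna' for the channel/object mapping). A minimal chunk walker makes that visible. The filename is hypothetical, and this naive walk assumes a plain RIFF file; very large masters use BW64/RF64, where the real data size sits in a 'ds64' chunk:

```python
import struct

def list_riff_chunks(path):
    """Print the top-level chunk IDs of a WAV/BWF file. An Atmos ADM
    master typically shows 'axml' and 'chna' next to 'fmt ' and 'data'."""
    with open(path, "rb") as f:
        form_id, _, form_type = struct.unpack("<4sI4s", f.read(12))
        print(form_id.decode(), form_type.decode())  # e.g. RIFF WAVE
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            print(chunk_id.decode("ascii", "replace"), chunk_size)
            f.seek(chunk_size + (chunk_size & 1), 1)  # chunks are word-aligned

list_riff_chunks("show_master.adm.wav")  # hypothetical path
```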

From there you take your video master and this ADM file to AWS MediaConvert and set up an encoding job. Change the audio codec to Dolby Digital Plus JOC and make a new .mp4 file. This will now play back on consumer hardware and decode correctly to the available speaker channels.
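For reference, the same job can be scripted. A rough boto3 sketch is below; the bucket paths, role ARN, and bitrates are placeholders, and MediaConvert has more required and optional fields than shown, so treat this as a starting point rather than a complete job definition. The key bit is the EAC3_ATMOS codec, which is what the API calls Dolby Digital Plus JOC:

```python
import boto3

# MediaConvert uses per-account endpoints, discovered at runtime.
mc = boto3.client("mediaconvert", region_name="us-east-1")
endpoint = mc.describe_endpoints()["Endpoints"][0]["Url"]
mc = boto3.client("mediaconvert", region_name="us-east-1", endpoint_url=endpoint)

job = mc.create_job(
    Role="arn:aws:iam::123456789012:role/MediaConvertRole",  # placeholder ARN
    Settings={
        "Inputs": [{
            "FileInput": "s3://my-bucket/video_master.mov",  # placeholder
            "AudioSelectors": {"Audio Selector 1": {
                "DefaultSelection": "DEFAULT",
                # Pull the audio from the separate ADM master file:
                "ExternalAudioFileInput": "s3://my-bucket/master.adm.wav",
            }},
        }],
        "OutputGroups": [{
            "OutputGroupSettings": {
                "Type": "FILE_GROUP_SETTINGS",
                "FileGroupSettings": {"Destination": "s3://my-bucket/out/"},
            },
            "Outputs": [{
                "ContainerSettings": {"Container": "MP4", "Mp4Settings": {}},
                "VideoDescription": {"CodecSettings": {
                    "Codec": "H_264",
                    "H264Settings": {"RateControlMode": "CBR",
                                     "Bitrate": 20_000_000},
                }},
                "AudioDescriptions": [{
                    "AudioSourceName": "Audio Selector 1",
                    "CodecSettings": {
                        "Codec": "EAC3_ATMOS",  # Dolby Digital Plus JOC
                        "Eac3AtmosSettings": {"Bitrate": 768_000},
                    },
                }],
            }],
        }],
    },
)
print("job id:", job["Job"]["Id"])
```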

As you can see, this process only works with renders, not in a real-time pipeline.

I don't know if you could extract the audio from this converted .mp4 file, add that to the Flame audio channels, and play it back. My suspicion is that the relevant metadata would get lost and the receiver would not recognize it.

There are bigger Atmos setups with actual hardware renderers that you might be able to set up. But again, I think those are meant for different scenarios.

The best solution may actually be the following - since the suite has a defined speaker config, the power of Atmos, translating between speaker layouts, doesn't apply once you're in one specific playback setting.

Assuming the material you have to work on comes with the .ADM file, take that ADM file, load it into a DAW (Nuendo or Pro Tools), set up an Atmos session, and configure your output to the specific speaker config in your suite. Render this out as a regular multi-channel audio file, and then load that into Flame mapped to the same speaker config. In other words, you use your DAW to decode the original Atmos mix and generate a mezzanine version that Flame can understand and handle. It would be a true representation and should be acceptable. And because the audio is now in Flame, it would properly play the Atmos mix in real time every time you hit start/stop.
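Before loading that mezzanine file into Flame, it's worth sanity-checking that it matches the agreed layout. A small sketch using the soundfile library; the filename and expected layout here are assumptions for illustration:

```python
import soundfile as sf  # pip install soundfile

# Channel counts per layout: floor channels + LFE + height channels
EXPECTED_CHANNELS = {"5.1": 6, "7.1.2": 10, "9.1.4": 14}

info = sf.info("ep101_atmos_mixdown_712.wav")  # hypothetical filename
print(info.samplerate, "Hz,", info.channels, "channels,", info.duration, "s")
assert info.channels == EXPECTED_CHANNELS["7.1.2"], \
    "channel count doesn't match the layout agreed with the sound studio"
```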

Assuming the mix does take advantage of the height channels (you should ask), 7.1.2 would be the proper setup that can reflect all the main points of an Atmos mix. If it's a simpler mix, you might argue that 5.1 is workable, but maybe not entirely representative. You could go further to 9.1.4, primarily for the front/back height differentiation, but that better be one awesome mix to need it.

It is, however, it has way more channels in it. Typically you might have 15 channels for the DX, FX, and MX base stems, and then up to 100 extra channels, one per object. So think of it as a 20-50 channel .wav file with some additional metadata. A much bigger file.

2 Likes

Here's some detail on what the authoring workflow looks like in Nuendo, with an example of a big screen film with extensive spatial audio, which you can see in the panner and Atmos renderer. Fun watch & good German accent:

1 Like

This is the key part - I've set the YouTube link to cue at the right part of the video:

[screenshot: Nuendo's Atmos renderer with beds, object channels, and the 3D panner]

On the left you see the first three rows, which are the beds; the rest are the object channels. On the right you see the 3D panner in top-down view. It's all animated during playback.

1 Like

Thanks a bunch, that makes a lot more sense to me now.

I will have to ask the clients for specifics on how they want me to listen to it. If I send them my speaker layout, the sound studio should be able to provide me with a discrete 7.1.2 mixdown as well, or whichever layout, right?

If I understand this correctly, it's:

The ADM file holds hundreds of tracks, basically the stems of the whole sound mix plus metadata saying what goes where - basically the whole “Pro Tools session”, with stuff grouped into objects and whatnot.

The Atmos renderer (which would be, let's say, an AV receiver in a home) then takes those “stems” and metadata, looks at the speaker setup (5.1, 7.1, 7.1.6…), and dynamically creates a mix with discrete audio channels to be sent to the speakers. Is that basically the idea?

I think when they requested that we can play back Atmos in the suite, what they want is the right hardware to be able to do 7.1.2 or whatever, which we can build; they know we aren't a sound mixing studio :slight_smile: This is a very long multi-episode project (1 year+).

I am extremely excited; if we get this job, it's my ticket out of this social media hellscape that is commercials right now. My brain isn't challenged anymore by the stupid problems that pop up with commercials… I don't care about TikTok safety margins, I am sorry :rofl::snowman_with_snow:

Basically all the request said was “can you play back Dolby Atmos” - typical producer nonsense.

1 Like

Yes, they can make you this render from their master.

Exactly. I think the limit is 118 object tracks at the moment (128 elements total, including the bed channels). So it's not the whole discrete session, but a heck of a lot more detail than traditional renders contain.

2 Likes

So basically I can't do anything with the ADM in any live playback situation, so no way to get Resolve/Flame to throw out Atmos data to my Atmos-capable receiver? (Like HDMI tunneling in DoVi.)

Dolby Atmos (and sound design in general, once you get past social media) is super cool and addictive. It's the audio world's equivalent of Flame.

Sitting here working on a 5.1 mix for a 2-hour documentary which is due Jan 15. It will be my holiday entertainment…

1 Like

Yes, that is a true master file, not intended for playback. I think there are hardware renderers for bigger studio builds that can read it, but then it wouldn't be synced/linked with your video. That's a setup where you play back audio and the video channel comes from the DAW.

So getting a mixdown with the proper speaker config is your path, and it's a genuine solution.

1 Like

Sounds good to me :slight_smile:

So I'll go get a nice 7.1.6-capable AV receiver, a bunch of decent speakers, rearrange my suite, and wait for profit.

1 Like

If you get the mixdown from the studio, you won't need a receiver. You just need a bigger audio interface on your Flame workstation with sufficient output channels (9.1.4 = 14 outputs), something like a Focusrite or UA Apollo. Then load a 9.1.4 audio track into Flame and make sure the channels are mapped to the right outputs.
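The mapping itself is just a fixed channel-order-to-physical-output table, something like the sketch below. Note the channel order here is an assumption (conventions differ between facilities), so confirm with the sound studio which order their mixdown actually uses:

```python
# One possible 9.1.4 channel order (9 floor + LFE + 4 height = 14 channels).
# Hypothetical convention - verify against the studio's delivery spec.
ORDER_9_1_4 = ["L", "R", "C", "LFE", "Lss", "Rss", "Lrs", "Rrs",
               "Lw", "Rw", "Ltf", "Rtf", "Ltr", "Rtr"]

# Map each track channel to a physical interface output (1-based):
routing = {name: out for out, name in enumerate(ORDER_9_1_4, start=1)}
print(routing)  # e.g. {'L': 1, 'R': 2, 'C': 3, 'LFE': 4, ...}
```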

Consumer receivers generally use different signals than studio gear (HDMI/optical vs. ADAT/AES, etc.), so most of the time studio hardware and consumer hardware don't mix well. Try feeding the 5.1 audio from your LG on the studio wall back into your interface so it goes out the same way as your DAW, and you'll lose some hair, if you have any left.

That’s when your Flame system’s volume controls look like this (Apollo X8 control software for 5.1 plus ref speaker pair and some other stuff - essentially a full multi-channel mixer):

1 Like

Oh… hmm, but if I get a multichannel WAV, that's just PCM, and I can output that via HDMI from the DeckLink card to my receiver? That's what I do for 5.1 all the time.

I like having the receiver/amp for AV sync and stuff, and I also prefer passive speakers.

Not sure where the issue would be with that :thinking: