It depends a bit on how they want to play back from Flame. Do they want to hear Dolby Atmos every time you hit play in the timeline, or do they just want to listen to a Dolby Atmos version at some point while in the suite?
The scenario I had earlier this year was a director who wanted to listen to the Dolby Atmos mix on his Sonos home theater, and there was no good way of feeding that. In the end I made an .mp4 file for him that was copied to a hard drive, which we plugged into his big screen TV that's connected via HDMI ARC to the Sonos receiver. The Samsung TV recognized the .mp4 as having Atmos audio and played it back properly using the decoder in the TV/receiver.
Getting there is a multi-step process though. In the DAW (Nuendo) you have to add the Dolby authoring tool on the output bus. There you define all the beds, object channels, and details, and it maps them to the multi-channel output of the DAW studio connections for playback. From the DAW you can render the .ADM file, which is the Dolby Atmos master (a multi-channel audio file with all the metadata). But this is a master file; by itself it will not play back anywhere.
From there you take your video master and the ADM file to AWS MediaConvert and set up an encoding job. Change the audio codec to Dolby Digital Plus JOC and make a new .mp4 file. This will now play back on consumer hardware and decode correctly to the available speaker channels.
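For the MediaConvert job, the key change is in the output's audio description. A minimal sketch of that part of the job Settings payload is below; the field names are from the MediaConvert API as I remember them (the codec enum `EAC3_ATMOS` is Dolby Digital Plus JOC), and the bitrate and coding mode are assumptions you should verify in the console before submitting a real job.

```python
# Sketch of the audio portion of a MediaConvert job's Settings payload.
# "Audio Selector 1" and the Eac3AtmosSettings values are assumptions;
# double-check them against the MediaConvert console/API reference.
audio_description = {
    "AudioSourceName": "Audio Selector 1",   # the input selector carrying the ADM audio
    "CodecSettings": {
        "Codec": "EAC3_ATMOS",               # Dolby Digital Plus with JOC
        "Eac3AtmosSettings": {
            "Bitrate": 448000,               # a common consumer-delivery rate
            "CodingMode": "CODING_MODE_9_1_6",
        },
    },
}

print(audio_description["CodecSettings"]["Codec"])
```

This dict would sit inside `Settings.OutputGroups[].Outputs[].AudioDescriptions[]` of the full job, alongside your video settings.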
As you can see, this process only works with renders, not in a real-time pipeline.
I don't know if you could extract the audio from this converted .mp4 file, add it to the Flame audio channels, and play it back. My suspicion is that the relevant metadata would get lost and the receiver would not recognize it.
There are bigger Atmos setups with actual hardware renderers that you might be able to set up. But again, I think those are meant for different scenarios.
The best solution may actually be simpler: the suite has a defined speaker config, and the power of Atmos, translating a mix between different speaker layouts, no longer applies once you're in one specific playback setting anyway.
Assuming that the material you have to work on comes with the .ADM file, take that ADM file, load it into a DAW (Nuendo or Pro Tools), set up an Atmos session, and configure your output to the specific speaker config in your suite. Render this out as a regular multi-channel audio file, and then load that into Flame mapped to the same speaker config. So you use your DAW to decode the original Atmos mix and generate a mezzanine version that Flame can understand and handle. It would be a true representation and should be acceptable. And because the audio now lives in Flame, it would play the Atmos mix properly in real time, every time you hit play.
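Before bringing the mezzanine render into Flame, it's worth sanity-checking the channel count and sample rate. A minimal sketch with Python's stdlib `wave` module, writing a dummy 10-channel (7.1.2) file as a stand-in for the DAW render (the file name is made up; note that `wave` can't read RF64 files, which very long renders may become):

```python
import wave

# Write 1 second of silent 10-channel 48 kHz / 24-bit audio as a stand-in
# for the DAW's mezzanine render, then read it back to verify the header.
CHANNELS, RATE, SAMPWIDTH = 10, 48000, 3   # 7.1.2, 48 kHz, 24-bit

with wave.open("mezzanine_712.wav", "wb") as w:
    w.setnchannels(CHANNELS)
    w.setsampwidth(SAMPWIDTH)
    w.setframerate(RATE)
    w.writeframes(b"\x00" * CHANNELS * SAMPWIDTH * RATE)

with wave.open("mezzanine_712.wav", "rb") as r:
    print(r.getnchannels(), "channels at", r.getframerate(), "Hz")
```

If the numbers don't match the speaker config you rendered for, the mapping in Flame will be wrong before you even start.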
Assuming that the mix does take advantage of the height channels (you should ask), 7.1.2 would be the proper setup that can reflect all the main points of an Atmos mix. If it's a simpler mix, you might argue that 5.1 is workable, but maybe not entirely representative. You could go further to 9.1.4, primarily for the front and back height differentiation, but that better be one awesome mix to need it.
It is, but it has way more channels in it. Typically you might have 15 channels for the DX, FX, and MX base stems, and then up to 100 extra channels, one per object. So think of it as a 20-50 channel .wav file with some additional metadata. A much bigger file.
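To put "much bigger" in rough numbers, here's the uncompressed PCM math at 48 kHz / 24-bit; the 24-channel figure is just an illustrative point inside the 20-50 channel range above.

```python
# Uncompressed PCM size: sample_rate * bytes_per_sample * channels * seconds
def mb_per_minute(channels, rate=48000, bytes_per_sample=3):
    """MB per minute of audio for a given channel count (48 kHz / 24-bit assumed)."""
    return rate * bytes_per_sample * channels * 60 / 1e6

print(f"5.1 (6 ch):  {mb_per_minute(6):.0f} MB/min")   # a plain surround mix
print(f"ADM (24 ch): {mb_per_minute(24):.0f} MB/min")  # ~4x the data
```

So even a modest 24-channel ADM master carries roughly four times the audio data of a 5.1 render of the same length.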