Can anyone tell me what the Flame equivalent of OCIO's Output - sRGB is? Trying to match a Nuke workflow that I have some doubts about, but I would at least like to be able to replicate it before arguing about it.

Assuming you’re in the ACES 1.1 color config for the project, I would assume this is the equivalent:

Screenshot 2024-05-27 at 6.42.41 PM
Screenshot 2024-05-27 at 6.42.54 PM


If you need to match it in your GUI, use ACES and then set the “gui” monitor setting to sRGB.

Basically Flame is way smarter than Nuke here: it does display management properly. If you set your GUI to Rec709 it will use Output - Rec709, etc., so that you can have multiple monitors with multiple colorspaces in the same room.

If you want an argument against Output - sRGB: it’s made to match an sRGB monitor that’s next to a Rec709 monitor in the same environment. It’s more of an “emulate the look of my Rec709 broadcast monitor on this sRGB monitor” transform; it’s not intended for actual output. If you are in an “sRGB” environment you still want to use Rec709 as an ODT, or even better something like FilmLight’s “office surround” SDR DRT…

Furthermore, the whole ODT situation with ACES is up for discussion, and they are really … bad.


Oh I’m going to read the heck out of that when I have a minute.

I was specifically trying to figure out how to use Output - sRGB as an input transform into ACES, basically jacking up its white values so that a LUT we got from everyone’s favorite Color place would output correctly, but we’ve collectively decided it’s a terrible idea and they can just get muddy whites back if they want us to use it.

Basically our issue is we are bringing in sRGB screen content, comping onto ACEScct plates (dumb and not our choice), and then sending them ACEScct shots.

The thing that is really annoying me is this: if I bring the screens in with an input transform to ACEScct, comp on the ACEScct plate, and then transform the whole thing to 709, the screens look great, a perfect match (white at .9). However, if I output the comp as ACEScct, bring it into Resolve, and use the ACES transform node there from ACEScct to 709, I get muddy whites (about .6 I think).

I like the idea of ACES but as far as standards go it doesn’t seem very standardized.

Ehhh, some setting seems wrong there, I don’t quite follow what is happening :joy: but it sounds like what you want in Flame is just the inverse view transform, so what Jan showed, just inverted and to ACEScct.


Yeah you’re not following because the whole thing doesn’t make much sense in a couple of different ways. Too much nonsense in too small a paragraph. :slight_smile:


Ah, so there are a few things co-mingled here.

It’s valid to use an inverted ODT as an input transform if you’re feeding material from another color pipeline back through for a second go-around at ACES. In this case an inverted sRGB ODT is a valid choice, though usually you use this more often for other output color spaces.

For sRGB as in screenshots or graphics, there should be other IDTs (e.g. texture) that map them properly.

Separately, I’m picking up on “jacking up the whites”. So I’m assuming that some of your material is coming in tone-mapped to 100 nits and is not matching the wider dynamic range of the rest of your input. In essence you want to undo the tone mapping that was applied.

Unfortunately there isn’t a fixed formula for that, as information was destroyed in the previous pipeline. So you will just have to color grade it to professional taste. No IDT will help with that.

This seems to be a pipeline that is not properly designed from a color management perspective.

debatable :smiley:


In the purist view one would never do that. You would only travel from camera through the color pipeline in scene-referred and come out through various output transforms - end of story. This is also well documented in the IMAGO photon path.

But theory and reality are different worlds for a reason. People do strange things, and as a result you will take outputs from one color pipeline and feed them back through another one.

If your output was sRGB or Rec709 you’re somewhat in luck, as those are color spaces that have always existed as inputs as well (one for graphics and one for basic & classic cameras). But what if your output was something else - like DCI-P3? There never was an input transform, because no traditional input device ever existed that would yield such material. DCI-P3 is a delivery color space, not a source color space.

So the inverse ODT is a fix to a problem we didn’t know we would have.

But it’s far from optimal. An ODT is by its nature potentially destructive, and thus not a function for which a precise inverse exists.

There is a question whose answer I’m not totally clear on - is a Rec709 inverse ODT identical to the Rec709 IDT?

The argument for yes is that both take Rec709 material and convert it to the ACES working space.

The argument for no is that the IDT takes Rec709 material and just linearizes the gamma and remaps the color primaries (the white point remains). The inverse ODT, however, should/could attempt not only that, but also to invert the tone mapping, even though there is no singular answer for that.
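To make the “IDT just linearizes and remaps primaries” side concrete, here is a rough sketch. This is my own illustration, not actual ACES or Flame code: a real Rec709 IDT targets AP0 with a D65-to-D60 adaptation, whereas I use the standard Rec709-to-XYZ matrix purely as a stand-in for the matrix step.

```python
import numpy as np

def rec709_inverse_oetf(V):
    """Linearize a Rec709 camera signal (inverse of the piecewise BT.709 OETF)."""
    V = np.asarray(V, dtype=float)
    return np.where(V < 0.081, V / 4.5, ((V + 0.099) / 1.099) ** (1 / 0.45))

# Stand-in for the primaries remap: linear Rec709 RGB -> CIE XYZ (D65).
# A real IDT would use a Rec709 -> AP0 matrix with chromatic adaptation here.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def rec709_idt_sketch(rgb_signal):
    """Two steps only: linearize, then a 3x3 matrix. No tone-map inversion."""
    return M @ rec709_inverse_oetf(rgb_signal)

# Full signal linearizes to exactly 1.0 -- nothing gets pushed above it.
print(float(rec709_inverse_oetf(1.0)))
```

The point being: there is no step in there that could reconstruct highlights, which is exactly the extra thing an inverse ODT would attempt.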

The Rec709 IDT maps full signal to 1.0 linear; it’s just a pure Rec709-to-linear conversion, basically reversing the photon path accurately. An “old” broadcast camera would fill its sensor and map that to full Rec709 signal. It’s just a 1/1.961 gamma.

So the inverse display transform is not the Rec709 IDT.

Rec709 encoding is also a 1/1.961 gamma while decoding is 2.4 … so even those aren’t inverses of each other :stuck_out_tongue:
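A quick numeric check of that point (my own sketch; the 1/1.961 figure is the commonly quoted pure-power approximation of the piecewise BT.709 OETF):

```python
def rec709_oetf(L):
    """Rec709 camera encoding per ITU-R BT.709 (piecewise, roughly 1/1.961 overall)."""
    return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

def bt1886_eotf(V):
    """Display decoding per ITU-R BT.1886: a plain gamma-2.4 power law."""
    return V ** 2.4

scene = 0.18                      # mid-grey, scene-linear
signal = rec709_oetf(scene)       # ~0.41 code value
display = bt1886_eotf(signal)     # ~0.12 on the display, not 0.18
print(signal, display)
```

Round-tripping encode then decode lands mid-grey well below where it started; that mismatch is the intentional end-to-end system gamma (~1.2), which is exactly why the encode and decode curves aren’t inverses.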

Anything tone-mapped from an HDR source into Rec709 is already non-ratio-preserving, i.e. not scene-referred, while Rec709 certainly can be scene-referred.

And yeah, as you said, reversing tone mapping is difficult, especially if you run an inverse tone mapping on non-tone-mapped material, like a screen-insert sRGB graphic with a full-white signal level - it would be mapped extremely high in linear. Then you put a tone-mapped sRGB photo into a Photoshop graphic element and now what.
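That failure mode is easy to show with a toy curve. The actual ACES tone scale is far more complex, but a simple Reinhard curve (my stand-in here, not the real ODT math) behaves the same way near white:

```python
def reinhard(x):
    """Toy tone map: compresses scene-linear values into 0..1."""
    return x / (1.0 + x)

def inverse_reinhard(y):
    """Inverting it blows up as the display value approaches full white."""
    return y / (1.0 - y)

print(inverse_reinhard(0.90))   # ~9 scene-linear: plausible for tone-mapped footage
print(inverse_reinhard(0.999))  # ~999 scene-linear: a full-white UI graphic explodes
```

A texture-style IDT would map that same white graphic to roughly 1.0 in linear, which is why graphics want a texture/input transform rather than an inverse display transform.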

That’s the thing: bad color management pipelines are “patched” with inverse display stuff, which really is nothing but a hack that should pretty much never be used - ever. There are always ways to avoid it.

What’s the dumb part? The screen UI not being in ACEScct? Or the fact that you have to provide an ACEScct comp back?

I don’t think I fully understand your situation… but because it’s what I’m most familiar with at this point, I would do your comp in ACEScg. I would use Color Transform nodes set to Input Transform. I wouldn’t convert on import or anything. Then for the screen comp part, it works best if you do all of the color matching upstream of the Color Transform on the screen UI, and match to a pipe of the plate that has a Color Transform set to Rec709. All of the match-whites, match-blacks, etc. works best in Rec709. If I don’t do it this way, say with a white UI, it will just go blown out and overpower everything else. Then I guess at the very end of your tree, use a Color Transform to pop back into ACEScct.