Mixed Color Spaces: Invert ODTs or not?

Hi Everyone,

We know how and why (to a certain degree) inverting ODTs can be problematic in some situations, but we’ve been able to take advantage of it in about 90% of our high-end TV ad projects over the past (many) years.

After talking with a few color specialists, engineers, artists (…) it seems that none of us could find the ideal solution to the problem explained below.
I know there are a few color science experts (and scientists) around here, including @doug-walker (Hi Doug!) so I’m hoping this could be a chance to nail this one down for good :slight_smile:
Dear expert friends, please chime in!

Mixed Color Spaces Problem

(display-referred vs. scene-referred)



Mix sRGB or BT.1886/Rec.709 sources with ACEScg and camera log sources; comp, render, and deliver in ACEScg (client’s specs) without affecting any colors in any source (at least visually).


Given Color Management policy/config

Syncolor (ACES 1.1) in Flame 2023.3, and the OCIO v2 VFX config (ACES 1.3, OCIO v2.1) in Nuke 14 and Maya/V-Ray.



Do not use any inverted view/display (Syncolor/ocio) transform.



If you convert an sRGB texture to ACEScg using an input transform (Flame), you’ll need to look at it under a gamma-corrected display (the untonemapped view in OCIO).

If you convert a camera log file to ACEScg, or receive material in ACEScg color space, you’ll probably need to look at it under an ACES 1.0 SDR display transform.

How do you comp together two things that are meant to be viewed differently?



  1. Is there a mathematically correct way to achieve this using ACES color science?

  2. If not, is our only option indeed to invert ODTs to convert display-referred material, as long as the visual output ‘looks’ correct and reacts nicely to additional post processing (grading, etc.)?
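For what it’s worth, the “mathematically correct” half of question 1 (the input-transform path for an sRGB texture) can be sketched in a few lines. This is purely illustrative, not production code; the matrix values are the commonly published Bradford-adapted linear-sRGB (D65) → ACEScg (AP1/D60) matrix, and there is deliberately no tone mapping anywhere in the chain:

```python
# Sketch: sRGB texture -> ACEScg via a plain input transform (no tone mapping).

def srgb_decode(v):
    """IEC 61966-2-1 sRGB decode: display code value -> linear light."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

# Commonly published Bradford-adapted sRGB(D65) -> ACEScg(D60) matrix.
SRGB_TO_ACESCG = [
    [0.61310, 0.33953, 0.04737],
    [0.07019, 0.91635, 0.01345],
    [0.02062, 0.10957, 0.86961],
]

def srgb_to_acescg(rgb):
    lin = [srgb_decode(c) for c in rgb]
    return [sum(m * c for m, c in zip(row, lin)) for row in SRGB_TO_ACESCG]

# Graphic white stays (approximately) 1.0 in ACEScg -- but the ACES SDR view
# transform will NOT map 1.0 back to display white, which is the whole problem.
print(srgb_to_acescg([1.0, 1.0, 1.0]))
```

The math itself is trivially invertible and artifact-free; the trouble only starts once you view that 1.0 through a tonemapped ODT.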


As we talked about directly,

I think the crux will always be mixing HDR (or scene-referred) and SDR (display-referred) material; you really can’t do it without a compromise.

I still believe the best option for those cases is to bring the HDR sources (log/ACES/linear, whatever) into SDR first by grading them, and then merge them with your SDR sources, as we do with client logos in ACES timelines etc.

It’s two different worlds, really. Filming SDR content with a camera would also make it look different from the source. Think of the following experiment (something I would love to do for a tutorial on srgb-texture in ACES, as that’s quite interesting):

You create an artwork on your computer, in Photoshop, on some kind of display, and you print it out using the most perfectly calibrated printer and display possible.

Now you take this printed image, film it on your table with a camera, and load the footage back into your computer. Does it match the source image you created digitally? No? Exactly! The printed image has almost no dynamic range, depending on the light hitting it and the paper it was printed on.

But when you use a scanner to scan the image, you should get the same result as your digital source image. The scanner is made to capture low-dynamic-range content, so it looks the same; you’re doing exactly the same as when you go sRGB → linear → sRGB without tonemapping, just in hardware :slight_smile:
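The scanner analogy can be shown numerically: a plain sRGB → linear → sRGB round trip is an identity, while inserting even a simple tone curve in the middle is not. A rough sketch, using Reinhard purely as a stand-in for a tone-mapped view transform (it is not the ACES curve):

```python
def srgb_decode(v):
    """sRGB code value -> linear light (IEC 61966-2-1)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_encode(v):
    """Linear light -> sRGB code value."""
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def reinhard(v):
    """Toy tone curve, standing in for any tonemapped view transform."""
    return v / (1.0 + v)

code_value = 0.5
lossless = srgb_encode(srgb_decode(code_value))              # scanner-style round trip
tonemapped = srgb_encode(reinhard(srgb_decode(code_value)))  # camera-style round trip

print(lossless, tonemapped)  # identity vs. a visibly shifted value
```

The first path is the “scanner in software”; the second is why filming (or viewing through a film-like transform) never matches the source.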

So it’s really two different worlds, and it’s hard to explain if you don’t have HDR monitors, where this whole thing becomes pretty obvious.

So it’s either expanding the SDR source to be HDR, which we do with an inverse view transform, which leads to artifacts, banding, and color shifts and is generally bad.

Or taking the HDR source down to SDR, which we do by grading it.


Thanks for chiming in @finnjaeger!

So yeah, there’s no way to avoid inverting an ODT if we don’t want to bring scene-referred sources down to display-referred and back up (to ACEScg, for example), thus downgrading them. And if your client explicitly says that colors must match 100%, grading the untonemapped version to “match” might not be an option.

A different issue, but a similar explanation of scene- vs. display-referred concepts there (from @toodee, Daniel Brylka).

I deal with this on a regular basis. The usual procedure is to invert the ODT and deal with the artifacts.


Haha! Yeah, we do that all the time in Flame, but CG leads and/or TDs don’t seem to like tonemapping, inverted or not… so we’re looking for impossible alternatives. Thanks for chiming in @milanesa.

What do they use to “see” their renders? No ODT?

The “colors must match” demands are either an easy problem because they’re overlaid graphics, or an impossible one because any graphic or object photographs differently in different light.

Lord knows I’ve tried to explain this to clients.

I’m going to start calling it “diegetic lighting” and hope that the fancy term will win them over.


A bit of a side comment: I don’t think the definition HDR = scene-referred and SDR = display-referred is correct. It happens to come out that way by coincidence. It’s a case of correlation being different from causation.

HDR = high dynamic range / SDR = standard dynamic range (commonly understood as > 100 nits vs. <= 100 nits on displays, though this is likely another correlation, not an absolute definition).
Scene referred = all color is device-independent in some defined theoretical color space.
Display referred = all color is encoded to match a specific display device’s color space.

In reality, if you want to eventually convert scene-referred material to a full range of displays, including HDR and SDR monitors, you logically want to use a scene-referred color space large enough to cover both, and as such an HDR-type color space, hence the correlation.

Technically, even Rec.709 could be considered scene-referred, if from there you convert it to Rec.709 displays, sRGB displays, or a Japanese 9300K-whitepoint display. What makes it display-referred is when you deliver it for a known display.


On the original question -

One of the issues comes down to whether all material is actually converted to a working color space, or just tagged and processed through viewing rules. Different systems handle this differently.

If all materials are converted into a common color space, and you have an ODT that maps this working color space to your display, then all the math should be straightforward (there is a special case around alpha/mattes), and it should all look right on the same monitor regardless of the material.

It creates a non-mathematical problem though: when you do this conversion, there used to be a common understanding of what pure white is, which is a key reference color in graphic design. In SDR, pure white was 255,255,255. In a scene-referred image, pure white also exists, but once it’s mapped to the display color space it may be way too bright from a viewer-experience standpoint. Therefore a new definition has to be found for what physical nit value you assign to GFX pure white to get a pleasing experience. The end result is that there is no ‘without affecting color’, since there no longer is a singular answer.
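To make the “new definition of pure white” concrete: ITU-R BT.2408 suggests an HDR reference (graphics) white of around 203 nits, which lands near 58% of the PQ signal range rather than at the top of it. A quick sanity check with the ST 2084 inverse EOTF, constants taken from the spec:

```python
# SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in nits -> normalized signal.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits):
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

# BT.2408's suggested HDR reference/graphics white: ~203 nits.
print(round(pq_encode(203.0), 3))  # lands around 0.58, far below signal max
```

So the SDR habit of “white = maximum code value” simply has no equivalent in HDR; a nit level has to be chosen, and any choice is a creative decision.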

The same is true for, for example, a candle. If you get an sRGB asset of a candle flame, is there a feasible notion of ‘without affecting color’? Does that mean the intent is maintained, or does it mean the specific color values are maintained? The original candle flame, if filmed with HDR capture, would have colors and brightness that don’t exist in sRGB. So should the sRGB flame be mapped in a way that returns it to its approximate actual look in the context of the scene-referred color, or should it be kept at its sRGB mapping, which will likely make it look dull in comparison to its context?

Which goes to @andy_dill’s point: the notion of ‘do not touch the color’ or ‘the colors must match’ is no longer a logical concept that can be applied. It made sense when we all remained in the more uniform but confining space of filming, finishing, and watching in Rec.709.

The fascinating thing is that, for the first time, we actually have the entire picture pipeline from capture to reproduction in a dynamic range that equals our eyesight. That’s a fantastic advance, and it is worth sacrificing some outdated notions for it.

In color pipelines that don’t actually convert material to the working color space, but operate in a mixed environment and then just apply viewing rules at the end, you can land in a place where the math doesn’t work, because you’re mixing disparate color mappings in a single image, which then becomes a color salad that no viewing rule applies to.


On the Maya/V-Ray side, they tend to want to use the untonemapped display in OCIO v2 (gamma-corrected in Flame). And I understand why.
On the Flame side, it’s ACES 1.0 SDR, except where specific content is better treated without an inverted ODT; then we look at things in context views, later in the branch, after the inverted transform for that specific element has been inserted.
In the end, we’re a team, and our CG friends do use ACES 1.0 SDR in most cases, but things have happened and might happen again (and/or to other teams).

Yeah, I totally didn’t want to equate them, just saying that the same applies to HDR vs. SDR as when we are talking graded display-referred vs. scene-referred camera originals.

It can totally be confusing; you can also have display-referred HDR :smiley:

@andy_dill & @allklier,
Yes this is always a good reminder, thanks for chiming in!

I should have given more context though.
There’s the beautiful theory, and maybe we should even be talking about the physical properties of materials and environments instead of color values, but I think we can agree on some general concepts and/or approaches.

To clarify the context:

  • On the scene referred side, if you’re doing cleanup, or a set extension on an ungraded plate for a feature film, you know that you will have to deliver something where you haven’t altered any pixel value outside of your work. So the color will be a 100% match in and out of your workflow.
  • On the display referred side, let’s just assume that a client will look at things on the same device when comparing his material and your renders. If your work consists of just adding a poster on a wall in neutral light with no perspective (flat on, facing camera), your very branding-oriented client will want the render and comp of his logo to look ‘as usual’, and sometimes “exactly” as usual.
    I think we can agree that in this case, the client is well within his rights to expect… a color match.
    If you had to do it with BT.1886/Rec.709-only sources, this would be a no-brainer, right?

I would be interested to see how closely a 3D LUT could mimic the difference between the two transforms (tonemapped or not). I’m planning on looking at this more tomorrow. The goal is to avoid the intermediate trip down to video that happens when applying the down/up hack. I’ll keep you guys posted.
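For that LUT experiment, the usual approach is to bake the difference between the two viewing paths into a .cube file by pushing a lattice of values through “transform A followed by the inverse of transform B”. A bare-bones sketch of the baking loop, with a placeholder identity transform standing in for the real concatenated processor (which you’d build in OCIO, Flame, etc.):

```python
def bake_cube(transform, size=33, path=None):
    """Sample `transform` on an RGB lattice and format it as a .cube 3D LUT.

    `transform` is a placeholder for the real concatenated processor
    (e.g. untonemapped view followed by the inverse ACES SDR view).
    """
    lines = ["LUT_3D_SIZE {}".format(size)]
    n = size - 1
    for b in range(size):            # .cube ordering: red varies fastest
        for g in range(size):
            for r in range(size):
                out = transform((r / n, g / n, b / n))
                lines.append("{:.6f} {:.6f} {:.6f}".format(*out))
    if path:
        with open(path, "w") as f:
            f.write("\n".join(lines) + "\n")
    return lines

# Identity transform just to exercise the writer; swap in the real processor.
rows = bake_cube(lambda rgb: rgb, size=3)
print(rows[0], len(rows) - 1)  # header plus 27 lattice entries
```

One caveat worth noting: a LUT baked this way only samples the [0, 1] lattice, so scene-referred values above 1.0 would need a shaper/log transform in front of it.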

And yes, we’ll keep inverting ODTs everyday whenever it makes sense, I’m a big fan of it :slight_smile:

I don’t think that’s a very reasonable request, to be honest. Having a poster in a real scene match the exact look, after the ODT, of what the graphic designer did won’t “look real”, as it wouldn’t match the scene luminance and the reflectivity of said poster. A screen insert → sure, because it’s emissive it could be anything, and totally different from the scene… But I mean, sure, creative choice and everything :man_shrugging:

The values would need to be out of this world, which is what inverting the ODT creates, and then you can’t forget that there is a grading step afterwards.

Grading basically can’t touch anything then, and has to use the same ODT that you inverted. If the graphic is supposed to be “as is” in the very end, then what are you gaining from comping that element onto the scene-referred plate vs. doing it after grading?

Don’t get hung up on the different viewing processes (untonemapped, ACES SDR, and so on); that doesn’t matter. What matters are the underlying values: they all need to correlate and make sense, and a comp has to look good no matter what viewing transform or color space is used. If you invert an ODT, you are painting yourself into a corner, having to use that same ODT to keep things from breaking or shifting.

I actually almost never invert ODTs; it creates too many issues and color shifts. I comp logos and graphics on in display-referred space only, after grading. If I get elements to put into my shots, like a screen replacement, I use srgb-texture as my IDT and just expose it up in linear space to fit the screen luminance of the scene.
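That “expose it up in linear” step is just a gain applied after the srgb-texture decode; stops of exposure are powers of two. A tiny sketch (the +2.5-stop figure is made up for illustration, not a recommendation):

```python
def srgb_decode(v):
    """sRGB code value -> linear light (the srgb-texture style decode)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def expose(rgb, stops):
    """Gain in scene-linear space: +1 stop doubles the light."""
    gain = 2.0 ** stops
    return [srgb_decode(c) * gain for c in rgb]

# Screen insert decoded with srgb-texture, then pushed up to sit at a
# plausible emissive-screen level in the scene (illustrative value).
print(expose([1.0, 1.0, 1.0], stops=2.5))
```

Because the gain happens in linear, it behaves like adding light to the screen rather than clipping or bending the encoded values.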

Reminds me of a fast food commercial we did where we had to put 2D photographs of burgers in a 3D environment. Back then I knew nothing about ACES and such, and CG decided to use “Filmic Blender” as the main OCIO config, so in comp we had the biggest issues, as we couldn’t even invert that ODT. It turned out… not great. We should have done it the other way; we ended up doing a lot of stupid fixes in grading instead of just doing it properly.


In reference to what? Your brochure? Your iPhone? Your Super 8 Kodak stock? That billboard? Ah, I know… this JPEG the marketing department sent. Been there… over and over again.

I also like inverted ODTs, BTW.


1 - Doing clean-up

Yes, everything should remain pixel accurate, but presumably you are not importing foreign content into this plate, so there shouldn’t be any color space conflicts. Care has to be taken at every step not to introduce color changes through adverse color space operations, and any operations that are unavoidable must have inverses in place.

2 - Comping in material

It’s reasonable for the client to look at the before and after and expect a ‘perceptual match’ of his material, not necessarily a pixel-value match. That perceptual match will potentially be a different grade in an HDR master than in an SDR master. It’s up to the compositor and colorist to achieve this perceptual match.

In the case of product shots, yes, branded product colors and Pantone chips should be accurate between the grade, the Pantone chip displayed natively on the same display (which is a whole story of its own), and the physical product under appropriate lighting conditions.

That’s exactly where bringing in sRGB or graphic textures gets tricky, because they have to be color-space transformed so they behave in a perceptually accurate way in the target color spaces, whether that is by converting them into a unified working color space with appropriate ODTs, or by mapping them through the appropriate ODT viewing rules without a round trip through a working color space.

Not sure where this is going …
Basically, the conclusion would be that my question is not valid because the situation I’m trying to describe doesn’t exist, clients are total idiots, and presumably I (and my team) don’t understand the theory of light, color representation, capture devices, color spaces, LUTs, and such… ok :slight_smile:
I thought I might be missing something, but not that much :upside_down_face:

I don’t see it that negatively… Apologies if I came across that way…

In some ways we tried to frame the problem and the theory of it, as this grounds the conversation. Then comes the practice. And the practice may be different in different apps (Flame, Nuke, Resolve, etc.), though your question was Flame specific.

In your original statement - you specified the goal as ‘without affecting colors in any source (at least visually)’, which is the same we talked about perceptual matching.

Your problem statement says that you would need to view things under different circumstances within the comp. We haven’t answered that question directly, but the workflow is not quite right if that is happening, because the final product must be viewed in a single fashion, based on the viewing transform from the working color space. There may be different ways of viewing it for different destination displays, but that’s a separate matter. So yes, something may be off in your described workflow.

Your question #1: Can this be done mathematically in ACES? The answer should be yes. That is one of the reasons ACES was created, and people do it in Nuke and Resolve all the time. I’m certain it can also be done in Flame, but I’m not as well versed there yet, as I’m only a year into my Flame journey after many more years elsewhere.

Maybe @finnjaeger and others can add to this so you get a more actionable answer.


Sounds like it’s a frustrating situation.

Sadly, there are only two options available in your situation.

  1. Bad color, good math = Input Transforms
  2. Good color, bad math = Inverted View Transform

And, if you are working on any other project and owned the entirety of the pipeline, you’d probably be able to:

  1. Comp graphics on top of Rec.

Good luck.


I think this may require some more investigation. I can run this by some folks that I’m connected with that have deeper knowledge.

Logically, an input transform is the right math. But if it’s not giving you the right results, either this specific transform has a defect or it’s not the right one to use.

Seems like the fix is to find the transform that does the right math and also gives the expected result. Not to invert something or hack it otherwise.

Would it be possible to get a setup that demonstrates the problem? You can DM me if it can’t be shared on the forum.


By the way, @Stefan, nobody thinks you, your team, your studio, or your clients are dumb. While we haven’t met, rumor on the street is that you know your stuff, your team knows their stuff, and the studio you represent knows its stuff. It’s a frustrating problem, and as long as you have both options at your disposal, one will suck less than the other and will get your stuff approved. Eventually. :slight_smile: