View transform vs input transform

So I’ve been comparing these two methods of converting to Rec709. While a view transform gives the correct colour and gamma, it does appear to add some clipping in the whites, whereas an input transform doesn’t, but then the gamma etc. doesn’t match my reference. Any suggestions?

There is no correct answer. One is visually better, the other is mathematically better.

This is assuming that this is Rec709 camera material, not graphics. In that case, when the Rec709 material was rendered it was tone-mapped, which is destructive. Now you’re trying to reverse something that was lossy.

Generally speaking, the input transform is for original camera material (where it was recorded in Rec709 on the camera vs. LogC/Log3G10/etc.). The inverse view transform is meant for material that has already gone through a color pipeline and was tone-mapped. It does a better job of trying to undo that, but it remains a compromise.
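
If it helps to see why the inverse is only a compromise, here’s a toy sketch. A simple Reinhard-style curve stands in for the view transform’s tone map and 8-bit quantisation stands in for baking into a Rec709 QuickTime; none of this is Flame’s actual math, it just shows where the information goes missing.

```python
import numpy as np

def tone_map(x):
    return x / (1.0 + x)          # compresses highlights toward 1.0

def inverse_tone_map(y):
    return y / (1.0 - y)          # exact inverse of the curve above

scene = np.array([0.18, 1.0, 4.0, 200.0, 500.0])    # scene-linear values

display = tone_map(scene)
quantised = np.round(display * 255) / 255            # the lossy step: 8-bit codes

recovered = inverse_tone_map(quantised)
print(recovered)
# Mid-tones come back close to the originals, but 200 and 500 collapse to
# the same 8-bit code and both return as the same value, so the original
# highlight detail is gone for good.
```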

If you want to know the bare minimum…

If you want to know slightly more…

There’s a 14-day trial. Cancel after watching. Not trying to get you to sign up; it’s just that this is a super simple, bare-minimum approach that may be your style.

Source material is ACES EXRs; I’m adding a show LUT and CC to match an editorial QT.

Like Orlando, Florida, Rec709 is a destination, but you would not want to work there.

Which is to say, when working on log or linear media you should view it through the view (not input) transform, but not convert it to Rec709 until output.

On output, apply the view transform, which will indeed clip some values, but that always happens when converting to Rec709. Since it’s a display format you don’t need the other values; you are in “if it looks right it is right” territory.
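
As a rough sketch of that ordering (toy curves and made-up names, nothing Flame-specific): keep the working media scene-referred, apply the view transform only for the monitor while you work, and bake the same rendering only when you publish.

```python
import numpy as np

def rec709_view(scene_linear):
    """Toy stand-in for a Rec709 view transform: tone map, then encode."""
    tm = scene_linear / (1.0 + scene_linear)         # toy highlight roll-off
    return np.clip(tm, 0.0, 1.0) ** (1.0 / 2.4)      # toy display encode

working_media = np.array([0.05, 0.18, 1.0, 9.0])     # stays scene-linear on disk

monitor_preview = rec709_view(working_media)   # applied for viewing only
# ...grade/comp against working_media, judged via monitor_preview...

deliverable = rec709_view(working_media)       # Rec709 baked only at output
```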

Andy, this is on the output prior to publishing as a QT.
Just odd that the input transform to Rec709 doesn’t clip.

Just odd that the input transform to Rec709 doesn’t clip

I agree with the advice already shared in this thread but will add a bit of detail.

There are potentially two clamps when going from scene-referred to display-referred colour spaces. The tone-map transform compresses and potentially clamps stuff that is way out of range/gamut. That is where most of the work happens. Then the display transform may do some additional clamping (depending on the specific transforms being used).
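
To put toy numbers on those two stages (illustrative curves only, nothing like the actual ACES math):

```python
import numpy as np

scene = np.array([-0.2, 0.18, 1.0, 12.0])   # scene-linear, including an
                                            # out-of-gamut negative

# 1) Tone-map stage: clamps what is far out of range/gamut and compresses
#    the highlights. This is where most of the work happens.
clamped = np.clip(scene, 0.0, None)          # the negative is lost here
tone_mapped = clamped / (1.0 + clamped)      # toy highlight compression

# 2) Display stage: encode for the display; depending on the transform it
#    may clamp to [0, 1] again before encoding.
display = np.clip(tone_mapped, 0.0, 1.0) ** (1.0 / 2.4)

print(tone_mapped)   # roughly [0, 0.15, 0.5, 0.92]
print(display)       # everything now sits inside [0, 1]
```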

If you’re using a typical show LUT, the tone-map and display transforms are combined and they typically clamp to [0,1]. However, in the ACES transforms that ship with Flame, the display and tone-map are separate steps and the transforms to video/display colour spaces do not clip values above 1. And as of the default ACES 2 config for Flame 2026, negative values are preserved too, mirrored around the origin. This is done in order to support legal/full/legal round-trip workflows where there is extended-range data to be preserved.

So if you use the Input Transform tool to convert from a scene-referred to a display-referred colour space, I would expect to see a lot of values outside [0,1]: there is no tone-map to compress out-of-range values (because you aren’t using the View Transform), and the display transforms (that ship with Flame, anyway) don’t clamp.
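
To make that last point concrete, here’s the same toy scene pushed through a display encode only, with no tone map and no clamp. This is a sketch with a made-up gamma curve, not the actual transforms that ship with Flame, and the real conversion also involves a primaries matrix, which is skipped here.

```python
import numpy as np

scene = np.array([-0.2, 0.18, 1.0, 12.0])   # same toy scene as above

def encode_gamma(x, g=2.4):
    # Encode for display without clamping; negatives are mirrored around
    # zero, similar in spirit to the extended-range behaviour noted above.
    return np.sign(x) * np.abs(x) ** (1.0 / g)

encoded = encode_gamma(scene)
print(encoded)   # roughly [-0.51, 0.49, 1.0, 2.82]: plenty of values
                 # outside [0, 1], so nothing looks "clipped"
```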
