Rec709 to Arri 4?

Does anyone have a decent color management node recipe for going from Rec709 to ARRI? I'm building a custom node that goes UHD-video to ACES, then an inverted LogC4/AWG4-to-ACES, and the color shift is just too much. It's kicking up some really neon blues I just can't deal with.

Oh, and the straight Input Transform is killing all the saturation.

@onlycarlyouknow -

Project Color Policy: ADSK ACES 1.1

Batch Group
 -> Color Management Node
 -> Colour Transform
 -> Custom CTF Builder
     -> display: Rec709 to CIE-XYZ
     -> primaries: CIE-XYZ to ACES
     -> camera: (invert) LogC4_AWG4_to_ACES
     -> Tagged Color Space: LogC4 / ARRI Wide Gamut 4
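
If it helps to sanity-check what that chain does numerically, here is a rough Python/numpy sketch of the equivalent matrix math, stopping at linear AWG4 (the LogC4 encode follows, per the tagged colour space). The helper names are mine, not Flame's, and the AWG4 chromaticities are transcribed from ARRI's published spec, so verify before trusting it:

```python
# Sketch of the matrix math the node chain above performs, stopping at
# linear AWG4 (the LogC4 encode follows, per the tagged colour space).
# AWG4 chromaticities transcribed from ARRI's published spec -- verify
# against the official document before trusting this.
import numpy as np

def rgb_to_xyz_matrix(prims, white):
    """Build a linear RGB -> CIE-XYZ matrix from xy chromaticities."""
    P = np.array([[x / y, 1.0, (1 - x - y) / y] for x, y in prims]).T
    W = np.array([white[0] / white[1], 1.0,
                  (1 - white[0] - white[1]) / white[1]])
    return P * np.linalg.solve(P, W)  # scale columns to hit the white point

def bradford_cat(src_white, dst_white):
    """Bradford chromatic adaptation matrix between two white points (XYZ)."""
    B = np.array([[ 0.8951,  0.2664, -0.1614],
                  [-0.7502,  1.7135,  0.0367],
                  [ 0.0389, -0.0685,  1.0296]])
    gain = (B @ dst_white) / (B @ src_white)
    return np.linalg.inv(B) @ np.diag(gain) @ B

def white_xyz(x, y):
    return np.array([x / y, 1.0, (1 - x - y) / y])

D65, D60 = (0.3127, 0.3290), (0.32168, 0.33767)      # ACES white is ~D60
REC709 = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]
AP0    = [(0.7347, 0.2653), (0.0, 1.0), (0.0001, -0.0770)]
AWG4   = [(0.7347, 0.2653), (0.1424, 0.8576), (0.0991, -0.0308)]

rec709_to_xyz = rgb_to_xyz_matrix(REC709, D65)
ap0_to_xyz    = rgb_to_xyz_matrix(AP0, D60)
awg4_to_xyz   = rgb_to_xyz_matrix(AWG4, D65)
cat           = bradford_cat(white_xyz(*D65), white_xyz(*D60))

# display + primaries steps: Rec709 -> CIE-XYZ -> ACES (AP0)
rec709_to_ap0 = np.linalg.inv(ap0_to_xyz) @ cat @ rec709_to_xyz
# camera step, inverted: ACES (AP0) -> linear AWG4
awg4_to_ap0 = np.linalg.inv(ap0_to_xyz) @ cat @ awg4_to_xyz
ap0_to_awg4 = np.linalg.inv(awg4_to_ap0)

blue = np.array([0.1, 0.1, 1.0])             # a saturated Rec709 blue
print(ap0_to_awg4 @ rec709_to_ap0 @ blue)    # well-behaved, nothing neon yet
```

The matrices themselves are benign; Rec709 fits comfortably inside AWG4, so the neon blues have to be coming from the inverted view transform, not from this part.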


Perhaps make an ARRI 4 to Rec709 node, then invert it?


Won't really work; you can't expand Rec709 to fit into ARRI, and even if you knew the exact transform used, you can't simply invert such a heavy operation.

It's the same issue as trying to get Rec709 into ACEScg: you need to consider the whole chain.

In the end you are filming a display showing Rec709 with an Alexa… so the results are as expected.
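
To make that concrete, here is a toy sketch; a simple Reinhard-style curve stands in for the real (far heavier) view transform, and the function names are mine. Display-referred Rec709 carries nothing above 1.0, so any inverse has to extrapolate, and a channel sitting at 1.0 explodes. That is exactly where the neon blues come from.

```python
# Toy illustration: a simple Reinhard-style curve stands in for the (far
# heavier) real view transform. Display-referred Rec709 carries nothing
# above 1.0, so an inverse has to extrapolate, and a channel sitting at
# 1.0 explodes.
import numpy as np

def view_transform(x):
    """Stand-in display rendering: compress scene-linear into 0-1."""
    return x / (x + 1.0)

def inverse_view_transform(y):
    y = np.minimum(y, 0.999)  # exactly 1.0 would map to infinity
    return y / (1.0 - y)

# A saturated Rec709 blue whose B channel was clipped at 1.0 somewhere
# upstream (capture, grade or delivery):
blue = np.array([0.10, 0.20, 1.00])
print(inverse_view_transform(blue))  # -> [0.111, 0.25, 999.0], i.e. neon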

UHD video is probably Rec2020 as well, isn't it?

Depending on how this ends up, whether I bake in the view transform at the end or not, and what kind of comping I need to do, I would just use one of the regular Rec709-into-ACES workflows:

Var 1) Inverse display: Rec709 to LogC4. (There are no inverses of the newer REVEAL LUTs that I know of, so pick the Rec709 transform you want to invert from; it's all wrong anyhow.) If you want Rec709 → LogC4 → Rec709 to look the same, the first and last Rec709 transforms have to be inverses of each other; that's what the inverse display transform does, so pick the same one for both. If this goes out to color, though, you won't be able to provide them with realistic values.
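
For reference, the LogC4 curve itself is published and easy to implement; it's the inverse display step in front of it that carries the approximation. Here is a sketch with the constants transcribed from ARRI's LogC4 spec; double-check them against the official document before using:

```python
# LogC4 encode/decode, with constants transcribed from ARRI's published
# LogC4 specification -- verify against the official document before
# relying on this.
import numpy as np

a = (2.0**18 - 16.0) / 117.45
b = (1023.0 - 95.0) / 1023.0
c = 95.0 / 1023.0
s = (7.0 * np.log(2.0) * 2.0**(7.0 - 14.0 * c / b)) / (a * b)
t = (2.0**(14.0 * (-c / b) + 6.0) - 64.0) / a

def logc4_encode(x):
    """Relative scene exposure (linear AWG4) -> normalized LogC4 signal."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(x < t, (x - t) / s,
                    (np.log2(a * x + 64.0) - 6.0) / 14.0 * b + c)

def logc4_decode(e):
    """Normalized LogC4 signal -> relative scene exposure."""
    e = np.asarray(e, dtype=np.float64)
    return np.where(e < 0.0, e * s + t,
                    (2.0**(14.0 * (e - c) / b + 6.0) - 64.0) / a)

print(logc4_encode(0.18))  # ~0.28: mid grey lands where ARRI says it should
```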

This is analogous to filming a display/monitor with an Alexa.

Var 2) Convert from Rec709 "texture" to the working space. This makes a lot more sense mathematically, as it's the same process as filming a printed copy of the source with an Alexa. It will look pretty "flat", since 1.0 in the Rec709 source translates to maximum diffuse, but that's also how a printed-out piece of paper looks when filmed with a high-dynamic-range camera.
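
A sketch of what var 2 amounts to; the 2.4 exponent is my assumption for a BT.1886-style display decode, and your pipeline may use a different one:

```python
# "Texture"-style conversion, as a sketch: undo the display encoding and
# treat the result as scene-linear reflectance, so display white (1.0)
# lands at maximum diffuse -- hence the "flat" look. The 2.4 exponent is
# an assumption (BT.1886-style display EOCF); your pipeline may differ.
import numpy as np

def rec709_texture_to_linear(rgb, gamma=2.4):
    """Display-referred Rec709 -> scene-linear 'reflectance' values."""
    return np.power(np.clip(rgb, 0.0, 1.0), gamma)

white = rec709_texture_to_linear(np.array([1.0, 1.0, 1.0]))
print(white)  # [1, 1, 1]: diffuse white, nowhere near a real highlight
# Follow with the Rec709 -> working-space matrix; no value can exceed
# max diffuse, which is exactly why it reads as flat next to real plates.
```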

Think about reality a little bit; that's the main thing when traversing between display-referred and scene-referred imagery. It's not an issue of finding the right color management settings; it's more a mismatch between the actual result and the visual expectations built up from working with display-referred values for 30 years…

This has always been the case in reality. Think also about printed backdrops; they don't really work, do they? If you only see the backdrop it looks real, but not with people and actual lighting in front of it. Then you had translights and LED walls, which are more like an inverse display transform… and translights and LED walls really don't work well in HDR… go figure, not enough dynamic range :slight_smile:


I neglected to say "View Transform…" And as you (Finn) say, it's all wrong anyhow. We don't know the details of Carl's comp, but if it gets you through the day, use it. To the casual viewer the difference is not noticeable. I have done it numerous times, and I have never had a client say, "You inverted a linear2Rec709 view transform. That's unacceptable." By the same token, my work has been pre-coloured, and it goes on TVs and mobile devices for 6 to 30 seconds at a time. Yes, some things are longer, but no one watches those.


If you go from a Rec709 inverse view into the same Rec709 view, that is "fine"; the round trip cancels out, so you are essentially bypassing ACES.

A better way is to apply the view transform to the scene-referred images and then merge them with the Rec709 material.

Let's say you have ACEScg 3D renders to put into graded plates. Do all operations on the CG in linear, add a color management view transform node at the end, and then merge it over your graded backplate; that way there are NO shifts to the backplate.
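
An order-of-operations sketch (placeholder functions, not actual Flame nodes), showing why the backplate can't shift when you transform the CG down to the plate's encoding first:

```python
# Toy order-of-operations sketch (placeholder functions, not Flame nodes):
# transform the CG down to the plate's display encoding first, then merge.
# The graded plate never passes through a colour transform, so it cannot shift.
import numpy as np

def view_transform(x):
    """Stand-in for the view transform (same toy curve as above)."""
    return x / (x + 1.0)

def over(fg_rgb, fg_alpha, bg_rgb):
    """Plain premultiplied 'over' merge."""
    return fg_rgb + (1.0 - fg_alpha) * bg_rgb

cg_linear     = np.array([0.4, 0.5, 4.0])     # scene-linear ACEScg render
cg_alpha      = 0.6                           # unpremult before the view transform
graded_rec709 = np.array([0.30, 0.30, 0.35])  # display-referred graded plate

# Big -> small first, then merge in the plate's own encoding:
fg = view_transform(cg_linear) * cg_alpha
print(over(fg, cg_alpha, graded_rec709))
```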

If you have a mismatch between the inverse view and the final view transform, it's not fine anymore. So if you have an sRGB graphic to put into an Alexa plate, and it then goes to scene-referred for grading, the values are completely skewed.

It all comes down to matching dynamic range/encoding before merging. Making the big thing small and then merging is better than making the small thing big, merging, and then making everything small again; just like with scaling.
