DPX export problem

Thanks Andy, I just tried your hack and I have to say it is working; I can’t see why you consider it a bad hack. To be honest, I still find it hard to understand the science and the complications behind this terrible trinity: Input Colour Space vs Working Space vs Viewing LUT. How is it possible that there is no one right way to load media, compose and output? Coming from a company that used only in-house software (and Flame), we worked only with Cineon and DPX, and the science behind them was simple and neat. Now I feel that no matter what combination I try on any future project, I will probably commit the same errors…
One final piece of info: we had confirmation that the camera used for the live-action part was an Arri.

Thanks again Andy for the time and effort; it is more than amazing to have people like you and like all the others who helped. Another reason not to switch to any other compositing software.
Respect

2 Likes

At the end of the day, if it’s working, it’s working. :sweat_smile:

The reason I call it a hack is because using a view transform isn’t lossless. They’re designed to take HDR color spaces like ACES or LogC and convert them into a range that looks nice on a monitor. By inverting them, you do promote the image to the HDR space but it may have issues. It can clamp things and cause other problems depending on the view transform being used. It’s a last resort, but I still employ it with some regularity as it’s the only way I know to keep the image looking the same across unknown color spaces.
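To see concretely why the inversion can be lossy, here’s a toy sketch (purely illustrative; a real view transform is far more sophisticated than this clamp-plus-gamma stand-in):

```python
def toy_view_transform(x):
    """Map a scene-linear value into 0-1 for display: clamp, then gamma-encode."""
    return min(max(x, 0.0), 1.0) ** (1.0 / 2.4)

def toy_inverse_view_transform(y):
    """Attempt to recover the scene-linear value from the display image."""
    return y ** 2.4

# A displayable value round-trips fine...
print(toy_inverse_view_transform(toy_view_transform(0.5)))  # ~0.5

# ...but an HDR highlight does not: everything above 1.0 was clamped,
# so the inverse can only ever give back 1.0.
print(toy_inverse_view_transform(toy_view_transform(8.0)))  # 1.0, not 8.0
```

Every scene value above the clamp collapses to the same display code, which is exactly the kind of damage you risk when un-viewing footage through an inverted display transform.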

There is a “right” way, but you need communication and coordination for it to work–something that isn’t always present in our field. Clients need to tell you “this is the camera we used, this is the color space the camera was set to” and then you can build out accordingly. In this situation, Flame’s color management works fantastically well.

But if you are in a situation where you’re unsure of how the images were created before you got them it can be very hard to sort out how to make them work.

It can get complicated in so many ways. Nuke still uses a basic gamma correction for its “linear” monitor lut and conversions by default (you can load up OCIO and use ACES, but you have to know that you want to use ACES and how to turn it on). That in turn causes CG departments to ignore ACES and just hack it all together like it’s 2012, when all you needed to linearize things was to remove the gamma encoding. Flame, similarly, boots with the “Legacy” color policy in place even though it is by far the least useful one available.

It’s tough, because color space is both frustrating and boring to most people. Given how many tiny things have to be re-learned I don’t blame people for tuning out. But hopefully we’ll all understand it a little more, day by day, until it’s as obvious to us as anything else we do.

1 Like

and just to answer the input/working/viewing question:

Input color space is whatever the upstream colorspace of the node is. So if you have an Arri LogC clip, the input color space is “Arri LogC”, for example.

Working color space is the “node-output” colorspace. It’s called Working as it’s usually the color space you want to work in. Usually this will be a form of linear. The one I recommend is ACEScg because it’s designed to be useful and to convert to other color spaces well.

Viewing luts convert the working color into color that looks correct on your monitor. If you have an ACEScg image it may have values over 50, but the monitor will only display values between 0 and 1, so the viewing lut converts those values so the image looks good on your monitor. I think of viewing luts as sunglasses–they alter the light so you can see the image more effectively.
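The sunglasses idea can be sketched with a toy tonemap (illustrative only; this is a Reinhard-style x/(1+x) squeeze, not any real Flame viewing LUT):

```python
def toy_viewing_lut(x):
    """Squeeze a scene-linear HDR value into the 0-1 range a monitor can show."""
    return x / (1.0 + x)

print(toy_viewing_lut(0.18))  # mid-grey stays visible, ~0.153
print(toy_viewing_lut(50.0))  # a linear value of 50 lands just under 1.0, ~0.98
```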

2 Likes

To make sure this is wrapped up correctly: in the case where I am mainly using their ARRI footage, with or without 32-bit CG elements, do I have to do any special conversion, since I am loading their log clips natively and working on them? Do I have to apply any Colour Management nodes to the ARRI footage when I load it, or do I have to do the same when I write or render it?

1 Like

It can take a village of "Andy"s. :wink:
I was helping out on this on Facebook. Figured I’d add to this thread, since search is better here than on Facebook.

Simplified notes from that discussion:

Getting 32b data from EXR to DPX without banding.

I used to create a lot of custom transforms for LUTs but rarely do that anymore.

I mostly stick w inputTransforms and occasionally viewTransforms.

Keeping it simple gives fewer headaches and a clearer workflow in most cases.

ColorManage types:

TagOnly : Doesn’t change data, only the tag in metadata. Bypass in viewer will show w/o viewLut

InputTransform : Change from one source to another. Go back and forth btw linear and log spaces. All comp work here.

viewTransform: This is what you use to go to/from display colorspaces and baked in viewLuts. To/From sRGB, rec709, etc.

Camera data is recorded logarithmically and ranges from 0 (no info) to 1 (full white).

The raw data is accurate but displays incorrectly.

Apps, like Flame, use a ViewLut so the image looks correct without changing the data.

Log data is great for camera sensors and color, but most comp work should be linear (preferably w a large gamut/ACES).
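For the curious, the log encoding itself is just a published formula. Here’s a sketch of the ARRI LogC (v3) curve using the EI 800 constants from ARRI’s white paper (scalar only; other EIs use different constants):

```python
import math

# ARRI LogC v3 constants for EI 800
CUT, A, B = 0.010591, 5.555556, 0.052272
C, D, E, F = 0.247190, 0.385537, 5.367655, 0.092809

def linear_to_logc(x):
    """Encode a scene-linear value into LogC."""
    return C * math.log10(A * x + B) + D if x > CUT else E * x + F

def logc_to_linear(t):
    """Decode LogC back to scene-linear."""
    return (10 ** ((t - D) / C) - B) / A if t > E * CUT + F else (t - F) / E

print(linear_to_logc(0.0))   # sensor black sits around 0.093 in LogC
print(linear_to_logc(0.18))  # 18% grey lands around 0.391

# At 32-bit float precision the round trip is effectively lossless:
x = 2.5
assert abs(logc_to_linear(linear_to_logc(x)) - x) < 1e-9
```

This is why the flat look of a log plate is harmless: the data is all there, it just isn’t meant to be viewed raw.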

General workflow notes:

  1. Bring in log plate. Make sure it is tagged correctly. Log looks flat as you know. If you have a log viewer on, hit bypass and it’ll show the raw values.

  2. ColorManage node set to InputTransform. Settings should be “from Source” to AcesCG. You could pick another linear space, but this is the best option.

  3. Comp w your linear exr CG.

  4. Duplicate previous InputTransform and invert back to log.

  5. Export DPX. Good practice to T-click orig plate to match clip length/color tagging/timecode etc.

This should work for almost all comp work. Some comp tasks work better in log. Grain is one example. In that case, going from AcesCG to AcesCC will get you a log space. Apply the grain and then convert back to AcesCG. There should be no loss if the luts are 32 bit. Once you get used to it, it becomes second nature.
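The AcesCG ↔ AcesCC round trip works the same way; ACEScc is just another published log curve (Academy spec S-2014-003). A scalar sketch, slightly simplified (the real inverse also clamps at half-float max, omitted here):

```python
import math

def lin_to_acescc(x):
    """Encode a linear (ACEScg-style) value into the ACEScc log curve."""
    if x <= 0.0:
        return (math.log2(2 ** -16) + 9.72) / 17.52
    if x < 2 ** -15:
        return (math.log2(2 ** -16 + x * 0.5) + 9.72) / 17.52
    return (math.log2(x) + 9.72) / 17.52

def acescc_to_lin(y):
    """Decode ACEScc back to linear (half-float-max clamp omitted)."""
    if y <= (9.72 - 15.0) / 17.52:
        return (2 ** (y * 17.52 - 9.72) - 2 ** -16) * 2.0
    return 2 ** (y * 17.52 - 9.72)

print(lin_to_acescc(0.18))  # 18% grey sits around 0.4135 in ACEScc

# The round trip introduces no visible loss at float precision:
for x in (0.18, 1.0, 4.0):
    assert abs(acescc_to_lin(lin_to_acescc(x)) - x) < 1e-9
```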

An exception to this rule is when you are supplied graphics. To insert in a phone screen as an example. These will likely be supplied as sRGB or rec709. These are Display Referred color spaces. To get these into your linear space, use a ViewTransform lut (not inputTransform) and go from “source” to AcesCG or similar.
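The transfer-function half of that sRGB-to-linear conversion is the standard piecewise decode from IEC 61966-2-1, sketched below. Note that the full trip into AcesCG also needs a gamut/primaries conversion on top of this, which the ViewTransform handles for you:

```python
def srgb_to_linear(v):
    """Per-channel sRGB display decode (IEC 61966-2-1), input in 0-1."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

print(srgb_to_linear(0.5))  # a mid-grey UI value of 0.5 is ~0.214 in linear light
```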

If the exported DPX looks flat when reimported, it’s because the log tag isn’t being read correctly.

Add a “tagOnly” lut after import (to let Flame know it’s ACES_CC, etc.) to fix it. Then Flame will display it correctly in the viewport. Bypass will show you the raw data, so it’ll go back to that flat log look.

6 Likes

I know I will drive you crazy, but I need to confirm these points:

1-When I bring in my log plate, how can I make sure it is “tagged correctly”? You mean in the loading options I have to choose Color Management —> Tagged Colour Space —> Cameras —> ARRI —> LogC?

2-I bring in a color management node, set it to Input Transform, and inside I set Input Colour Space —> From Source —> ACEScg.
Do I leave Working Space as it is (Unknown)? …It appears flat with the viewer lut on Bypass.

3-I bring in my EXRs; do I have to tag them as… ACEScg?

4-OK for Graphics

5-I comp, using all these elements or none of them

6-Before I write or render, I bring back the same color management node and just hit Invert; it should swap Input Colour Space and Working Space… BUT as soon as I hit Invert the picture disappears and it shows “No result” in red…

Any idea?

Yeah. On import, in the “Color Management” area, select “Tag Only” and then pick the correct colorspace for the image. It won’t break anything if you don’t do this, but it does make life easier down the road. Tagging does two things: it tells CM nodes (and maybe a few others) what the incoming color space is, so you can use the Input setting “From Source” as opposed to specifying it for each node.

The other big thing it does is switch the monitor luts automatically, which will allow you to view multiple colorspaces correctly without having to manually switch monitor luts.

But again, tagging doesn’t change any pixel values, so even though I strongly recommend tagging everything, it won’t break anything if you don’t.

Bypass is going to turn off any monitor lut and is the same as if you were assuming the image was rec709. So all video images look good with Bypass on, but log will be low contrast and linear will be dark and high contrast.

There are only two menus on the input transform. Input should be “From Source” (assuming it’s tagged correctly) and Working should be “ACEScg”.

You don’t HAVE to, but it helps. That said, I don’t think the EXRs you have are ACEScg, because the one I looked at did not look good in ACEScg. This gets frustrating in Linear, because if the CG department is rendering files in an old manner you may have to jump through some hoops to get them to look good under ACEScg. I still recommend doing this, because if you can get an image looking good in ACEScg, you can convert it to ANYTHING.

It should not be doing that. I believe it’s because one of the inputs is set to “Unknown” and Flame can’t invert the color space if it doesn’t know what the color space is.

1 Like