Log in an EXR

Quick PSA:

Because this seems to come up all the time, I just want to write down why you should not put log-encoded images into a floating point codec like OpenEXR.

The difference between floating point and integer based codecs is basically:

Integer saves data as “full values”; for 10bit that is 1024 different values per color channel. We can say that in this case data gets saved in a linear fashion. So if we save something that's logarithmically distributed into an integer format, we are distributing the bits more efficiently: in linear, HALF of all possible values are taken up by the last stop of light.
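To see the “half the codes in the top stop” point concretely, here's a quick sketch in Python (a plain 10bit linear encoding, normalization ignored; stop boundaries are just powers of two):

```python
import math
from collections import Counter

# Count how many of the 1023 non-zero 10-bit code values land in each
# stop of light (each doubling of linear value is one stop).
stops = Counter(math.floor(math.log2(code)) for code in range(1, 1024))

print(stops[9])  # the brightest stop (codes 512-1023) eats half of all codes
print(stops[8])  # the stop below it gets a quarter
print(stops[0])  # the darkest stop gets a single code
```

So a linear integer encoding wastes most of its codes on the brightest stop, which is exactly what log encoding fixes.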

In most comp systems that work in floating point under the hood, you can have values above the max value (1023, also referred to as “1” as in “full”, in regards to whatever bit depth you are writing to). This can lead to clipped values, so logarithmic encoding squeezes all those linear input values back into 0-1 (or 0-1023).

Floating point on the other hand is built very differently: data is not saved linearly but is inherently logarithmic under the hood. The higher the values go, the less precise it gets (you can read a bunch of papers on half vs. float and how it works with mantissa bits and all that jazz… I'll keep it simple here).
It can save values that go way beyond 1, as again it's just like log encoding but smarter.
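If you want to see the mantissa story without reading the papers, NumPy's `np.spacing` reports the gap to the next representable value; for 16bit half floats it grows quickly with magnitude:

```python
import numpy as np

# The gap between adjacent representable half-float (16-bit) values
# grows with the magnitude of the value -- logarithmic precision.
for v in (1.0, 100.0, 10000.0):
    print(v, float(np.spacing(np.float16(v))))
```

Around 1.0 the steps are roughly a thousandth, around 10000 they are a full 8.0 apart.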

This can also give you funky issues: when you do a simulation in Houdini and are too far away from the origin, the precision of distance calculations goes down so much that you get awful results.
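The same effect is easy to show for 32bit floats, which is what sims typically use for positions (the unit is arbitrary; the point is how much absolute precision you lose far from origin):

```python
import numpy as np

# float32: smallest representable position change, near vs. far from origin
near = float(np.spacing(np.float32(1.0)))          # ~1.2e-7
far = float(np.spacing(np.float32(1_000_000.0)))   # 0.0625
print(near, far)
```

A million units out, anything smaller than ~0.06 units simply cannot be represented, hence the awful sim results.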

So what does all this mean for us?

This means that saving log (LogC, ACEScct, S-Log3 or whatever) data in an EXR is a bad idea; 12bit integer has more precision when it comes to logarithmic data than a 16bit float EXR.

But where is the proof?

I made a 32bit floating point ramp from 0-55, which matches the dynamic range of an Arri Alexa. I then converted that to LogC v3 (all in the 32bit float domain) and rendered it out as 10/12/16bit integer and 16/32bit float.

I then heavily graded the ramp to exaggerate the banding on each.
Do try this yourself in your software of choice; I have done this exact thing in Flame before as well, it's the same thing.

I do believe that if you work with graded SDR material, encoding stuff as EXRs is, while a bit dirty, not the worst choice ever, especially for a 10bit or even 8bit delivery. Personally I'd still linearize the Rec.709, but that's just me and my opinion that, for interoperability, putting anything other than linear data into an EXR is bad practice.


Ramp results (images):

10bit integer

12bit integer

16bit float

16bit integer

32bit float

Q: Ok cool, but what's the fix if we want to have log data saved in a frame-based format that offers lossless compression?

A: Convert it to linear and then back on import :slight_smile:
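As a sketch of that round trip (using the commonly published ALEXA LogC v3 EI800 constants; verify them against ARRI's own white paper before relying on this):

```python
import math

# Approximate ALEXA LogC v3 (EI800) parameters -- check ARRI's docs.
A, B, C, D = 5.555556, 0.052272, 0.247190, 0.385537
E, F, CUT = 5.367655, 0.092809, 0.010591

def lin_to_logc(x):
    """Scene-linear -> LogC (log segment above CUT, linear toe below)."""
    return C * math.log10(A * x + B) + D if x > CUT else E * x + F

def logc_to_lin(t):
    """LogC -> scene-linear (exact inverse of lin_to_logc)."""
    return (10 ** ((t - D) / C) - B) / A if t > E * CUT + F else (t - F) / E

# Convert to linear before writing the EXR, back to log after reading:
for x in (0.0, 0.005, 0.18, 1.0, 10.0, 55.0):
    assert abs(logc_to_lin(lin_to_logc(x)) - x) <= 1e-9 * max(x, 1.0)
```

The conversion is analytically invertible, so the only loss in the round trip is float rounding — and in the linear domain that rounding is exactly what half float is good at.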

13 Likes

Or use old-school .dpx instead. The grass was greener in those days.

2 Likes

RIP DPX-C

I would love for EXR to offer a 12bit integer option at some point; PIZ, DWAA etc. is all genius, I would just love to have it without the float.

DPX is not really that viable anymore at 5K and higher, the size is just too ridiculous.

2 Likes

Word. I resist the urge daily to stuff graded Rec.709 into those things. But it's getting harder and harder to resist when you see the size of the equivalent DPX…

It's heresy, but I've got my finger on the button. Now if only there were container-format-based publishing…

1 Like

Thank you for the explanation and examples Finn. I love a good gradient demo. :slight_smile:

1 Like

Thank you for the very helpful article.
I was using OpenEXR because of its size and the advantage of being able to pack color tags.
So far I hadn’t had any serious problems, but I should be careful in the future.

Thanks Finn. A timely discussion for me, as I'm waiting on delivery of a new internal SSD RAID to use as a framestore on my Mac.

I was under the impression that I would be better off transcoding 5K Red video files into EXRs on the framestore, working on those in my ACES project, then exporting those EXRs to replace the rushes at the end of a project, thus allowing me to delete terabytes of unwanted camera files and just keep what's needed for archive purposes in a smaller file size.

Have I totally missed the point here?
What's the recommended workflow to use such huge camera files in a project/timeline?

No, that's totally fine. I would just make sure to convert the Red stuff to ACES before you write EXRs, so that the EXRs are all in a common colorspace with a linear “gamma”. Just don't write log EXRs and you are golden.

I am personally a huge fan of DWAA/DWAB; it's like ProRes but for EXRs: very small, very good compression, and you can choose the compression strength (45 ≈ ProRes 4444 size, 150 ≈ JPEG size for things like previews and whatnot). I use DWAA for all commercials as it saves TONs of money.

What I usually do in Resolve is to collect and trim all used files and keep those on “warm” storage, and I put the camera files on LTO. You can actually trim R3D (and many other) files without re-encoding them.

1 Like

Would you do the “convert Red to ACES, then transcode to EXR” in Flame or Resolve?

Totally depends on your whole workflow and whether you need metadata etc.

Here is how I would do that kind of stuff:

Can't 100% remember with R3D, but I believe you can pick linear/ACEScg or ACES2065-1 in the raw settings during the initial conform. Either that, or leave it on IPP2/REDWideGamut/Log3G10 and use auto-convert to convert it all during import to whatever you want (probably ACEScg).

You then have all R3Ds loaded as ACEScg on your sources sequence.

I would then publish those out to DWAB EXRs (DWAA for Nuke, DWAB for Flame) with 25-frame handles, and reconform my sources sequence to those published open clips/EXRs.

Now you have all your sequences conformed to the published EXRs, meaning you could remove the R3Ds at this point.

The other workflow, involving Resolve, would be:

→ create sources sequence in Flame (using R3Ds)
→ export EDL/XML from Flame
→ conform sources sequence in Resolve
→ copy trimmed sources to a new folder somewhere (“all used clips”); can also add handles
→ render out EXRs
→ conform EXRs back in Flame

Benefits would be that you have all the metadata of the source in the EXRs, which is nice for Nuke; Resolve is generally faster and the whole color pipeline is a bit more sane.

Another benefit is that you can create the trimmed-sources folder, which is nice for archival: leaving out all the nonsense nobody needs and keeping stuff lean.

In general, just converting R3D to EXR will probably not save you much storage; the trick is to throw away all that is not needed (trimming/collecting). R3D is highly compressed (depending on the settings in camera, 3:1 to 12:1 afaik).

Mind explaining please?

These are EXR compression types.

I know that part. I'm curious why he's using DWAA for Nuke and DWAB for Flame.

DWAA = 32 scanlines per block
DWAB = 256 scanlines per block

DWAB is better for apps that read the whole frame at once, like Flame; Nuke on the other hand loads each frame scanline by scanline.

Similar to how PIZ is better for Flame and ZIPS (ZIP, 1 scanline) is better for Nuke.

Check compression methods on Wikipedia.
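To put rough numbers on why the block size matters, here's how many separately decodable chunks a 2160-line UHD frame splits into for each type:

```python
import math

HEIGHT = 2160  # scanlines in a UHD frame

# Chunks (independently decodable scanline blocks) per frame:
for name, block in [("zips (1 line)", 1), ("dwaa (32 lines)", 32), ("dwab (256 lines)", 256)]:
    print(name, math.ceil(HEIGHT / block))
```

Many small chunks suit a scanline reader like Nuke; a handful of big chunks suits a whole-frame reader like Flame.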

3 Likes

I’m too much of a moron to read wikipedia and intuit why the difference between them would affect performance in each program, so I appreciate being spoon fed. :pray:

3 Likes

I also appreciate the explanation. I had no idea.

1 Like

hungry feed me GIF

2 Likes