Log in an EXR

Quick PSA:

Because this seems to come up all the time, I just want to write down the reason why you should not put log-encoded images into a floating point codec like OpenEXR.

The difference between floating point and integer-based codecs is basically this:

Integer saves data as “full values”; for 10bit that is 1024 different values per color channel. We can say that in this case data gets saved in a linear fashion. So if we save something that's logarithmically distributed into an integer format, we are distributing the bits more efficiently: stored linearly, HALF of all possible values are taken up by the last stop of light.

In most comp systems that work in floating point under the hood, you can have values above the max value (1023, also referred to as “1” as in “full” with regard to whatever bitdepth you are writing to), which leads to clipped values on write; logarithmic encoding squeezes all those linear input values back into 0-1 or 0-1023.

Floating point on the other hand is built very differently: data is not saved linearly but inherently logarithmically under the hood, so the higher the values go, the less precise it gets (you can read a bunch of papers on half vs float and how it works with mantissa bits and all that jazz… I'll keep it simple here).
It can save values that go way beyond 1, as again it's just like log encoding, but smarter.
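To see that in numbers, here is a minimal numpy sketch (illustrative only) printing the gap between neighbouring representable 16bit float values; note how the gap doubles every time the value doubles:

```python
import numpy as np

# Gap between a half float and the next representable half float.
# The gap doubles every time the value doubles: float is "log-like" by design.
for v in [0.125, 0.25, 0.5, 1.0, 2.0, 4.0]:
    print(f"half-float step at {v}: {np.spacing(np.float16(v))}")
```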

This can also give you funky issues: when you do a simulation in Houdini and are too far away from the origin, the precision of distance calculations drops so much that you get awful results.

So what does this all mean for us?

This means that saving log (LogC, ACEScct, S-Log3 or whatever) data in an EXR is a bad idea; 12bit integer has more precision when it comes to logarithmic data than a 16bit float EXR.
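To put rough numbers on that: a 12bit integer file has a constant step of 1/4095 ≈ 0.00024 across the whole 0-1 range, while a 16bit float (10 mantissa bits) steps by 2^-11 ≈ 0.00049 everywhere between 0.5 and 1.0. That is roughly half the precision, exactly where a log curve parks its brightest stops.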

But where is the proof?

I made a 32bit floating point ramp from 0-55, which matches the dynamic range of an ARRI Alexa. I then converted that to LogCv3 (all in the 32bit float domain) and rendered it out as 10/12/16bit integer and 16/32bit float.

I then heavily graded the ramp to exaggerate the banding on each.
Do try this yourself in your software of choice; I have done this exact thing in Flame before as well, and it's the same result.
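For reference, here is a rough numpy version of the same test (my sketch, not the exact renders shown below; the LogC3 EI800 constants are from ARRI's public whitepaper, double-check them before relying on this). It counts how many distinct code values survive in the top of the log signal:

```python
import numpy as np

# ARRI LogC3 (EI800) encode. Constants from ARRI's public whitepaper --
# double-check them before relying on this sketch.
def lin_to_logc3(x):
    cut, a, b = 0.010591, 5.555556, 0.052272
    c, d, e, f = 0.247190, 0.385537, 5.367655, 0.092809
    return np.where(x > cut, c * np.log10(a * x + b) + d, e * x + f)

# 0-55 linear ramp, roughly the Alexa's dynamic range, encoded to LogC.
ramp = np.linspace(0.0, 55.0, 1_000_000)
log_ramp = lin_to_logc3(ramp)

# Quantize the log signal the way each container would store it, then count
# how many distinct codes are left in the brightest region (log > 0.5).
top = log_ramp[log_ramp > 0.5]
print("12bit int codes above 0.5 :", len(np.unique(np.round(top * 4095))))
print("16bit half codes above 0.5:", len(np.unique(top.astype(np.float16))))
```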

I do believe that if you work with graded SDR material, encoding it as EXRs is, while a bit dirty, not the worst choice ever, especially for a 10bit or even 8bit delivery. Personally I'd still linearize the rec709, but that's just me and my opinion that for interoperability, putting anything other than linear data into an EXR is bad practice.


[Screenshots of the graded ramp, one per format: 10bit integer, 12bit integer, 16bit float, 16bit integer, 32bit float]

Q: Ok cool, but what's the fix if we want log data saved in a frame-based format that offers lossless compression?

A: Convert it to linear and then back on import :slight_smile:
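A quick numpy sanity check of why that round trip helps (same hedged LogC3 constants as in the sketch above; the inverse is my own rearrangement of the published formula):

```python
import numpy as np

# LogC3 (EI800) encode/decode pair -- constants from ARRI's whitepaper,
# double-check before relying on this.
CUT, A, B = 0.010591, 5.555556, 0.052272
C, D, E, F = 0.247190, 0.385537, 5.367655, 0.092809

def lin_to_logc3(x):
    return np.where(x > CUT, C * np.log10(A * x + B) + D, E * x + F)

def logc3_to_lin(t):
    return np.where(t > E * CUT + F, (10 ** ((t - D) / C) - B) / A, (t - F) / E)

log_vals = np.linspace(0.1, 1.0, 100_000)

# Option 1: store the log signal in half float directly.
direct = log_vals.astype(np.float16).astype(np.float64)

# Option 2: linearize, store the linear signal in half float, re-encode on import.
roundtrip = lin_to_logc3(logc3_to_lin(log_vals).astype(np.float16).astype(np.float64))

print("max log-domain error, log as half:", np.abs(direct - log_vals).max())
print("max log-domain error, via linear :", np.abs(roundtrip - log_vals).max())
```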

16 Likes

Or use old-school .dpx instead. The grass was greener in those days.

2 Likes

RIP DPX-C

I would love for EXR to offer a 12bit integer option at some point; PIZ, DWAA etc. are all genius, I would just love to have them without the float.

DPX is not really viable anymore at 5K and higher; the size is just too ridiculous.

2 Likes

Word. I resist the urge daily to stuff graded rec709 into those things. But it’s getting harder and harder to resist when you see the size of the equivalent dpx…

It’s heresy, but I’ve got my finger on the button. Now if only there were container-format-based publishing…

1 Like

Thank you for the explanation and examples Finn. I love a good gradient demo. :slight_smile:

1 Like

Thank you for the very helpful article.
I was using OpenEXR because of its size and the advantage of being able to pack color tags.
So far I haven’t had any serious problems, but I should be careful in the future.

Thanks Finn, a timely discussion for me, as I’m waiting on delivery of a new internal SSD RAID to use as a framestore on my Mac.

I was under the impression that I would be better off transcoding 5K RED video files into EXRs on the framestore, working on those in my ACES project, then exporting those EXRs to replace the rushes at the end of a project, thus allowing me to delete terabytes of unwanted camera files and just keep what’s needed for archive purposes at a smaller file size.

Have I totally missed the point here?
What’s the recommended workflow for using such huge camera files in a project/timeline?

No, that’s totally fine. I would just make sure to convert the RED stuff to ACES before you write EXRs, so that the EXRs are all in a common colorspace with a linear “gamma”. Just don’t write log EXRs and you are golden.

I am personally a huge fan of DWAA/DWAB; it’s like ProRes but for EXRs: very small, very good compression, and you can choose the compression strength (45 ≈ ProRes 444, 150 ≈ JPEG size for things like previews and whatnot). I use DWAA for all commercials as it saves TONs of money.
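If you ever need to write those outside of Flame/Nuke, here is a hedged OpenImageIO sketch (filenames made up; the quality-after-a-colon syntax, e.g. dwaa:45, is how OIIO exposes the DWA compression level as far as I know):

```python
import OpenImageIO as oiio

# Rewrite an EXR with DWAA compression at level 45 (hypothetical filenames).
buf = oiio.ImageBuf("plate.0001.exr")
buf.specmod().attribute("compression", "dwaa:45")  # 45 ~ grading quality, 150 ~ preview
buf.write("plate_dwaa.0001.exr")
```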

What I usually do in Resolve is to collect and trim all used files and keep those on “warm” storage, and I put the camera files on LTO. You can actually trim R3D (and many other) files without re-encoding them.

2 Likes

Would you do the “convert RED to ACES, then transcode to EXR” step in Flame or Resolve?

Totally depends on your whole workflow and whether you need metadata etc.

Here is how I would do that kind of stuff:

Can’t 100% remember with R3D, but I believe you can pick linear/ACEScg or ACES2065-1 in the raw settings during the initial conform. Either that, or leave it on IPP2/REDWideGamut/REDLog3G10 and use auto-convert to convert it all during import to whatever you want (probably ACEScg).

You then have all R3Ds loaded as ACEScg on your sources sequence.

I would then publish those out to DWAB EXRs (DWAA for Nuke, DWAB for Flame) with 25f handles and reconform my sources sequence to those published OpenClips/EXRs.

Now you have all your sequences conformed to the published EXRs, meaning you could remove the R3Ds at this point.

The other workflow involving resolve would be:

→ create sources sequence in Flame (using R3Ds)
→ export EDL/XML from Flame
→ conform sources sequence in Resolve
→ copy trimmed sources to a new folder somewhere (“all used clips”); can also add handles
→ render out EXRs
→ conform EXRs back in Flame

Benefits would be that you have all the metadata of the source in the EXRs, which is nice for Nuke; Resolve is generally faster, and the whole color pipeline is a bit more sane.

Another benefit is that you can create the trimmed sources folder, which is nice for archival: leaving out all the nonsense nobody needs and keeping stuff lean.

In general, just converting R3D to EXR will probably not save you much storage; the trick is to throw away all that is not needed (trimming/collecting). R3D is highly compressed (depending on the settings in camera, 3:1 to 12:1 AFAIK).

Mind explaining please?

These are EXR compression types.

I know that part. I’m curious why he’s using DWAA for Nuke and DWAB for Flame.

dwaa = 32 scanlines per block
dwab = 256 scanlines per block

DWAB is better for apps that read the whole frame at once, like Flame; Nuke on the other hand loads each frame scanline by scanline.

Similar to how PIZ is better for Flame and ZIPS (ZIP, 1 scanline per block) is better for Nuke.

Check the compression methods on Wikipedia.
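If you want to check which compression a given EXR actually carries, a small OpenImageIO sketch (hypothetical filename):

```python
import OpenImageIO as oiio

# Read the compression attribute from an EXR header.
img = oiio.ImageInput.open("frame.0001.exr")
print(img.spec().getattribute("compression"))  # e.g. "dwab" or "piz"
img.close()
```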

3 Likes

I’m too much of a moron to read Wikipedia and intuit why the difference between them would affect performance in each program, so I appreciate being spoon-fed. :pray:

4 Likes

I also appreciate the explanation. I had no idea.

1 Like


Sorry to revive an old topic but I’ve been thinking about this lately in regards to Flame and wanted to see if anyone could clarify a question that’s been nagging at me.

Basically, I’m wondering if Flame’s Uncompressed / Raw framestore format will have these same issues, or if it’s specific to EXRs? If any time I use an Action node Flame takes me to 16fp, should I basically avoid using Action in anything but linear if I want to avoid floating point issues?

Yes, same issue.

16bit float is 16bit float. Resolve and Nuke do all processing in 32bit float for a good reason; lots of Flame nodes are lagging behind.

It mostly matters if your stuff hits HDR mastering, as banding could be a potential issue, especially with LogC4, which is 13bit integer … they squeezed in one more stop if it’s ARRIRAW (the 18bit ADC linear signal saves as 13bit log ARRIRAW or 12bit ProRes).

If you work on graded files this is all a non-issue, as there is enough precision to hold the SDR image without visible degradation; the monitor is only 10bit max anyhow, and realistically 8bit for the consumers.

It’s still good practice to go rec709 → linear (not ACES/tonemapped) and back; that’s how Nuke does it by default too.
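For completeness, the curve pair in question is the plain BT.709 one (as far as I know this is what Nuke’s built-in rec709 colorspace implements; a sketch, not an official reference):

```python
import numpy as np

# BT.709 OETF and its inverse: the "rec709 <-> linear" pair referred to above
# (the camera curve, not BT.1886 display gamma).
def rec709_to_lin(v):
    v = np.asarray(v, dtype=np.float64)
    return np.where(v < 0.081, v / 4.5, ((v + 0.099) / 1.099) ** (1 / 0.45))

def lin_to_rec709(x):
    x = np.asarray(x, dtype=np.float64)
    return np.where(x < 0.018, 4.5 * x, 1.099 * x ** 0.45 - 0.099)

# Round trip is lossless to within float noise:
v = np.linspace(0.0, 1.0, 11)
print(np.abs(lin_to_rec709(rec709_to_lin(v)) - v).max())
```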

2 Likes

Yes, this came up for me the other day as well. Not with cached media, but writing out OpenClips in EXR.

What is the most typical combination of the unmanaged workflow in Flame? Log material in, tagged, OpenClip in EXR out. Right?

So if you had LogC or Log3G10 files tagged as such and then made an OpenClip with EXR 16fp, that is the combination to worry about in this scenario.

Three possible options:

  • Write your OpenClip in DPX 16/packed instead
  • Instead of tagging your source files, put a ColourMgmt node after the source files and convert to ACEScg. Then the rest is in linear space and there is no issue.
  • Use EXR 32fp?

That is one of the side effects of the way Flame manages color spaces vs. most other apps (Nuke, Resolve): data remains original and is only tagged. There are pros and cons; this is one of the cons. I’ve come to appreciate Flame’s way, but with caveats.

All that said, there is another question that’s bothering me. EXR files are intended to be used with linear data (it is even stated here: Scene-Linear Image Representation). And there are some apps that implicitly linearize data on write. But not all of them.

I just did a test with Resolve, rendering out an EXR sequence from an Alexa35 LogC clip. Comparing the original MXF file and the EXR file without any view transforms, they’re identical. So Resolve is not forcing EXR files to linear gamma.

For the apps that don’t force linearization, should they? Or should they warn you if the input is non-linear?

The documentation says that the OpenEXR standard and its implementation don’t enforce linear data input.

The other question is: if you set up your project as 16fp to start with, then I assume all the TL-FX and color space conversions happen at that precision as well. Same if you keep any Batch nodes in 16fp. So rendering out in EXR 16fp isn’t any worse; you already potentially lost precision earlier in the game.
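A tiny numpy illustration of that “lost earlier” point: at 16fp, a small adjustment to a bright value can fall below the step size and vanish before anything gets rendered:

```python
import numpy as np

half = np.float16

# Above 1.0 the half-float steps are coarse: this tweak disappears entirely.
print(half(2.0) + half(0.0005))              # -> 2.0, the change is lost
print(np.float32(2.0) + np.float32(0.0005))  # -> 2.0005 as expected

# Near mid grey the steps are finer, so the same tweak survives.
print(half(0.18) + half(0.0005))             # -> ~0.1807
```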

Is the conclusion that, if you are concerned about these artifacts, you have to keep your entire project in 32fp as well: every node in Batch, the timeline, etc.? Is that what we all are doing?