ACES 2065 - ACEScg

Hi Guys,

I’ve been reading in some papers that the ACES flavor I should use when exporting my material to Nuke is ACES 2065, because ACES 2065 acts like a transfer format between software packages. So far, though, I’ve kept my workflow entirely in ACEScg: we conform and distribute media in ACEScg, and it’s been running well for us. We can’t see any difference between ACES 2065 and ACEScg.

We don’t have Nuke in our pipeline, only AE and Flame. Does anyone know if it’s a strict rule to use 2065 to transfer media between different platforms?

Tks,

First of all, let’s talk about the differences between “ACES AP0” (aka “ACES 2065”) and ACES AP1 (aka ACEScg). Both color spaces have a linear transfer curve but different primaries. ACES AP0 was designed as a connection color space, with primaries wide enough to cover the entire human visible spectrum, while ACES AP1 has narrower primaries that are more manageable for comp and grading. It’s not a good idea to do your work in the widest possible color space (AP0), because you can generate colors outside the visible spectrum, and when your color management system has to deal with them you’ll get artifacts. At the same time, it is good practice to use the widest possible color space for storage and exchange: it gives the color management system headroom for color space transforms without introducing rounding errors.
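To make the “same linear curve, different primaries” point concrete, here is a minimal numpy sketch using the AP1↔AP0 conversion matrix as published in the ACES reference CTL (treat the values as illustrative, not as production code):

```python
import numpy as np

# AP1 (ACEScg) -> AP0 (ACES 2065-1) matrix, from the ACES reference CTL
AP1_TO_AP0 = np.array([
    [ 0.6954522414,  0.1406786965,  0.1638690622],
    [ 0.0447945634,  0.8596711185,  0.0955343182],
    [-0.0055258826,  0.0040252103,  1.0015006723],
])

# AP0 -> AP1 is simply the inverse
AP0_TO_AP1 = np.linalg.inv(AP1_TO_AP0)

# A saturated ACEScg red: note it picks up a small negative blue component
# in AP0, which stays representable because both spaces are linear float --
# the matrix itself never clips anything.
rgb_ap1 = np.array([1.0, 0.0, 0.0])
rgb_ap0 = AP1_TO_AP0 @ rgb_ap1
roundtrip = AP0_TO_AP1 @ rgb_ap0
print(rgb_ap0, roundtrip)  # roundtrip matches the input to float precision
```

Since both transforms are plain 3×3 matrices on linear data, a 2065 ↔ ACEScg roundtrip is lossless up to float precision, which is why the two look interchangeable in practice.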

In short: using ACES AP0 for exchange/archiving and ACES AP1 for work will save you from potential errors, which are rare, but do sometimes happen.

Hope this helps.

If you imagine each color space as a car of a different shape, then ACES 2065 (aka “Big ACES”) is a garage large enough to turn any car around, no matter its size or shape. That’s all it’s for.

Maybe a train roundhouse is a better metaphor, but anyway, it’s a bigass space whose only purpose is to store stuff, or to give you enough room that you don’t chip the paint when turning the car around. My metaphor is horrible. Haha.

Anyway, there’s nothing wrong with using ACEScg as an exchange space; it, like every other wide-gamut space (Rec.2020, ALEXA Wide Gamut, etc.), is big enough that you’re not losing color in any real way.

I frankly prefer it in the same way I’d rather have food delivered than I would one of those cook-your-own-dinner kits. With ACEScg, I can get right to work. It’s a great format.

garage analogy is great.

Amazing Val…got it…thank you

Thank you Andy…quite clear for me now…

You’re welcome

Here are some measurements I took using the ACES-provided synthetic test image, simulating a source → ACES → source roundtrip.

Keep in mind the input values of that test chart are completely bonkers, so it really shows the limits of 16-bit encoding.

Pretty interesting stuff; keep in mind this is Nuke’s OCIO implementation specifically.
Basically you want to test your full pipeline for any set of plates. That’s what we do when delivering “version zeros” to the client: dragging the plate through the whole pipe and back, so there are no surprises after work has been done.
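If you want to automate that kind of roundtrip check, something along these lines works (a plain-numpy sketch; the `roundtrip_report` helper is my own illustration, not anyone’s actual pipeline tool — here the “roundtrip” is just a trip through 16-bit half floats, which is what an EXR write/read does to your values):

```python
import numpy as np

def roundtrip_report(original, roundtripped, rel_floor=1e-4):
    """Summarize the damage done by a source -> working space -> source roundtrip."""
    original = np.asarray(original, dtype=np.float64)
    roundtripped = np.asarray(roundtripped, dtype=np.float64)
    diff = np.abs(roundtripped - original)
    rel = diff / np.maximum(np.abs(original), rel_floor)  # avoid divide-by-zero near black
    return {"max_abs": float(diff.max()), "max_rel": float(rel.max())}

# Demo: linear plate values up to bonkers highlights, squeezed through half float.
plate = np.linspace(0.1, 100.0, 10001)
report = roundtrip_report(plate, plate.astype(np.float16).astype(np.float64))
print(report)  # max_rel sits around 2**-11, the half-float precision limit
```

Run the same comparison on a real plate before and after your pipe, and any clipping or quantization shows up as an outlier instead of a surprise after the work is done.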

Hi gang,

My first one on this amazing forum.

And what about ACEScct?
We are using ACEScg for our in-house transfers and work, but I have too many issues with the Resize node in my HD timeline.
For now, I use a ‘pre’ Colour Management (ACEScg to ACEScct) before the resize to HD.

I am wondering if I should rethink my approach: keep my whole pipeline in ACEScct for the timeline, grade and beauty work, and transfer to ACEScg only when it makes sense in Batch (blur/comp/CG…).

Any thoughts?

Franck

The job I just finished was done exactly this way: timeline in ACEScct, Batch/CG in ACEScg. I usually prefer log, so it worked, but I’m not a fan of working in ACES. This spot had a very extreme “look”, and the gamut kept breaking the more extreme the grades/comps got. Sorry, sidetracked. But when given the opportunity I like to have the timeline in log and comp “mostly” in linear of some sort.

Oh yeah. Welcome to the forum, Franck.

I would say that’s a valid way of working; just keep in mind that ACEScct can’t hold all the information: it clips values above (I think) around 100 linear. You can check that by making a white solid with a linear value of 150, running Colour Management from ACEScg to ACEScct, and looking at the output: it’s going to be the same as linear 400.
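For reference, the ACEScct curve itself is easy to sanity-check in a few lines. Here is a sketch of the encode/decode per the published ACEScct formula (constants from the Academy spec S-2016-001; a toy implementation, not Flame’s). It shows that code value 1.0 corresponds to roughly linear 222.9, so anything above that is lost once code values are clamped to the 0–1 range:

```python
import math

def lin_to_acescct(x):
    """ACES linear -> ACEScct code value (constants per S-2016-001)."""
    if x <= 0.0078125:
        return 10.5402377416545 * x + 0.0729055341958355  # linear toe segment
    return (math.log2(x) + 9.72) / 17.52                  # pure-log segment

def acescct_to_lin(y):
    """ACEScct code value -> ACES linear (inverse of the above)."""
    if y <= 0.155251141552511:  # lin_to_acescct(0.0078125)
        return (y - 0.0729055341958355) / 10.5402377416545
    return 2.0 ** (y * 17.52 - 9.72)

print(acescct_to_lin(1.0))   # ~222.86: the highest linear value at code 1.0
print(lin_to_acescct(0.18))  # ~0.4136: 18% grey
```

In float the log segment keeps going above code 1.0, so whether you actually clip depends on where in the pipeline values get clamped to 0–1 (e.g. an integer cache or a DPX write).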

Then there is also the issue of not being able to use EXR; it’s very bad with log. (It’s a long explanation, but 16-bit float EXR is worse than a 12-bit DPX for log. You can actually see this if you encode a ramp in log, write the same thing to EXR and DPX, and then gamma down to reveal the banding/stepping.)

DPX only stores information from 0–1, though, so you could potentially lose negative values (out-of-AP1-gamut colors), although I don’t know exactly whether DPX stores those; I want to find out, though. Just one potential pitfall, depending on source footage.

I prefer EXR so much that I would not trade it for the convenience of not having to use CM nodes in Batch. Lossless compression works great, as does DWAA; it’s as light as an 8-bit JPEG sequence but keeps linear values, which is insanely useful, for example as Maya backplates or quick proxy playback.

Pretty sure that internally, even when it’s in a log state, Flame handles it in 32-bit float (at least that’s how it is in Nuke), so going back and forth as you did in Batch should be lossless.

I would advise testing any workflow using the synthetic test image from the Academy; it can show the limits of 16-bit EXRs, as you can see in my image above.

You can download that one here:

They even have references of correctly transformed test images; this is made for software developers implementing ACES to check their work.

What would be interesting, however, is this: if you keep linear EXR files and then auto-convert them to ACEScct on Flame import, how are they then cached?

There is also nothing telling us we can’t use ACEScct as a working space in Flame, although I don’t know enough about the inner maths of Flame’s nodes… All in all a very interesting topic, as I like log a lot.

One thing that has always baffled me is how even a purely linear tool like Nuke doesn’t apply a lin→log conversion before resizing an image. Ideally Flame’s Resize would be colorspace-aware (like the Image node). Sadly, lots of tools in Flame aren’t colorspace-aware and are made to work on display-referred material.

I’d love to hear the long explanation of EXR log vs DPX log; I had no idea. Maybe a new thread, but well worth it. I don’t usually work with values of more than 10 or so, which is still too high for me; nevertheless I’d love to know more about this format discrepancy. Thanks @finnjaeger

Just for context: you can get values as high as 65 from an ALEXA, and it’s easy to get negative values on chroma backgrounds when converting a large camera gamut to AP1, and even AP0. For CG stuff like an HDRI it’s easy to go up to 10000.

Here is an ACES 2065 → ACEScct → ACES 2065 roundtrip in Flame. The right side is after the transform, so there is some stuff being cut off; the things that appear on the left side are high negative values in the test chart. Everything is set to 32-bit, as far as I can tell.

OK, so on to the whole log-inside-an-EXR thing. As far as I understand it, half float already stores data internally in a kind of log fashion, but it expects linear values.
The way the 16 bits are distributed means it’s more precise the closer you get to 0, with 10 bits per stop of exposure:
The first bit is a sign bit, saying “positive or negative number”.
The next 5 bits store the exponent, and the remaining 10 bits the fraction (mantissa).
There is also an exponent bias of 15 to reach zero (it’s a bit weird; see Wikipedia on half floats).

Between 0 and 1 input (which is the ACEScct range) you end up with a worst-case precision of just 11 bits, if I’m not wrong.
The maximum storable number is 65504 and the smallest positive one is 0.000000059604645.
More info here: https://devblogs.microsoft.com/dotnet/introducing-the-half-type/

A DPX, which stores data as linear integers, is much simpler: 10-bit has 2^10 possible states, 12-bit 2^12, and so on. So for 10-bit, code 1023 is the full 100% signal.
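You can check the half-vs-DPX claim numerically. A small numpy sketch (my own illustration): compare the spacing between adjacent half-float values near the top of a 0–1 log signal with the uniform step of a 12-bit integer format:

```python
import numpy as np

# Half floats in [0.5, 1.0) all share one exponent, so adjacent
# representable values are 2**-11 apart.
half_step = float(np.spacing(np.float16(0.75)))

# A 12-bit integer format spreads 4096 uniform codes over 0-1.
dpx12_step = 1.0 / 4095.0

print(half_step, dpx12_step)
# Near code 1.0, half-float steps are about twice as coarse as 12-bit DPX --
# and the top of the 0-1 range is exactly where a log signal keeps its highlights.
```

So for log material the integer format really can band less than the float one; for linear material the comparison flips, because half float spends its precision near zero where linear signals live.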

PLEASE correct me if I am wrong here; I would really like to know if I understood this correctly.

I seriously doubt you’re wrong. That was a great explanation, with pictures. I deal only in TVCs, and I’m constantly being told to work in ACEScg. For non-HDR content my question is: why? I can take ALEXA LogC with an input transform to linear Rec.709, let CG do their thing and me do mine, and it comes out looking great. Does comping and rendering in ACES give us TVC people things that make the Rec.709 picture at the end look better? I would assume not, but honestly I have no idea. Thanks again for your time and thought on this.

There is a bit more to it, but you are also not wrong. The first thing is to do color management at all; most people just throw in a linear/AlexaWideGamut plate, render linear/709 CG on top, and then use color correction in comp to put it in the right place. It goes all the way back to how textures are painted and how lookdev is being done; things can be insanely skewed.

If you want to use linear/709 as your working space, that’s valid, and you can still use the great ACES tone-mapping view transforms to your advantage.

If you are using EXRs for exchange in linear/709, you are also not losing much color information, if any; you can expand it back to ALEXA Wide Gamut for grading with almost no loss. The only place you need to be careful is when you use log in between in your comp: negative values, which you will have lots of, can lead to loss (that’s an edge case, but it can still happen with very saturated things and noise).

But there is a bit more to the ACES advantage if you look into lookdev and lighting. There are a bunch of great articles outlining the effects on global illumination of using wider-gamut textures, and it can be amazing for neon lights and a bunch of other things that come with an extended gamut, even if your textures are still sRGB/709.

Basically, you can’t recreate anything outside of 709 in CG when you work this way, but your plate can contain it. If you then pull the image around in grading, it can fall apart quickly: how would you match a 3D lightsaber to a real one that’s already out of the 709 gamut when filmed? Can you fix it in comp? Yes, you can, if you are aware of how to handle negative numbers…

Basically, proper lighting and texturing in a gamut close to your plate’s gamut will make compositing a lot easier.

Added to that, there’s been some great development on the lighting side that makes the whole process a bunch easier. Most renderers have some kind of OCIO or ACES support (I think even Octane now). I mainly use V-Ray, and its implementation was bad but got better with V5. All in all, the only missing pieces are Substance Painter and Photoshop; once those are ready there is really nothing in the way of just doing everything in ACEScg.

On top of that, there’s asset reusability: say you need to deliver an HDR commercial of a previous 709 show in a few years. You want to just pull out the ACES master, do a quick trim grade, and be done.

You’ll have trouble with resizes if you aren’t following this…

I believe this is the ideal path. Log is great for all the timeline stuff: grades, titles, repos.

My simple version of @finnjaeger’s answer is that EXR 16-bit rounding increases as the values move away from 0. Trying to make a 16-bit image band up near black is almost impossible.

I’m still a bit shocked that you can get less banding in a 10-bit DPX, but no one’s ever complained to me one way or the other.

Yeah, very basically, inside an EXR/half float you have:

10 bits from 0–1
10 bits from 1–2
10 bits from 2–4
10 bits from 4–8

Full float has 23 bits per exponent step…

As far as I understand, the distribution of values inside these exponent “intervals” is even/linear.
So if you put in a log signal from 0–1, you’ll end up with 10-bit precision. If that’s enough for you, there really is nothing wrong with using EXR; if it works for you, it’s all well and nice.
