I find it pretty hilarious that we talk about “encompassing more”, “more gamut”, something something “not clipped”.
Think we first need to understand what this “gamut” even means and how the math works out.
In general the primaries just tell us “if you have full green 0/1/0 in RGB, that’s here in XYZ”, XYZ then directly referring to “the human observer” (which is all kinda BS in a way anyhow, but whatever).
So let’s imagine the full-green color of ACES 2065-1.
Now we define a “smaller” gamut like Rec.709: can we make a color value relative to Rec.709 point at the same XYZ coordinate that the full 0/1/0 AP0 primary sits at?
What is “more green” than full Rec.709 green? Hell, we just subtract red and blue - negative values, baby.
So yes - guess what - we can actually describe any color in XYZ using any set of primaries we want, no problem. It’s all “relative”; we just scale the numbers relative to these 3 arbitrary primaries.
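A quick numpy sketch of that, using the published AP0 → XYZ and XYZ → Rec.709 matrices (the D60/D65 white point adaptation between the two is deliberately skipped here to keep it short):

```python
import numpy as np

# RGB -> XYZ for ACES AP0 (ACES 2065-1) and XYZ -> RGB for Rec.709.
AP0_TO_XYZ = np.array([
    [0.9525523959, 0.0000000000,  0.0000936786],
    [0.3439664498, 0.7281660966, -0.0721325464],
    [0.0000000000, 0.0000000000,  1.0088251844],
])
XYZ_TO_REC709 = np.array([
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
])

ap0_green = np.array([0.0, 1.0, 0.0])   # full green, ACES 2065-1
xyz       = AP0_TO_XYZ @ ap0_green      # where it sits for "the observer"
rec709    = XYZ_TO_REC709 @ xyz         # the same color, relative to Rec.709

print(rec709)  # ~[-1.12, 1.37, -0.15]: negative red and blue, as expected
```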
So by that logic, why not use XYZ directly? (hint: DCPs)
Regardless of these theoretical things, I think I know why shows force people to use AP0 even though it’s not recommended:
- Lack of understanding: “it’s bigger so it’s better”, see the above.
- Wrong scope of reference when talking about QC, and I think that’s the biggest one here.
Let’s say you ingest your ARRIRAW directly to AP0. What happens here? The native camera gamut values are being plotted in XYZ and then described using AP0. Cool, so now these AP0 values are being treated as the “original media” that we compare the future compositing output to.
These can already contain negative values, describing colors outside of AP0 that the camera might still be able to encode.
Just to sidetrack: a raw doesn’t have any “original picture”, it’s a data soup.
Now we take this AP0, let’s say 16-bit float EXR, and throw it into Nuke with an ACEScg working space. What happens →
- 32-bit working space, 16-bit source: the data is promoted to 32-bit float and the AP0 → AP1 conversion happens at 32-bit precision, so now you have 32-bit float data internally, relative to AP1/ACEScg, and most of those values no longer land exactly on 16-bit representable numbers.
- Now we write out a 16-bit ACEScg precomp: our 32-bit converted values are being rounded to 16-bit precision again.
- You load that 16-bit ACEScg file back in and continue compositing. Notice we now have a rounding error against our original plate.
- You now export an AP0 EXR at the end of your comp, but you still have a rounding error against the original 16-bit float AP0 data.
This would easily lead someone to believe “we have to do everything in AP0”, because they don’t recognize the difference as a rounding error.
However - if you take the AP0 data and turn it back to camera native, you will also already have a rounding error there. If you had just gone camera → AP1 directly and left it there, you would end up with the same result.
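Here’s a minimal numpy sketch of that round trip, with np.float16 standing in for EXR half storage and random values standing in for the plate (matrices are the published AP0 ↔ AP1 pair):

```python
import numpy as np

# AP0 <-> AP1 (ACEScg) conversion matrices from the ACES spec.
AP0_TO_AP1 = np.array([
    [ 1.4514393161, -0.2365107469, -0.2149285693],
    [-0.0765537734,  1.1762296998, -0.0996759264],
    [ 0.0083161484, -0.0060324498,  0.9977163014],
], dtype=np.float32)
AP1_TO_AP0 = np.array([
    [ 0.6954522414, 0.1406786965, 0.1638690622],
    [ 0.0447945634, 0.8596711185, 0.0955343182],
    [-0.0055258826, 0.0040252103, 1.0015006723],
], dtype=np.float32)

rng = np.random.default_rng(0)
# Stand-in for the half-float AP0 values stored in the "original media" EXR.
plate_ap0 = rng.uniform(0.0, 4.0, size=(100_000, 3)).astype(np.float16)

working_ap1   = plate_ap0.astype(np.float32) @ AP0_TO_AP1.T   # 32-bit working space
precomp_ap1   = working_ap1.astype(np.float16)                # 16-bit ACEScg precomp
delivered_ap0 = (precomp_ap1.astype(np.float32) @ AP1_TO_AP0.T).astype(np.float16)

err = np.abs(delivered_ap0.astype(np.float32) - plate_ap0.astype(np.float32))
print("max abs error vs the original plate:", float(err.max()))  # nonzero
```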
So it’s a pixel-fudger issue in my opinion, due to lack of understanding.
If you really think that nobody should touch the “camera encoding”, then you should use your camera’s native gamut instead. Why not use linear/ARRI Wide Gamut 4 as your exchange format in the first place? (In fact I worked on some shows that did that, and I heard it’s becoming more and more popular.)
In my personal opinion 2065-1 should be erased from existence. But that’s just me, I don’t think we need it; in my facility EXRs are always ACEScg (I don’t do longform anymore, but I’ve been there, done that, got the QC reports).
Next thing would be to dive into “why” the size of your working space gamut matters, because that’s pretty interesting as well (energy ratios and some fun stuff to be found there).
Some fun things to think about with the math. If you consider things like energy conservation in 3D renders, stuff becomes a bit more complex, but here are just two examples I can think of for why the math matters and why we “need” a narrower-shaped working space:
- When you interpolate between two colors (for example, crossfading, soft edges, glows, blur), color values in a massive gamut like AP0 can pass through regions of “nonphysical” or “imaginary” color, resulting in weird hues, saturation shifts, or banding.
- Poor color grading response
If you use a huge gamut, simple grading operations (like increasing saturation, pushing midtones warm, etc.) behave very unintuitively. A slight change can cause huge hue shifts or saturation changes because the space is “too loose”: the axes don’t match human perception tightly enough. (See the sketch after this list.)
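A minimal sketch of that basis dependence, assuming a made-up per-channel “midtone push” op. (A plain linear mix would actually come out the same in any linear space; the differences show up as soon as the op is nonlinear per channel, which covers most grading ops and channel-wise multiplies in renders.)

```python
import numpy as np

# AP0 <-> AP1 (ACEScg) conversion matrices from the ACES spec.
AP0_TO_AP1 = np.array([
    [ 1.4514393161, -0.2365107469, -0.2149285693],
    [-0.0765537734,  1.1762296998, -0.0996759264],
    [ 0.0083161484, -0.0060324498,  0.9977163014],
])
AP1_TO_AP0 = np.array([
    [ 0.6954522414, 0.1406786965, 0.1638690622],
    [ 0.0447945634, 0.8596711185, 0.0955343182],
    [-0.0055258826, 0.0040252103, 1.0015006723],
])

def midtone_push(rgb, gamma=0.6):
    # A toy per-channel grading op: power function on each channel.
    return np.abs(rgb) ** gamma * np.sign(rgb)

src_ap0 = np.array([0.18, 0.35, 0.12])  # the same pixel, expressed in AP0

graded_in_ap0 = midtone_push(src_ap0)                            # op applied in AP0
graded_in_ap1 = AP1_TO_AP0 @ midtone_push(AP0_TO_AP1 @ src_ap0)  # op applied in AP1

# Both results are expressed in AP0, yet they differ: the same knob
# gives a different color depending on which primaries you work in.
print(graded_in_ap0)
print(graded_in_ap1)
print(graded_in_ap0 - graded_in_ap1)
```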