Cinema vs Blu-ray masters

To throw more fuel on the fire, there is ongoing discussion that the whole CIE 1931 color perception model is inadequate and doesn’t capture all aspects of color perception, as it was based on a pretty small observer sample and may not account for various metamerism effects. The standards are useful, but they still don’t provide all the answers.


In your ACES example, they do render differently? But then they will look the same when displayed correctly on the same display.

So OETF, EOTF and OOTF are very different things.
OETF is the relationship between the light captured by the sensor and the numerical values assigned to it in your scene-referred colour space.
EOTF is the relationship between the numerical values of the display-referred colour space and how they convert into light emitted by the display.
OOTF is the relationship between the light hitting the sensor and the light emitted by the display.

So the maths for EOTF and OOTF are different, as the EOTF also needs to take into account the OETF (and counter it effectively). That’s why I am saying the maths between a scene-referred and a display-referred colourspace are different.
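
To make the relationship concrete, here is a toy sketch (pure power laws only, ignoring the linear segments, offsets and absolute-luminance handling of the real standards):

```python
def oetf(scene_light, encoding_gamma=0.5):
    """Camera side: scene-referred light -> code value."""
    return scene_light ** encoding_gamma

def eotf(code_value, display_gamma=2.4):
    """Display side: code value -> emitted light."""
    return code_value ** display_gamma

def ootf(scene_light, encoding_gamma=0.5, display_gamma=2.4):
    """End-to-end: scene light -> displayed light = EOTF(OETF(scene))."""
    return eotf(oetf(scene_light, encoding_gamma), display_gamma)

# With a ~0.5 camera encode and a 2.4 display, the end-to-end exponent is ~1.2:
print(ootf(0.18))   # same as 0.18 ** (0.5 * 2.4) = 0.18 ** 1.2
```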

If I have time later I will try and find some white papers for you. I had another quick search and have not come across it.

As mentioned above, I think you are approaching it from a BBC, old-school mindset, which is different to the SMPTE & Dolby approach. HLG makes sense to some because it is backwards compatible, whilst PQ & BT.1886 are not; to others, you want the display to recreate the environment in which the colour grade was conducted, which HLG will not do (unless it is displayed on a display with the same characteristics as the grade monitor, in the same environment).

Colour science is complex and there are still different approaches, even with regard to calibration. In some ways there is no right or wrong way, but at least you can refer to a standard that everyone is working to. I am finding it hard to explain, and I do not think I am alone, as some of the white papers I have read seem contradictory.


100% agree

That’s why I want to find out what people do in practice between dark-surround cinema and dim-surround gamma 2.4 displays.

And the answer seems to be to change the signal.

Darkening the 2.6 signal, or vice versa brightening the 2.4 signal. As you said, that seems to be closer to what one would visually expect between the environments; “changing ODTs”, so to say.

This also matches what I heard from a big German DI house.

And this directly contradicts all the “broadcast” wisdom, which is what I find so interesting.

Remember that, when it comes to mastering, ACES output transforms are more about changing the image from a scene-referred to a display-referred colourspace. So in a way the values are changing to reflect what the specification defines, so that the monitor applies the correct EOTF. If that makes sense?


And interestingly enough, I have often heard the HLG approach to gamma referred to as a “Broadcast” approach, whilst PQ has been referred to as a “Theatrical” approach. Meaning that broadcast-based approaches take into account the vast variety of environments the material will be displayed in, whilst PQ & BT.1886 do not; they specify the environment.

Yeah, what I mean with the ACES example is exactly that: the signal is different but the light output is the same (that’s what I mean by “rendering the same”).

Meaning they both have to have the same OOTF, as the scene light is not touched.

The camera encoding gets converted to linear, and the Rec.709 ODT then applies a 1/1.961 (~0.5) encoding to the signal, which then gets displayed on a gamma 2.4 monitor, resulting in an OOTF of 1.2 (+ RRT tone-mapping stuff…).

For sRGB, instead of a 0.5 encoding they use a 0.55(?) gamma encode, which then results in the same 1.2 OOTF on an sRGB display.
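
Just to sanity-check that arithmetic (treating everything as pure power laws, ignoring the RRT/tone-mapping part, and assuming an effective gamma 2.2 for the sRGB display):

```python
# End-to-end (OOTF) exponent = encoding exponent * display exponent,
# under the pure power-law assumption.
rec709_odt_encode = 1 / 1.961      # the ~0.51 encode quoted above
print(rec709_odt_encode * 2.4)     # ~1.22 -> the ~1.2 OOTF on a gamma 2.4 monitor

srgb_encode = 0.545                # the "0.55" encode mentioned above
print(srgb_encode * 2.2)           # ~1.2 again, on an effective gamma 2.2 sRGB display
```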

I think from a system point of view, the following should always hold true, especially now that the whole pipeline can work in ~15 stops, or roughly the equivalent of human eyesight.

Start with a real-life linear black-to-white gradient on a white, sunlit wall. Film it, have it travel through the entire pipeline and display it in various environments, and a viewer should perceive it as if they were seeing a wall with a linear gradient in southern Spain.

Anything that breaks this test is an issue.
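
One narrow, measurable slice of that test (ignoring perception, surround adaptation and tone mapping) would be to push a linear ramp through a toy OETF/EOTF pair and look at the end-to-end response:

```python
# Toy check: a linear scene ramp through a ~0.5 camera encode and a 2.4 display decode.
ramp      = [i / 10 for i in range(11)]   # linear scene-referred ramp
encoded   = [v ** 0.5 for v in ramp]      # toy OETF
displayed = [c ** 2.4 for c in encoded]   # toy EOTF

for scene, out in zip(ramp, displayed):
    print(f"scene {scene:.1f} -> displayed {out:.3f}")
# The displayed values follow scene ** 1.2, i.e. not a strictly linear ramp;
# whether that still reads as "the same wall" depends on the viewing surround.
```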


Yes, I do totally understand what ACES is doing here, and that’s exactly where I am trying to understand, or question, the “why”.

What’s the reason for not introducing a surround compensation into the rendering transforms? (I actually think they are talking about that going forward.) Baselight has a bunch of crazy surround-comp stuff in it as well.

But if we don’t have surround compensation, we are expected to grade that in ourselves? Which begs the question of why not use the inherent gamma shifts between displays as a cheap surround compensation, or as a better starting point for an actual re-master?

And this is exactly what Rec.709 is saying: it only works if you are watching it in a bright surround environment :smiley: but then yes, it should show a linear ramp.

In a dim surround you should “boost” the gamma, giving you a non-linear response.
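
For what it’s worth, that idea as a tiny sketch (the figures are illustrative only, not taken from any standard):

```python
# Hypothetical surround-dependent end-to-end gamma applied to scene light.
# ~1.0 is often quoted for a surround similar to the scene, ~1.2 for a dim
# surround and ~1.5 for a dark cinema; treat these numbers as illustrative.
SURROUND_GAMMA = {"bright": 1.0, "dim": 1.2, "dark": 1.5}

def rendered_light(scene_light, surround="dim"):
    """Boost contrast (raise the end-to-end gamma) as the surround gets darker."""
    return scene_light ** SURROUND_GAMMA[surround]

print(rendered_light(0.18, "bright"), rendered_light(0.18, "dim"), rendered_light(0.18, "dark"))
```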

Because how will you know what the surround is going to be?!!

If there were a dynamic metadata system that took into account sensors built into the display to assess the environment, then it could potentially work, but that is not the case.

Some TVs DO have a light sensor in them and automatically adjust the picture but people tend to hate them. Plus, when someone walks through the room or moves, the picture changes.

So the way around this is standards. It is not the best system but is currently the least worst. PQ & HLG have their pros and cons but at least they are both a defined standard.

Well, but you DO!

Going from one reference environment into another, you do know the difference in surround!

Cinema has a known reference surround, and so does dim-surround viewing. So we know the difference, and we could adjust our image accordingly?

Rec.709 has a known reference surround, sRGB does too; every display standard usually comes with a reference environment, or else it would be pretty much useless.

Which is exactly what happens when you have a calibrated display set to a standard, right?!!

Exactly - but then again, why are we negating this effect by changing the gamma in our signal?

So only if you had a standard for different lighting environments, and displays could auto-detect which environment the current one is closest to, would that potentially work. What are your experiences with self-calibrating displays?!!

Because a projector’s EOTF is different to a display’s EOTF.

Maybe your question should rather be: why is the EOTF of a display different to that of a projector?


I think you’re looking at this too literally and in isolation. It is never about Rec709 or a specific gamma, but about the proper hand-off. Both sides of Rec709 need to be in sync for the whole system to work.

As an aside - the original version of Rec709 didn’t actually specify a display gamma; that wasn’t added until BT.1886, and only after displays were more standardized.

yesss

But if we adjust for the difference in EOTF, we reach the same OOTF between the two, which isn’t right, as the reference environments also change?
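
A quick numeric check of that claim (pure power laws, made-up code value):

```python
# Re-encode a gamma 2.4 signal so a gamma 2.6 projector emits the same light:
# the end-to-end OOTF is then identical, even though the reference surround
# of the two devices is different.
code_24 = 0.5
light_home   = code_24 ** 2.4            # light from the 2.4 grading display
code_26      = code_24 ** (2.4 / 2.6)    # re-encoded signal for the projector
light_cinema = code_26 ** 2.6            # light from the 2.6 projector
print(light_home, light_cinema)          # identical -> same OOTF
```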

Rec.709 gamma was always pretty much set to 0.5 on the camera side; the display wasn’t specified, but sRGB talks about 1/1.961 ~ 0.5, and people say CRT gamma was inherently something along the lines of 2.4.

You are forgetting about the physical change in the way the image is being created - the EOTF in a practical environment. Gamma 2.6 gives a better result in projection than gamma 2.4, and gamma 2.4 gives a better translation than 2.6 on a display.


Isn’t this my exact point? Why do we have a different EOTF in a projector? Because our eyes want more contrast in a dark vs a dim environment, yes?

So then, if we change the signal to take this whole effect out of the equation, I have to ask: why?