RED 16bit Float vs 12bit Integer

So, I kinda wonder what the idea behind the 16bit float vs 12bit integer selector is when decoding RAW.

Shouldn't it auto-select between float and integer based on the debayered EOTF?

Like… ACES/ACEScg/linear should be 16bit float, and everything that's gamma- or log-based should be 12bit integer.

If I set it to 16bit float and choose a log space, Flame caches the log data as 16bit EXRs, which isn't what I want, of course.

Just wondering if there actually is a secret thing that makes the debayer better when set to 16bit float? The sensor's A/D converter should be 16bit integer, and REDCODE internally is probably 12bit log like ARRIRAW (or maybe that's a trade secret… maybe they save as 16bit linear like Sony does; either way it's definitely not 16bit float).
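
A quick numpy sketch of what's at stake (toy 16-stop log curve of my own invention, nothing to do with REDCODE or Flame internals): quantize the same scene-linear ramp as 12bit integer linear, 16bit half float linear, and 12bit integer log, then compare relative error in the shadows.

```python
# Toy sketch (not RED/Flame internals): quantization of scene-linear data
# stored three different ways. The log curve is a made-up 16-stop log2
# mapping, purely for illustration.
import numpy as np

linear = np.logspace(-14, 2, 10000, base=2.0)        # ~16 stops of scene-linear

# 12bit integer, linear encoding (normalized to the 4.0 max of the ramp)
lin12 = np.round(np.clip(linear / 4.0, 0, 1) * 4095) / 4095 * 4.0

# 16bit half float, linear encoding
half16 = linear.astype(np.float16).astype(np.float64)

# 12bit integer, log encoding
def log_enc(x): return (np.log2(np.maximum(x, 2**-14)) + 14.0) / 16.0
def log_dec(y): return 2.0 ** (y * 16.0 - 14.0)
log12 = log_dec(np.round(log_enc(linear) * 4095) / 4095)

shadows = linear < 2**-10                             # the deepest few stops
for name, q in [("12bit int linear ", lin12),
                ("16bit half linear", half16),
                ("12bit int log    ", log12)]:
    err = np.abs(q - linear) / linear
    print(name, "max relative error in shadows: %.2f%%" % (100 * err[shadows].max()))
```

Half float keeps roughly constant relative precision (~11 bits of mantissa) over a huge range, which is why it pairs naturally with linear data; a log encode already spreads its 4096 codes evenly per stop, so 12bit integer is plenty, and for log data half float is actually a touch coarser than 12bit integer near the top of the range (steps of 1/2048 vs 1/4095).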

Those are some interesting points.

Here’s another one on a slight tangent…

Why do certain streamers want 16bit integer uncompressed Colour Timed Masters? They're monitored in 12bit at best, and there wouldn't be 16 bits of real information in there either, right?! Or am I missing something? I understand 16bit float EXR files for something both graded and exported in ACES, but for anything else?!
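
Just for scale, the code counts being thrown around here:

```python
# Code values per channel at each integer bit depth
for bits in (10, 12, 16):
    print(f"{bits}bit integer: {2**bits:>6} codes per channel")
# -> 1024, 4096, 65536
```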

Yes, or OETF depending on which direction you look at it from :smiley: Usually for this stuff the EOTF is just the inverse of the OETF, so the OOTF works out to identity.
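
To make that concrete, a toy power-law pair (just a stand-in, not any real camera or display curve): when the decode is the exact inverse of the encode, the round trip is a no-op, i.e. the OOTF is the identity.

```python
# Toy OETF/EOTF pair (plain power law, an assumption standing in for a
# real curve): when the EOTF is the exact inverse of the OETF, the
# end-to-end OOTF is the identity.
def oetf(x, g=2.4):   # scene-linear -> code value
    return x ** (1.0 / g)

def eotf(v, g=2.4):   # code value -> display-linear
    return v ** g

for x in (0.0, 0.18, 0.5, 1.0):
    print(x, eotf(oetf(x)))   # prints x back, up to float rounding
```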

Not sure I follow 100%, but I think there is at least some reason to want 16bit graded masters.

The camera has a 16bit integer (linear) A/D converter; that usually gets saved as 12bit log (even ARRIRAW is log internally), or as 16bit linear integer (X-OCN / Sony RAW).

Now we develop that raw and extract LogC or S-Log3 or ACES or whatever from it. How this is stored/calculated internally depends on the app; afaik Resolve just does everything as 32bit float, same with Nuke, so plenty of data. This all happens on the fly: source -> 32bit float -> operations -> export.
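
Roughly this, as a sketch (made-up 12bit log curve and a plain exposure op, assumptions only, not Resolve/Nuke/Flame internals): decode into 32bit float, do the work in float, and only quantize again on export.

```python
# Sketch of "source -> 32bit float -> operations -> export", using a
# made-up 12bit log curve and a simple exposure op (assumptions only).
import numpy as np

def log_dec(code12):   # 12bit log code -> scene-linear
    return 2.0 ** (code12 / 4095.0 * 16.0 - 14.0)

def log_enc(lin):      # scene-linear -> 12bit log code
    v = (np.log2(np.maximum(lin, 2**-14)) + 14.0) / 16.0
    return np.round(np.clip(v, 0.0, 1.0) * 4095.0)

source_codes = np.array([512, 2048, 3000], dtype=np.uint16)   # 12bit log source

linear = log_dec(source_codes.astype(np.float32))   # decode into 32bit float
graded = linear * 2.0                                # operations stay in float
export = log_enc(graded).astype(np.uint16)           # quantize only on export

print(linear.dtype, export)   # float32 in the middle, 12bit codes at the end
```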

I don't know how Flame does this, but I suppose it's not doing the same thing and is being limited by what each node can do; how exactly that works, I have no clue.

Now you take your LogC source or whatever, you grade and master it, do retouch, add CG, all in a 32bit float domain, and then you export Rec.709 or PQ after final tonemapping and "look" and whatnot.

Meanwhile you monitor everything in 10bit…

So yeah, good question as to why. I get it with HDR, since at 1000 nits you only use about 75% of the PQ signal, so you want 12bit instead of 10bit masters, but… 16bit… I really don't know why anyone needs this, probably just a case of "why not".
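
That 75% figure checks out against the PQ (SMPTE ST 2084) inverse EOTF, and it also shows how few codes a 10bit master actually spends below 1000 nits:

```python
# Where does 1000 nits land on the PQ signal, and how many integer codes
# does the 0-1000 nit range get at each bit depth? Constants from ST 2084.
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_inverse_eotf(nits):          # display luminance -> normalized signal
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

v = pq_inverse_eotf(1000.0)
print(f"1000 nits -> {v:.3f} of full signal")          # ~0.75
for bits in (10, 12, 16):
    print(f"{bits}bit: ~{round(v * (2**bits - 1))} codes cover 0-1000 nits")
```

So a 10bit PQ master spends only ~770 codes on everything up to 1000 nits, which is where the 12bit argument comes from; 16bit integer on top of that mostly adds precision beyond what any monitor in the chain resolves.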