I hate those things; stuff like that lingers around in feature/TV as well. I never even got an XML/AAF, always just EDLs with manually written-down count sheets: “timewarped to 219% at TC XX:XX:XX:XX” … I always go really long ways to break those routines, but I know that's hard and I know I'm not making too many friends with this.
“But we’ve always done it that way” - I hate this sentence.
As Chris said, for commercials a large majority starts right off the bat with graded plates.
But yes, every episodic TV show, documentary, or feature I’ve worked on has always been raw/flat, usually with some sort of LUT. But there’ve def been times where we’ve just had to wing it and make our own pseudo-LUT with a CC node for viewing.
What are the use cases for a 12- or 16-bit DPX, or any camera file of higher quality than a ProRes 4444? I find that the inherent noise in acquired images overtakes any potential banding with 10-bit images (and often 8-bit images).
While there are differences between a ProRes 4444 and a DPX, they aren’t visually apparent, so what’s the scenario where a 12-bit DPX is necessary?
In a world where comps are compared to their source files and an extremely low/zero round-trip error for untouched pixels is required… ProRes doesn’t cut it anymore, would be one example (I deal with this every day… there can’t be any generational image degradation, so it has to be kept lossless/uncompressed every step of the way). So if you write every precomp as ProRes 4444, the image will degrade with every new generation, which can hurt the image.
In terms of banding, 12-bit can be necessary if you have log footage, especially when that log stuff has low signal levels; very apparent in 8-bit Sony SLog2 footage. Also, Dolby Vision HDR is supposed to be 12-bit for the same reasons. But it’s a very fringe case where you need that much precision.
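To put rough numbers on the banding point - a toy count of how many integer code values a narrow shadow gradient gets (straight quantization of a normalized signal, not the real SLog2 curve):

```python
def levels(lo: float, hi: float, bits: int) -> int:
    """Distinct integer code values available between two normalized signal levels."""
    scale = 2**bits - 1
    return round(hi * scale) - round(lo * scale) + 1

# a gradient living between 5% and 10% of the code range (deep log shadows)
print(levels(0.05, 0.10, 8))   # 14 steps: pull a grade or key on that and it bands
print(levels(0.05, 0.10, 12))  # 206 steps: plenty of headroom
```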
But the most important thing would be compression artifacts; I can see them when doing stuff like keying, even on ProRes 4444 XQ.
Technically you can shove 16-bit linear (integer) sensor data losslessly into a 12-bit log container. Yes, the actual usable signal will probably be closer to 10, but you generally don’t want to cut off anything from the sensor during recording.
For display-referred SDR grading exports, I don’t see a point in going higher than 10-bit.
But yes, in 99% of cases you don’t need anything better than ProRes 4444 in practice; as you said, you can’t see it without really pulling on the image.
I hear that. But “back” to what? I’ve never had footage shot well enough to have 4444 compression be the dealbreaker (oh to live in that world! hahaha).
Everything we make is so disposable. I do ads, so that goes without saying, but even TV and film work is going to be forgotten in a matter of years.
One of the worst parts of digital technology is the idea that there’s a “perfect” simply because it matches the old one. It causes so many people to avoid thinking about what they are looking at. “Does it match?” is the only question that matters, and even when the answer is “yes” you can break out a difference matte and go “well, actually, at 10,000 gain I’m seeing a shift in the highlights” and cause a whole room of clients to shit their pants over something they literally cannot see.
I’d have way less issue with all this if I didn’t have to frequently talk clients off ledges over compression.
And separate from my rant here, thank you for replying in detail. I appreciate it.
The talks I had with big studio clients for feature and TV production about this… it’s insane.
“Yes, there is a max error of 0.001 because we work in ACES and that’s a 32→16-bit rounding error.”
But nope, it needs to show 0 in Nuke or else people get angry… There is so much stuff like that around it’s not even worth fighting; just pick the 12-bit option and be safe. Really, that’s about it.
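For what it’s worth, that 0.001-ish error is exactly the size half float predicts. A stdlib-only check, round-tripping working values through IEEE 754 half precision (what a 16-bit float EXR stores):

```python
import struct

def through_half(x: float) -> float:
    """Round-trip a value through IEEE 754 binary16 ('half'), like a 16-bit EXR."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# worst relative error for values in a typical 0.5-2.0 working range
worst = max(abs(through_half(x) - x) / x
            for x in (i / 10_000 for i in range(5_000, 20_001)))
print(worst < 0.001)  # True: the "error" the room is panicking over
```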
Nobody thinks about how the 5 Mbit web master will look… there is just too much…
You’ve got to love the idea that space aliens are going to come down, say “please hand us your culture,” and the studios are going to pull out a 32-bit 4k copy of some Ben Affleck movie none of us can even remember the title of.
“We’ve been saving this at its highest possible quality.”
Are you sure that Rec.709 EXRs are producing artifacts when storing the image data, @ChrisKasten?
I never heard about that before; just using a different file format shouldn’t introduce artifacts, right?
Isn’t that what the camera is doing anyway? Because you cannot record 16-bit linear signals fast enough to the storage medium. The log “compression” solves that problem nearly losslessly (visually, for humans) with the advantage of being fast to write and not so crazy on storage.
ARRIRAW is already stored at 12-bit log, so there should be no information loss, and I think Arri states that even on their website: you can go back and forth between linear and 12-bit LogC (with the right ISO, of course) without any information loss.
The grade-upfront workflow is a thing that will probably stay a while longer, yes. But I see more and more agencies being understanding about going the grade-at-the-end route, with LUTs during post.
Personally I think if you have heavy CG involved and a lot of matte paintings, it is the best thing to do. Linear wide-gamut or ACES files until the end, and a LUT from the color company for postings. Everybody is happy.
If you have fast turnarounds and most of your work is cleanup and screens or whatever it might be, I personally see no reason not to work on graded files. Scheduling the color sessions is easier, you don’t have to run iterations through the grade every time, you are getting final approval faster, the job gets final approval faster.
That is of course in a commercials world.
Yes, exactly, that’s what the Alexa is doing. I was a bit baffled reading that Sony X-OCN records in 16-bit linear… that’s crazy inefficient, but they might do some compression tricks that give more bits to the more important parts, so it ends up like log…
It’s float vs integer: if you save a 0-1 ramp in an EXR and in a 12-bit DPX, the 12-bit DPX will have more precision.
16-bit float precision goes down the higher the values are, so it’s kind of like log internally already. It isn’t too bad for 709, but I wouldn’t store log in it; it for sure isn’t directly comparable to 16-bit integer formats, so 16-bit DPX really isn’t 16-bit EXR. 16-bit float (“half float” / “half precision”) gives you 11 bits of precision per exponent step (10 stored mantissa bits plus the implicit one); 32-bit float gives 24.
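You can check the 0-1 ramp claim with nothing but the stdlib (round-to-nearest quantizers only; this ignores DPX/EXR file plumbing and just compares the number formats):

```python
import struct

def through_half(x: float) -> float:
    """Quantize to IEEE 754 binary16 and back - what a 16-bit float EXR keeps."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

def through_12bit(x: float) -> float:
    """Quantize to a 12-bit integer code (0..4095) and back - like a 12-bit DPX."""
    return round(x * 4095) / 4095

# worst absolute quantization error over a 0-1 ramp
N = 50_000
worst_half = max(abs(i / N - through_half(i / N)) for i in range(N + 1))
worst_12 = max(abs(i / N - through_12bit(i / N)) for i in range(N + 1))
print(worst_half > worst_12)  # True: near 1.0 half steps are 1/2048, DPX steps 1/4095
```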
I thought so too for the longest time, but it’s not a colorspace; it’s just the way they store the 16-bit linear data in a log (not LogC!) container.
The A/D converter puts out 16-bit linear values in an integer format, and those values are then stored/compressed/processed in some way. That’s not visible image data, just raw sensor data, as you said. Can be really confusing.
X-OCN stores 16-bit linear; DNG is, I think, 12- or 14-bit linear (don’t quote me on that…).
REDCODE - no idea, all secret sauce.
BRAW - some hybrid voodoo, it’s not even really raw…
The ARRIRAW stuff is completely open as an SMPTE standard if you look through the ARRIRAW page.
For the user this is transparent, though; it’s just a technicality.
There is a tool called RawDigger that lets you basically peek inside a raw file - really interesting stuff, but yeah… it’s more science than anything.
It all boils down to this:
If you think 16-bit float EXR has more precision than 12-bit integer DPX, you are going to have a bad time.
And we really need an integer log/709-optimized lossless compressed codec… sigh. (The only one I know of is the ARRIRAW compression…)
Hmmmm… My understanding of how raw formats work is that they record the raw electrical impulses captured from the sensor. This is why things like ISO, color space, and color temp can be changed after the fact when it’s processed. But I’m no expert; it’s not easy to find a simple explanation. This image from that ARRI link I posted above suggests what I’m saying…
@greg as @finnjaeger already said, the sensor data is analog at the very first stage (the photosites of the sensor are analog); that signal is digitally converted and stored in log. The good thing with log encoding is that it works similarly to our brightness perception, so you don’t lose visual data, at least not for us humans. Basically a lot of detail in the shadows and mids and not much in the highlights.
LogC is the log variant Arri developed, and every camera manufacturer has its own log curve: SLog for Sony, REDlog for RED, etc. That curve takes the sensor sensitivity and the digital conversion of the analog signal into account and is tweaked for the best performance or dynamic range. Arri’s LogC is different for different ISO levels, for example.
Linear is great but very inefficient in terms of bits per luminance value.
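Here’s that bit-allocation point in numbers - a toy pure-log curve over 16 stops (not Arri’s actual LogC or any real camera curve, just the shape of the idea) versus storing linear light straight into 12-bit codes:

```python
import math

def codes_in_stop(encode, lo: float, hi: float, bits: int = 12) -> int:
    """Integer code values landing inside one stop of normalized scene light."""
    scale = 2**bits - 1
    return round(encode(hi) * scale) - round(encode(lo) * scale)

def linear(x: float) -> float:
    return x

def log16(x: float) -> float:
    # toy log curve spanning 16 stops; NOT LogC/SLog, purely illustrative
    return (math.log2(x) + 16) / 16 if x > 0 else 0.0

# bottom stop (2**-16 .. 2**-15) vs top stop (0.5 .. 1.0)
print(codes_in_stop(linear, 2**-16, 2**-15), codes_in_stop(linear, 0.5, 1.0))  # 0 2047
print(codes_in_stop(log16, 2**-16, 2**-15), codes_in_stop(log16, 0.5, 1.0))   # 256 256
```

Log spreads the codes evenly per stop; 12-bit linear hands half of all its codes to the top stop and leaves the bottom stop literally zero of its own.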
Yeah, that’s all correct, but see this, it explains it further:
SMPTE
RDD 30:2014: ARRIRAW Image File Structure and Interpretation Supporting Deferred Demosaicing to a Logarithmic Encoding
RDD 31:2014: Deferred Demosaicing of an ARRIRAW Image File to a Wide-Gamut Logarithmic Encoding.
If you have access: it’s not image data but the raw sensor data that is stored logarithmically. Think of a raw file like a black-and-white image.