Hi
Has anyone seen this before? ARRI producing negative blacks in the camera rushes. Then the view transforms clamping those negative blacks.
I don’t remember seeing this behaviour before. Let me know if I’m going nuts.
Cheers
Rufus
I’ve never seen negative values out of the camera. Are you sure your tag is correct? Maybe Alexa v3 vs v4?
But also when you go to rec709 that is going to clamp the top and bottom. If you need to work in SDR video but want to maintain that data you can stay in ACES SDR until your final output and do a viewing lut or rec709 conversion there.
I would not expect to see any negative values in Arri Log footage.
I do, however, get negative values when I Input Transform ArriLog > ACEScg
Not a major problem. ArriLog is HDR; ACEScg is also HDR. A round trip here brings everything back to normal (so long as no one has been clamping).
The problem is your View Transform. As you mentioned, the View Transform is not perfect, and by that I mean it is designed to take an HDR working colour space and transform it into a display format. In this case we are using Rec.709/sRGB, which is SDR.
I am often abusing the View Transform to get my graded footage into linear for certain compositing operations (invert it into linear and then back again to SDR). I haven’t used it to round trip an HDR bit of footage but I am not surprised it lost data.
You could try a different View Transform. Maybe the Alexa Rendering one. This transform is different from the ACES transform but might prevent the negative clipping
I am so happy that I understood what you explained here @PlaceYourBetts.
Hi Bryan @bryanb
What exactly is ACES SDR? I’ve seen it mentioned before, but I’ve only worked in ACES CG and ACES CC.
Cheers
Rufus
Mate, that’s brilliant! That actually produces a much nicer DELOG than the ACES view transform I was using: less contrast, softer image, just what I was looking for. Cheers!
ACES Rendering is the older K1S1 LUT from ARRI. You can also make your own view transforms with the Reveal LUTs, or try the Flame setup from Baselight (TCAM).
Negative values in the blacks are in fact normal: they need to pin black to some value off the sensor, so they have to decide where the noise floor is, and stray noise pixels below that just get cut off.
And then, I’m not 1000% sure, but Alexa ProRes is video range. Maybe those are actually noise excursions that go below legal/video range? You can check by importing the clips as full range and clamping to video range afterwards to see if they get cut off.
Other than that, maybe it’s just out of your target’s gamut: a very green pixel in the blacks could be outside ACEScg, creating negative pixels as well.
You can check this by converting to a linear/native-gamut space only, so instead of ACEScg use linear/AlexaWideGamut; if the negatives are gone, then it was that.
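If anyone wants to sanity-check the gamut theory, here’s a quick sketch. The matrix values are copied from the published ARRI IDT and ACES specs to the best of my knowledge, so verify against your own pipeline; the point is just that a fully saturated green AWG pixel already lands outside AP1:

```python
import numpy as np

# AWG -> ACES AP0 (from the ARRI IDT) and AP0 -> AP1/ACEScg (from the ACES
# spec). Constants copied to the best of my knowledge -- double-check them
# against your pipeline before relying on this.
AWG_TO_AP0 = np.array([
    [0.680206,  0.236137,  0.083658],
    [0.085415,  1.017471, -0.102886],
    [0.002057, -0.062563,  1.060506],
])
AP0_TO_AP1 = np.array([
    [ 1.4514393161, -0.2365107469, -0.2149285693],
    [-0.0765537734,  1.1762296998, -0.0996759264],
    [ 0.0083161484, -0.0060324498,  0.9977163014],
])

# A fully saturated green pixel in linear Alexa Wide Gamut:
awg_green = np.array([0.0, 1.0, 0.0])
acescg = AP0_TO_AP1 @ AWG_TO_AP0 @ awg_green
print(acescg)  # blue channel comes out negative (~ -0.067)
```

So a legal AWG colour can become a negative ACEScg value with no clipping or noise involved at all.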
similar discussions here.
And that’s the cool thing about Syncolor: you can build your own colour management setup mixing all the built-in transforms and ACES IDTs. You don’t have to run a full ACES workflow; many people actually decide they dislike the ACES output transforms and use something else, which is totally valid. You can also use any working space; I personally like linear/AlexaWideGamut as well, if that’s the source I was given.
Are you working from an ARRIRAW or X-OCN source? If the debayer settings are not correct you can get all sorts of weird values.
The ARRI gamut is slightly wider than ACES AP1 too, so I have heard of certain software creating NaN values when dealing with it while compositing in an ACEScg working space. You can Google it.
Negative values become particularly problematic when doing any multiply/division operations as each subsequent operation will throw the initial negative value further and further out. So good to deal with it before comping rather than at the end.
I haven’t watched the actual posted video as a caveat to everything I have said.
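To illustrate the amplification point with made-up numbers: a barely-negative shadow value becomes a visibly wrong one after an ordinary gain and an unpremultiply-style divide:

```python
# Toy illustration (hypothetical numbers): a small negative linear pixel
# gets pushed further out by everyday comp operations.
v = -0.005                 # slightly-negative shadow value
graded = v * 4.0           # a 2-stop gain  -> -0.02
unpremult = graded / 0.05  # divide by a small alpha -> -0.4
print(unpremult)           # now a visibly wrong value
```

Which is why it’s better to deal with the negatives before comping than after.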
Sorry I worded that a bit weird.
You would be working in ACEScg or ACEScc. Then with a viewing LUT to ACES-SDR you would get a ‘video’ viewport to see what it will look like but you still have data that isn’t clipped.
If I am not wrong, my understanding is that ArriWideGamut can encode negative colours. That does not necessarily mean the camera is able to capture them or save them in the file, but I do wonder if the underlying issue comes from the camera configuration itself. I have never seen that either, so it is very, very weird to me as well.
I am not very familiar, yet, with the color management in Flame, but is this rule being applied to ACEScc material instead of ACEScg, or even ACES 2065-1?
If so, this may explain some strange behaviour given ACEScg is linear gamma but ACEScc is certainly NOT.
Furthermore, how do you deal with AP0 (ACES 2065-1) ?
I must confess the name of the Rule is not very helpful in this case.
From my understanding, this is not out of the camera itself. Arri Wide Gamut is actually a larger gamut than ACES AP1, so transferring into AP1 can create values outside of it. I’m not exactly sure how that would equate to negative values as such, but I am assuming something in the colour transform from AWG/LogC into ACEScg might be the culprit. I wouldn’t think this would happen debayering from ArriRAW directly into ACES (i.e. not via LogC): since the sensor values would translate directly into AP0 RGB values (a bigger gamut than AWG), this should not happen.
This is actually on my very long list of things to test so I am adding the caveat that this is my understanding of the issue but I am very open to being wrong.
I suspect the same; the transformation function may be the issue.
It would be good to see what happens if you go from LogC > ACES AP0 > ACEScg and compare against the LogC > ACEScg.
If @RufusBlackwell was so kind as to share the original camera master file I could do a test in Nuke which I know much better than Flame.
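For what it’s worth, the two paths should be mathematically identical: the LogC decode is the same in both, and the two gamut matrices simply compose into one. A quick numpy sketch (matrix values copied from the ARRI IDT / ACES specs, to the best of my knowledge):

```python
import numpy as np

# AWG -> AP0 (ARRI IDT) and AP0 -> AP1/ACEScg (ACES spec); verify the
# constants against your own pipeline.
AWG_TO_AP0 = np.array([
    [0.680206,  0.236137,  0.083658],
    [0.085415,  1.017471, -0.102886],
    [0.002057, -0.062563,  1.060506],
])
AP0_TO_AP1 = np.array([
    [ 1.4514393161, -0.2365107469, -0.2149285693],
    [-0.0765537734,  1.1762296998, -0.0996759264],
    [ 0.0083161484, -0.0060324498,  0.9977163014],
])

rng = np.random.default_rng(7)
# Random linear AWG pixels, including some negative shadow values:
pixels = rng.uniform(-0.1, 8.0, size=(1000, 3))

two_step = pixels @ AWG_TO_AP0.T @ AP0_TO_AP1.T   # AWG -> AP0 -> ACEScg
one_step = pixels @ (AP0_TO_AP1 @ AWG_TO_AP0).T   # AWG -> ACEScg direct
print(np.allclose(two_step, one_step))  # True
```

So if the app shows a difference between the two paths, it’s coming from clamping or quantisation between the steps, not from the maths.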
Ya sure, I’ll upload it mañana.
PM’d
Can I get the link as well? I thought I understood the problem but my initial tests showed me I was mistaken.
So on the bright(?) side: it’s not the view transform that’s causing the clipping per se. It happens if you convert the image to scene linear, clamp the values below zero, and then convert the image back.[1]
It’s in the source.
You would get a gold star for being able to see the difference between the clamped image and the source without using a scope or diff matte. As Finn said above, it’s just, and I quote, “weird noise pixels” that are being truncated. As I toggle back and forth, the scopes jump up and down, but the image does not change. If I really zoom in I can see pixels changing in the shadows. It’s not nothing, but I’d ship either clip without concern.
But yeah, watching the scopes jump around (specifically when set to Log; they don’t move nearly as frightfully when set to video) is something else.
[1] The transform specifically was: “input: LogC (v3-EI800)/AlexaWideGamut to scene-linear Alexa WideGamut” and its inverse. I’m sure everyone this deep in this thread knows this, but color science is a minefield, so I’m annotating the map with this footnote.
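For anyone who wants to reproduce that round trip outside of Flame, here’s a minimal numpy sketch of LogC3 (EI 800) using ARRI’s published constants (copied to the best of my knowledge, so check them against the white paper). Below the encoding cut the curve is linear, so negative scene-linear values round-trip exactly unless someone clamps them:

```python
import numpy as np

# LogC3 (EI 800) parameters per ARRI's published formula -- verify against
# the ARRI white paper before trusting them.
cut, a, b = 0.010591, 5.555556, 0.052272
c, d, e, f = 0.247190, 0.385537, 5.367655, 0.092809

def logc_encode(x):
    """Scene-linear -> LogC code value."""
    x = np.asarray(x, dtype=float)
    return np.where(x > cut, c * np.log10(a * x + b) + d, e * x + f)

def logc_decode(t):
    """LogC code value -> scene-linear."""
    t = np.asarray(t, dtype=float)
    return np.where(t > e * cut + f, (10 ** ((t - d) / c) - b) / a, (t - f) / e)

code = np.array([0.05, 0.2, 0.6])            # 0.05 is a "below black" code value
lin = logc_decode(code)                      # first value decodes negative
round_trip = logc_encode(lin)                # unclamped: comes back exactly
clamped = logc_encode(np.maximum(lin, 0.0))  # clamping shifts the dark value
print(lin[0] < 0, np.allclose(round_trip, code), clamped[0])
```

The clamped dark code value comes back as encode(0) instead of its original value, which is exactly the black-level shift the scopes are showing.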
That’s what Flame’s Input and View Transforms do under the hood. They all convert to AP0, then to the destination colorspace. If you look at the slightly-darker gray code-looking box on the right side of the Color Management UI, you can see all the steps each transform goes through.
You can really see it in this shot. Loads of negative data:
That’s the native ARRI LogC with the default viewing rule on. So if you’re using a view transform to convert to REC709, then the negative clamping is actually cleaning up the clip by removing half the noise in the deep shadows. Although I guess that noise would be invisible unless you raised the blacks.
This is not a project where I’m feeding shots back as they were sent, i.e. with all the original image integrity intact. I’m doing the full VFX and grade on this, and I like working with perfect noiseless images, so everything goes through Neat and then gets round-tripped through Topaz to get very clean source material.
Here are the scopes pre and post denoising
Pre:
Post:
You can see that the denoising process removes most of the negative data.
Interesting stuff.