Grain management

I’m happy to add suggestions like this to the next update of the setup.

To double-check: do you want grade nodes for three luminance zones (low, mid, high) rather than the color channels, or both? In theory, because the grain is generated from the original and normalized, it should already be pretty close, but it never hurts to have an extra dial ready to go.

Jan

We do most of our work in ACEScg; does the setup work for that colorspace as well? I guess in a perfect world there would be a toggle somewhere to identify which colorspace the user would like to perform the operations in. This is awesome, thanks for all the work.

Right, in a perfect world there would be a toggle. To the extent possible I’ve tried to let project settings drive the details (such as resolution). There are notes in the setup about any manual adjustments that may be needed.

Re: ACEScg. I can test it this weekend. The existing color space tags were more for matching than specific color. More to come.

Thanks @allklier
I had a play with your setup. Sometimes it baffles me how people come up with this stuff.

The use of Spotlight? What is it doing, and how did you come to choose it? I sometimes use it as an alternative to an Add, but I get there by fluke; it just happens to look the nicest.

Then your technique for splattering the grain using crok_voronoi :exploding_head:

Also, can you talk me through the normalising of the grain using a divide? I have something like this in that IBK setup; it helps me boost my colour difference matte.

Impressive. Thanks for sharing.

It should be used in linear to keep all the math correct.

Primaries are another question. Most shots will do fine in ACES AP1 (aka ACEScg), but in some rare cases AP1 primaries can break things, so it is better to go from AP1 to AP0 (or the native camera gamut) for all inputs. The problem arises if you have something that goes outside the ACEScg gamut (like very saturated neon lights), which leads to negative values prior to the re-graining process.
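To see why negative values break the divide-based normalization, here is a toy numeric sketch (made-up numbers, not part of the setup itself):

```python
# Toy numbers only: a channel that went negative after an
# out-of-gamut color (e.g. a saturated neon) was converted
# to ACEScg (AP1) primaries.
denoised = -0.01            # denoised value is slightly negative
noisy = denoised + 0.02     # noise pushes the original value positive

# The normalization expects noisy/denoised to sit around 1.0 (x/x = 1),
# but with a negative denominator it comes out near -1.0: the grain
# flips sign, and any comp multiplied by it inherits inverted grain.
norm = noisy / denoised
```

Going to AP0 or the native camera gamut before this step keeps the values positive, so the divide behaves as intended.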

That credit needs to be shared with @Val - he described the process and math (especially the normalization) in a different thread a few weeks ago. I built on top of that great foundation.

It took a bit of experimenting to see which blend mode created the best result. Some of it is knowing the math behind them, but in the end the image has to look right.

Well, thank you both,
@val and @allklier

Very informative.

OK, it’s a pretty simple idea. We have a picture shot with a real camera. The noise level of every pixel depends on that pixel’s luminance, and a denoiser takes this into account when doing its job; remember the noise intensity curves in the Neat Video UI? So when we separate the noise from the picture, we get different noise levels for each pixel depending on where that pixel sits. By dividing this result by the denoised source, we get the noise normalized to an intensity of one (x/x = 1). Multiplying this by our comp downstream puts the original noise back where it belongs and adapts the newly generated noise patches to the comped parts of the picture. Doing all of this in some kind of gamma-corrected colorspace like Rec.709 breaks the math, though on some shots that can be neglected.

Hope this helps

In Fusion I use the STMapper node with these UV maps to get the Voronoi scatter. The Flame equivalent would be a UV map texture in Action. Applying it to your tiled normalized noise sample with a power of 0.5 (could be 50 if the max power is 100) will give you a good scatter, but the noise will be twice the size of the original. Scale it down to 50% with tiled borders using a 2D Transform node right after Action and you should get a perfect size match. Remember to turn off any filtering in both of these operations.
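For anyone unfamiliar with STmap-style lookups, here is a rough sketch of the mechanism. The function names are hypothetical, and "power 0.5" is modeled here as a 50% blend between an identity UV map and the Voronoi UV map, which is one reading of that control:

```python
import numpy as np

def stmap(src, uv):
    """Nearest-neighbour UV lookup (filtering off, as advised above).
    uv is HxWx2 with x in channel 0 and y in channel 1, both in [0, 1]."""
    h, w = src.shape[:2]
    xs = np.clip(np.round(uv[..., 0] * (w - 1)).astype(int), 0, w - 1)
    ys = np.clip(np.round(uv[..., 1] * (h - 1)).astype(int), 0, h - 1)
    return src[ys, xs]

def blend_uv(identity_uv, voronoi_uv, power=0.5):
    """Lerp toward the scatter map; power=0.5 halves every displacement."""
    return identity_uv * (1.0 - power) + voronoi_uv * power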

hope this helps

In this setup I use that basic principle: Action w/ a UVMap to combine the Voronoi scatter. I’ll take a look at the details in the settings to see if this can be refined per Val’s notes.

But I certainly agree that if ADSK turned this into a node, that would be much preferable. 2024.1???

It does seem that is exactly what BorisFX did with their new Regrain node in Silhouette. In fact, in some tutorials it’s even labeled as ‘Das Grain’, and the controls look very familiar.

After a bunch of testing of Boris FX’s new Silhouette grain tool, I can say it works really, really well. Turns out there’s a reason: it is in fact based on Das Grain by Fabian Holzer under the hood.

I’m glad that someone out there is listening to us. Thank you @FriendsFromBorisFX !

I tried the latest Silhouette on Linux. When I go into the OFX, all I get is black. Gave up.