Grain management

Thanks, will update that accordingly. Still very much learning all the nuances of Flame.

Also, you should lower the Softness settings in the Difference Matte node to 0.01; then you don’t need the Colour Correct after it.

And remove all those “null” nodes, or replace them with Mux nodes or elbows; that is their purpose. Using actual functional nodes as pass-throughs is confusing.

I cleaned up the batch per the suggestions and replaced the link further up.

I had a little play with this setup in Rec709. I threw in a Histo and a Blur node after your UVMap scatter to match the size/softness of the grain in this one shot of mine. For my particular case it works really well, but the grain wasn’t heavy enough and was too sharp, so after adding those two nodes before the Multiply it worked a treat! I love being able to use a tiled, non-specific but accurate-to-the-shot grain now. Thank you! I don’t understand the normalized grain node, but I don’t have to understand it to use it!

I built this setup in LogC because of my test footage. If you’re using Rec709, there is one thing that may help: the Crok_Voronoy nodes have a colorspace tag that I set to LogC to match the source footage, and that had an impact on the levels of the Voronoy pattern. If you change that colorspace tag to match your footage (i.e. Rec709), the result may be different/better. I haven’t done that test yet.

I am going to have a good dig around in this setup today :crossed_fingers:
So much good geekery in this :hugs:

That setup is great. I had a similar setup using a baked STMap, but I’m definitely using the Voronoi node from now on. One thing I usually do (if you guys are taking suggestions) is to split the noise RGB using the Separate node, then pipe each channel into a Grade node, including the front and matte (inverted), so you have more control over the gain for each colour channel, since highlights usually have less noise. It’s a bit time-consuming, but you can get very accurate noise profiles.
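The per-channel idea above can be sketched roughly like this (a NumPy sketch, not the actual Flame nodes; the function name and gain parameters are hypothetical, and the matte-driven blend is my reading of "pipe each channel into a grade with an inverted matte"):

```python
import numpy as np

def per_channel_grain_gain(grain, luma_matte, shadow_gain, highlight_gain):
    """Scale each grain channel separately, driven by an inverted luma matte.

    grain:          HxWx3 grain plate
    luma_matte:     HxW, 0.0 in shadows, 1.0 in highlights
    shadow_gain:    length-3 per-channel gains used where the inverted matte is full
    highlight_gain: length-3 gains for the highlights (usually lower,
                    since highlights carry less noise)
    """
    inv = (1.0 - luma_matte)[..., None]  # inverted matte, broadcast to HxWx1
    gain = np.asarray(shadow_gain) * inv + np.asarray(highlight_gain) * (1.0 - inv)
    return grain * gain
```

The inverted matte just crossfades between the two gain triplets, so shadows keep full grain while highlights get attenuated, per channel.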
Thanks for sharing!

I’m happy to add suggestions like this to the next update of the setup.

To double-check: do you want grade nodes for three zones (low, mid, high) rather than for the color channels, or both? In theory, because the grain is generated from the original and normalized, it should already be pretty close, but it never hurts to have an extra dial ready to go.

Jan

We do most of our work in ACEScg; does the setup work for that colorspace as well? I guess in a perfect world there would be a toggle somewhere to identify which colorspace the user would like to perform the operations in. This is awesome, thanks for all the work!

Right, in a perfect world there would be a toggle. To the extent possible I’ve tried to let project settings drive details (such as resolution). There are notes in the setup about any manual adjustments that may be needed.

Re: ACEScg, I can test it this weekend. The existing colorspace tags were more for matching than for a specific color. More to come.

Thanks @allklier
I had a play with your setup. Sometimes it baffles me how people come up with this stuff.

The use of Spotlight: what is it doing, and how did you come to choose it? I sometimes use it as an alternative to an Add, but I get there by fluke; it happens to look the nicest.

Then there’s your technique for splattering the grain using crok_voronoi :exploding_head:

Also, can you talk to me about normalising the grain using a Divide? I have something like this in that IBK setup; it helps me boost my colour difference matte.

Impressive. Thanks for sharing.

It should be used in linear to keep all the math correct.

Primaries are another question. Most shots will do ACES AP1 (aka ACEScg) just fine, but in some rare cases AP1 primaries can break things, so it is better to go from AP1 to AP0 (or the native camera gamut) for all inputs. The problem arises if you have something that goes outside the ACEScg gamut (like very saturated neon lights), which will lead to negative values prior to the regraining process.
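To make the out-of-gamut point concrete, here is a small sketch. The AP0-to-AP1 matrix below is taken from the ACES specification; the "neon" color is just an illustrative stimulus sitting on the AP0 red primary, outside the AP1 gamut:

```python
import numpy as np

# AP0 -> AP1 (ACEScg) conversion matrix, per the ACES specification
AP0_TO_AP1 = np.array([
    [ 1.4514393161, -0.2365107469, -0.2149285693],
    [-0.0765537734,  1.1762296998, -0.0996759264],
    [ 0.0083161484, -0.0060324498,  0.9977163014],
])

# A very saturated light (think neon tube) lying on the AP0 red primary
neon = np.array([1.0, 0.0, 0.0])

acescg = AP0_TO_AP1 @ neon
print(acescg)  # the green component goes negative
```

A negative channel like that is exactly what upsets the divide/multiply regrain math, which is why going back to AP0 (or the camera-native gamut) for the inputs is the safer route.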

That credit needs to be shared with @Val - he described the process and math (especially the normalization) in a different thread a few weeks ago. I built on top of that great foundation.

It took a bit of experimenting to find which blend mode created the best result. Some of it is knowing the math behind them, but in the end the image has to look right.

Well, thank you both, @val and @allklier.

Very informative.

OK, it is a pretty simple idea. We have a picture shot with a real camera. The noise level of every pixel depends on that pixel’s luminance; our denoiser takes this into account when doing its job (remember those noise-intensity curves in the Neat Video UI?). When we separate the noise from the picture, we get different noise levels for each pixel depending on where that pixel sits. By dividing this result by the denoised source, we get noise levels as if extracted from an intensity of one (x/x = 1). Multiplying this by our comp downstream returns the original noise where it should be, and adapts our newly generated noise patches to the comped parts of the picture. Doing all of this in a gamma-corrected colorspace like Rec709 breaks the math, but on some shots that can be neglected.
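The divide/multiply pipeline described above boils down to two one-liners. A NumPy sketch (function names are my own; assumes linear-light plates, as noted earlier in the thread):

```python
import numpy as np

def normalize_grain(noisy_plate, denoised_plate, eps=1e-6):
    # Dividing by the denoised source pivots the grain around 1.0 (x/x = 1),
    # removing its dependence on the original pixel luminance.
    return noisy_plate / np.maximum(denoised_plate, eps)

def regrain(comp, normalized_grain):
    # Multiplying scales the grain back by each comped pixel's own level,
    # so new elements receive luminance-appropriate noise.
    return comp * normalized_grain
```

A quick sanity check of the round trip: regraining the untouched denoised plate gives you back the original noisy plate.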

Hope this helps

In Fusion I use the STMapper node with these UV maps to get the Voronoi scatter; the Flame equivalent would be a UV Map texture in Action. Applying it to your tiled normalized noise sample with a power of 0.5 (could be 50 with a max power of 100) will give you a good scatter, but the noise will be twice the size of the original. Scale it down to 50% with tiled borders using a 2D Transform node right after the Action and you should get a perfect size match. Remember to turn off any filtering in both of these operations.
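For anyone curious what the STMap lookup itself does, here is a minimal sketch. The nearest-neighbor gather corresponds to having filtering turned off; interpreting "power 0.5" as blending the scatter UVs halfway back toward the identity map is my own assumption, not a documented behavior of either node:

```python
import numpy as np

def stmap_remap(src, uv):
    # uv: HxWx2 in [0, 1]; nearest-neighbor lookup, i.e. filtering off
    h, w = src.shape[:2]
    xs = np.clip(np.rint(uv[..., 0] * (w - 1)).astype(int), 0, w - 1)
    ys = np.clip(np.rint(uv[..., 1] * (h - 1)).astype(int), 0, h - 1)
    return src[ys, xs]

def identity_uv(h, w):
    # The "do nothing" UV map: each pixel samples itself
    ys, xs = np.mgrid[0:h, 0:w]
    return np.dstack([xs / (w - 1), ys / (h - 1)])

def scatter_at_power(src, scatter_uv, power=0.5):
    # Assumed meaning of "power 0.5": blend the scatter UVs halfway
    # back toward the identity map before the lookup
    h, w = src.shape[:2]
    uv = (1.0 - power) * identity_uv(h, w) + power * scatter_uv
    return stmap_remap(src, uv)
```

With the identity map the image passes through untouched, which makes the filtering-off requirement easy to verify: any interpolation would smear the grain.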

hope this helps

In this setup I use that basic principle: an Action with a UVMap to combine the Voronoi scatter. I’ll take a look at the details in the settings to see if this can be refined per Val’s notes.

But certainly agree that if ADSK would turn this into a node, this would be much preferable. 2024.1???

It does seem that is exactly what BorisFX did with their new Regrain node in Si. In fact, in some tutorials it’s even labeled as ‘Das Grain’, and the controls look very familiar.

After a bunch of testing, Boris FX’s new Silhouette Grain tool works really, really well. Turns out there’s a reason: it is in fact based on Das Grain from Fabian Holtz under the hood.

I’m glad that someone out there is listening to us. Thank you @FriendsFromBorisFX !

I tried the latest Silhouette on Linux. When I go into the OFX, all I get is black. I gave up.