Grain management

Hi Peeps,
I am working on some film shots and I need to manage grain.
The Nuke folks use two approaches.
All layers are denoised through Neat Video first.
Then they use something called DasGrain. They also use a Merge node with the "from" operation to extract the grain from the denoised clip and the original clip, then combine that result with the final denoised comp using another Merge node with the "over" operation, and they get the original grain back. I am trying to mimic this using Difference, crok_difference, and Blend & Comp with the subtract and difference operations, but when I combine the result with my final comp I'm not getting the original grain back. Is there any way to mimic the Nuke workflow? Thanks in advance.


We don't have DasGrain; we want DasGrain; and we seemingly have no means of making it ourselves, though I believe people have tried / are trying. So, no DasGrain. To mimic the non-DasGrain approach, sometimes referred to as "grain theft":

1: Neat video as front input in comp node
2: Original plate as back in comp node
3: Set this comp node to ‘subtract’
4: To put grain back, plug the result of the subtract comp node into another comp node further down the chain
5: Plug your comp you’ve been doing on the neat video into the back
6: Set this comp node to 'add' - you should be back to the original grain now

*Input order is important on the "subtract" comp node: Neat Video in front, original plate in back. Input order doesn't matter on the "add" comp node. Also be aware that you may need to do some fixes after the subtract node on the grain pass you're going to add (I generally use 2D transform / source-front methods) to avoid ghosting, or to get the correct quality of grain for the luminance (this is why we want DasGrain).
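For anyone who wants to sanity-check the math behind those steps, here's a minimal NumPy sketch (not Flame; the arrays and values are made up) of what the subtract and add comp nodes are doing:

```python
import numpy as np

# Hypothetical tiny "plates" standing in for the clips.
rng = np.random.default_rng(0)
denoised = rng.random((4, 4)).astype(np.float32)              # Neat Video output
grain = ((rng.random((4, 4)) - 0.5) * 0.1).astype(np.float32)
plate = denoised + grain                                      # original grainy plate

# Step 3: the 'subtract' comp (back minus front) isolates the grain.
extracted = plate - denoised

# Step 6: the 'add' comp puts the grain back over the finished comp.
# If the comp leaves the denoised pixels untouched, you get the plate back.
regrained = denoised + extracted
```

The point is that subtract-then-add is exactly invertible, so wherever your comp doesn't disturb the denoised pixels, the original grain comes back (modulo float rounding).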


Thanks @BrittCiampa
Yeah, I used to wonder how the Nuke people do it and why Flame's regrain just doesn't cut it.
Now it's working.

I noticed that if I do a blue-screen comp with two different grain patterns for the FG and BG, there's a bit of a mismatch. My workaround: I build two sets of comps, one with the denoised FG and BG and one with the original FG and BG. I then plug these into the comp nodes.
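If it helps, that two-sets-of-comps workaround can be expressed as a small NumPy sketch (hypothetical arrays, with a simple matte blend standing in for the comp node):

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (4, 4)
fg_orig, bg_orig = rng.random(shape), rng.random(shape)  # grainy sources
fg_den, bg_den = rng.random(shape), rng.random(shape)    # denoised sources
matte = rng.random(shape)                                # blue-screen matte, 0..1

def comp(front, back, m):
    # Simple matte blend standing in for the comp node.
    return front * m + back * (1.0 - m)

# Grain = (comp of originals) minus (comp of denoised versions),
# so the extracted grain already matches the matte per region.
grain = comp(fg_orig, bg_orig, matte) - comp(fg_den, bg_den, matte)

final = comp(fg_den, bg_den, matte)  # your actual comp work would go here
regrained = final + grain
```

Because both comps use the same matte, the FG grain only lands where the FG shows and the BG grain where the BG shows, which is why the mismatch goes away.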

The new grain tools in Silhouette work well.
Their Regrain node looks a lot like DasGrain to me.

For regraining areas where you can't add the original grain, I always use Crok Renoise.


In the quest for solving that puzzle in Flame, I've been experimenting with an expanded version of what @BrittCiampa described, along with @val's excellent flow from another thread. I've only had a chance to test it on a limited number of images myself, but I'd be happy to share an archive if someone wants to play with it.

The two additional steps I’m using:

  • To emulate the temporal sampling nature of DasGrain, I use 6 (you can vary that number) Mux nodes to sample different frames and mix them together as a broader grain sample than a single frame.
  • Then, using a 256x256 grain patch from that sample, I use crok_voronoi to UV-map the noise from the tile pattern to achieve a more uniform noise without visible tiling.

It's not a set-and-forget batch; it has 6 steps to adapt it to your image.
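As a rough illustration of the scatter idea only (this is NOT what crok_voronoi actually computes; it's just a naive random-offset resample of a small patch to break up visible tiling):

```python
import numpy as np

rng = np.random.default_rng(2)

# A 256x256 grain "patch" (made-up values standing in for the cropped sample).
patch = (rng.standard_normal((256, 256)) * 0.05).astype(np.float32)

# Fill a larger frame by pulling each cell from a random offset in the patch,
# so neighbouring cells don't repeat the same slice of grain.
H, W, cell = 512, 768, 64  # frame size chosen divisible by the cell size
out = np.empty((H, W), np.float32)
for y in range(0, H, cell):
    for x in range(0, W, cell):
        oy = int(rng.integers(0, 256 - cell))
        ox = int(rng.integers(0, 256 - cell))
        out[y:y + cell, x:x + cell] = patch[oy:oy + cell, ox:ox + cell]
```

The Voronoi approach does this with irregular cells and smooth UV transitions instead of a hard grid, which is what hides the tiling.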

This may all need more testing and vetting. But maybe it has legs.

It probably can't be turned into a Matchbox because of some of the steps. But maybe it could be turned into a pre-process / main-process pair (like IBK in Nuke).


Please share the archive!


Yepp, it's what I described. But I want to add some small touches to your setup.
The red dots are the id_Edgy matchbox shader. It finds any pixel with a value of zero and replaces it with something like 0.00001, or as close to 0 as possible. Dividing or multiplying by 0 is not a good thing in this case.
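The zero-replacement trick is easy to sketch outside Flame; here's a hedged NumPy equivalent of what, per the description above, id_Edgy does:

```python
import numpy as np

def guard_zeros(img, eps=1e-5):
    # Replace exact zeros with a tiny epsilon so later divide/multiply
    # operations stay finite (mimicking the id_Edgy behaviour described above).
    return np.where(img == 0.0, eps, img)

a = np.array([0.0, 0.25, 1.0], dtype=np.float32)
guarded = guard_zeros(a)
```

Non-zero pixels pass through untouched; only exact zeros are lifted.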
The yellow dots are CC nodes that are linked to each other; we will only need to alter the gamma setting to help blend between the original grain and the generated grain where the comp matte is half-transparent.

PS: How does the temporal sample mix part of your setup work?

PPS: Some shots work better when performing all of the math in log, not in linear.


Link to archive: Dropbox - - Simplify your life (updated on 5/1)

@Val to your questions:

It's 6 freeze frames which you can strategically pick across your clip (you have to edit the frame # in those Mux nodes; they default to 10 frames apart). Then I recombine them with comp nodes in SpotlightBlend mode, which seemed to be the most true to the original. Maybe there's a better way of doing it.
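As a crude stand-in for that temporal mix (I don't know SpotlightBlend's exact math, so this just averages the samples), the idea looks like:

```python
import numpy as np

rng = np.random.default_rng(3)

# Six grain frames, standing in for the freeze-framed Mux outputs
# picked ~10 frames apart across the clip (made-up values).
samples = [(rng.standard_normal((8, 8)) * 0.05).astype(np.float32)
           for _ in range(6)]

# Mixing frames flattens frame-specific structure into a broader grain
# sample than any single frame would give; here a plain average.
broad_sample = np.mean(samples, axis=0)
```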

Current batch is based on a LogC clip which is included in the archive. Haven’t tested in linear.

I did add the id_Edgy per your notes. If I decoded it right, you set the highlight color via HSL to (0, 0, 0.00001) in 16-bit fp, correct? I looked at the GLSL of the shader to see how it works.

PS: not sure why the forum put all that extra text into the link; I did not name that file 'simplify your life' :frowning: That seems to be some automated advertising the forum template adds?


I use a Timewarp node for this. Set the method to "nearest frame" and the keyframe repeat mode to "loop". Not sure it's better, but at least it's only one node to manage.

Yepp. The original matchbox was made to find black borders that can be left after stabilization or similar.


Ooh, that's smart to generate a 256-square patch to map the noise back onto. Good way of getting around the ghosting problem with the subtract-and-add technique of regraining. I'll have to check this setup out next week. Thanks for sharing!


Thanks for sharing… playing with this. Here is my feedback for making the 256x256 crop a bit clearer.


Thanks, will update that accordingly. Still very much learning all the nuances of Flame.


Also, you should lower the Softness values in the Difference Matte node to 0.01; then you don't need the color correct after it.


And remove all those "null" nodes, or replace them with Mux nodes or elbows. That is their purpose; using actual functionality nodes as pass-throughs is confusing.


I cleaned up the batch per the suggestions and replaced the link further up.


I had a little play with this setup in Rec709. I threw a Histo and a Blur node in after your UV-map scatter to match the size/softness of the grain in this one shot of mine. For my particular case it works really well, but out of the box the grain wasn't heavy enough and was too sharp, so adding those two nodes before the Multiply made it work a treat! I love being able to use a tiled, non-specific but accurate-to-the-shot grain now. Thank you! I don't understand the normalized grain node, but I don't have to understand it to use it!


I built this setup in LogC because of my test footage. If you're using Rec709, there is one thing that may help: the crok_voronoi nodes have a colourspace tag that I set to LogC to match the source footage, and that had an impact on the levels of the Voronoi pattern. If you change that colourspace tag to match your footage (i.e. Rec709), the result may be different/better. I haven't done that test yet.

I am going to have a good dig around in this setup today :crossed_fingers:
So much good geekery in this :hugs:


That setup is great - I had a similar setup using a baked STMap, but I'm definitely using the voronoi node from now on. One thing I usually do (if you guys are taking suggestions) is to split the noise RGB using the Separate node, then pipe each channel into a Grade node, including the front and the matte (inverted), so you have more control over the gain for each colour value, as highlights usually have less noise. It's a bit time-consuming but you can get very accurate noise profiles.
Thanks for sharing!
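For anyone curious what that per-channel grade amounts to numerically, here's a hedged NumPy sketch (hypothetical gain values and matte; the real setup uses Separate + Grade nodes):

```python
import numpy as np

rng = np.random.default_rng(4)
grain = (rng.standard_normal((4, 4, 3)) * 0.05).astype(np.float32)  # RGB noise
luma_matte = rng.random((4, 4, 1)).astype(np.float32)               # 1 = highlight

# Hypothetical per-channel gains; blue is often the noisiest channel.
gains = np.array([1.0, 0.8, 1.3], dtype=np.float32)

# Multiplying by the inverted matte pulls grain down in the highlights,
# matching the observation that bright areas carry less noise.
shaped = grain * gains * (1.0 - luma_matte)
```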