How the hell does denoising work?

Not only Flame-related, but I'm wondering how denoising really works.
I'd like to be able to build a handmade denoiser to have as much flexibility as possible, but I can't even come close.
For me, a median isn't the way to go, but I can't find anything better.

By the way, I find the Flame denoising very heavy and slow, but with quite good results most of the time (when available, I use Neat, which is for me by far the best…)

Thanks for any help.

We built a matchbox for denoising using NVIDIA white papers, I think. Can't remember. It was done years back. It's on Logik. My point, though, is we got nowhere near as good as Neat Video, gave in, and then Lewis made his container and that was that.

However, I have found the best results with Flame's built-in denoiser come from making the sample square really small, and from making a few sample areas for low, mid, and high luma. With digital cameras, small grain and a higher radius usually work.

Thx for the feedback

My question is more: how do you build one from scratch with the built-in tools? What's the theory behind that?
It is so much more elaborate than a blur or a median.
I can't figure out the idea behind such a tool.
(Not to mention that, as shots are moving, I can't use Compound or any temporal filter.)

Actually, I'd like to get my hands on the settings behind the scenes so I could understand how to remove different sizes of grain while keeping as much detail as possible…

Use the Denoise node. It's the closest you'll get to Neat. This is what I mean. I hope it's helpful, though I think I might be barking up the wrong tree.

Yes, I agree. I like the Denoise node very much.

My point is more: how does a denoiser work behind the scenes?
If I wanted to build a tool like that for any reason (let's pretend I wanted to remove dust from a floor, or that kind of task),
I'd love to be able to build my own tool for a particular job or situation.

And a more up-to-date one:

It's a bit beyond me though.

So, silly question… why would you want to build your own blur/denoise? There's lots out there that work. Are you looking for ways to do cleanup using homemade bump maps derived from blurs/differences to, as you stated, remove dust from a floor?

Typically, blurs + min or blurs + max blend modes, and sometimes differencing a blurred plate with its original, give a free bump map, which you can use to reveal through.
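
That blur-difference trick is simple enough to sketch in NumPy (the `box_blur`/`bump_map` names are just made up for the example, not any actual node):

```python
import numpy as np

def box_blur(img, radius=2):
    """Average each pixel with its (2*radius+1)^2 neighbourhood."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
    return out / (2 * radius + 1) ** 2

def bump_map(img, radius=2):
    """Plate minus its blur: flat areas go to ~0, while dust and fine detail
    pop out, ready to use as a matte to reveal through."""
    return img - box_blur(img, radius)
```

On a clean floor plate the difference sits near zero; a dust speck shows up as a bright spike in the bump map, which you can threshold into a matte.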


I think you really need to learn how to use NeatVideo properly in that case. It crushes Flame’s Denoise… just destroys it.

Of course you're correct; my point was best results with Flame tools. Neat Video is a winner and has been ever since Andy Dill showed me the path to enlightenment.

Thx, nice document. Let's be honest, it's way above my understanding, but interesting to read nonetheless. :slight_smile:

Thx Randy.
As much as possible, I try to understand how my tools work so I can get the best out of them.
I'm very interested in the background processes.
Even though I use, for example, the Sapphire Bandpass whenever it's available, I like to know exactly what it does to produce the result I'm looking for. The idea behind that is, first, that I know how to get what I want with the plugin, and second, even if I don't have it available, I have a workaround to achieve the exact same result (sometimes with a greater level of control).
Two very important things for me.

For denoising or dust cleanup, I have multiple ways to achieve these things, including using existing tools, either native in Flame or external like Matchboxes or plugins, but I just realised I didn't really know how they work and wanted to understand them better.

Basically, what would be the Batch schematic that would give me the result of the Flame Denoise tool?

Does that make sense?


Credit where it’s due @Alan is the original master of Neat Video in the flame community. Were it not for him, I don’t know if any of us would have it. :smiley:

Thank you @Alan and also @lewis for his spark container.

Well shit you’re a legend…I’ll stay out of your way and help you get there! Keep going!

Well, back in the days of hardware denoising, a lot of those units used recursive filtering.

I understand what happens in audio, where the output becomes an input. But that isn’t quite how it works in video processing.

I think this old-school recursive-filtering noise reduction works a bit like a running average, but somehow allows for moving images in its algorithm. But I think AI, and however Neat works, supplants this method these days.

Ok! Had this reply open for like a week but kept getting distracted… I read a ton about denoising around the time I made the Ls_Dollface matchbox and mainly learnt that it's really complicated and can't be broken down into a batch tree most of the time. People have been trying to figure it out since the dawn of digital images and it's still not perfect… here's some ways I vaguely remember, from oldest to newest:

Median - one of the oldest ideas beyond just a blur: sort nearby pixels by value and take the middle one, so extreme changes are ignored
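
As a rough sketch of that idea (a brute-force NumPy version, not how any real node implements it), a median filter looks like:

```python
import numpy as np

def median_denoise(img, radius=1):
    """Replace each pixel with the median of its (2*radius+1)^2 neighbourhood."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1])
    return out

# A flat grey patch with one hot "sparkle" pixel: the median ignores the
# outlier entirely, where a blur would smear it into its neighbours.
img = np.full((5, 5), 0.5)
img[2, 2] = 1.0
clean = median_denoise(img)
```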

Bilateral filtering/K-nearest neighbours (KNN) - look at nearby pixels and average the ones that are similar, ignoring ones that are different in value and so are probably from a different object - like a blur that doesn't blur across edges… this is kinda how the Dollface shader works but I had to cut some corners to keep it fast on the GPU
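
A minimal single-channel sketch of the bilateral idea in NumPy (the sigma parameter names are made up for the example; real implementations are much faster):

```python
import numpy as np

def bilateral_denoise(img, radius=2, sigma_space=1.5, sigma_value=0.1):
    """Weighted average of nearby pixels: the weight falls off with distance
    (sigma_space) and with difference in value (sigma_value), so flat areas
    get smoothed without blurring across edges."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_space**2))
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            block = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            value = np.exp(-((block - img[y, x]) ** 2) / (2 * sigma_value**2))
            weights = spatial * value
            out[y, x] = np.sum(weights * block) / np.sum(weights)
    return out
```

On a hard edge (half-black, half-white frame) the value term sends the weight of cross-edge pixels to roughly zero, so the edge survives while noise on either side is averaged away.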

Non-local means - instead of just looking for nearby pixels which are similar this looks at whole blocks of pixels, 7x7 or so, and tries to find blocks which are similar in order to average them together, so it can take advantage of detailed repeating patterns that bilateral struggles with
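
The same idea, sketched brute-force in NumPy with tiny sizes (the `patch`/`search`/`h` parameter names are hypothetical; real NLM implementations use a lot of tricks to make this tractable):

```python
import numpy as np

def nlm_denoise(img, patch=3, search=5, h=0.1):
    """Non-local means: for each pixel, average the pixels in a search window,
    weighted by how similar their surrounding patches look to this pixel's."""
    H, W = img.shape
    pr, sr = patch // 2, search // 2
    pad = np.pad(img, pr + sr, mode="edge")
    out = np.empty_like(img)
    for y in range(H):
        for x in range(W):
            py, px = y + pr + sr, x + pr + sr  # centre in padded coordinates
            ref = pad[py - pr:py + pr + 1, px - pr:px + pr + 1]
            num = den = 0.0
            for dy in range(-sr, sr + 1):
                for dx in range(-sr, sr + 1):
                    cand = pad[py + dy - pr:py + dy + pr + 1,
                               px + dx - pr:px + dx + pr + 1]
                    wgt = np.exp(-np.mean((ref - cand) ** 2) / h**2)
                    num += wgt * pad[py + dy, px + dx]
                    den += wgt
            out[y, x] = num / den
    return out
```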

BM3D - builds on the non-local means idea by stacking similar blocks and filtering them together; the video version (V-BM3D) adds time as a third dimension, looking for similar blocks in previous and next frames to average together

Recursive filtering - hardware video DNRs used to combine a few techniques, but the simplest was to average the previous output frame with the input, which works great for non-moving backgrounds but obviously leaves trails behind moving objects, just like Compound does… they would have a threshold for when to skip the filtering to preserve moving things, and the later ones had motion compensation, so they could figure out how each frame had moved and average it overlaid on its original position. You could maybe build this in Batch, using Pixel Spread's vector displace mode with a motion vector input to align each frame to the previous one. The earliest CG-specific denoisers did this using the motion vector pass, but it's not that effective on its own. At least one hardware unit had a median filter which worked across time as well, like the TemporalMedian node in Nuke, which works well for removing sparkles and rain
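
The simple non-motion-compensated version is easy to sketch in NumPy (the `strength`/`thresh` parameters are made up, standing in for the hardware knobs):

```python
import numpy as np

def recursive_dnr(frames, strength=0.7, thresh=0.15):
    """Blend each frame with the previous *output* frame; where the
    frame-to-frame difference is large (probably motion), skip the blend,
    like the threshold in old hardware DNRs."""
    out = [frames[0].astype(float)]
    for frame in frames[1:]:
        prev = out[-1]
        diff = np.abs(frame - prev)
        out.append(np.where(diff < thresh,
                            strength * prev + (1 - strength) * frame,
                            frame))
    return out
```

On a locked-off shot this quickly averages the grain away; without the threshold (or motion compensation), anything that moves leaves the Compound-style trails mentioned above.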

FFT denoising - these work totally differently and are way harder to understand and write… the idea is that noise looks different to image data in frequency terms, so you can split one from the other. The same way white noise in audio is constant across all frequencies, so looks like a flat line on a spectrum analyzer, noise in an image is similarly constant after you FFT it - whereas the image itself has a more lumpy and complex spectrum. Whenever you see something that needs you to sample a flat area to get a noise profile it's probably doing this - calculating the spectrum of the noise from that area, then subtracting that spectrum from the whole image before converting it back from the frequency domain. Neat Video even shows you a little graph of the noise spectrum. Wavelet denoising is the same idea but replaces the FFT with a wavelet transform
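
A crude spectral-subtraction sketch of the FFT idea in NumPy (for simplicity it assumes the sampled noise patch is the same size as the image; real tools estimate a smooth noise spectrum rather than subtracting one raw sample like this):

```python
import numpy as np

def fft_denoise(img, noise_patch):
    """Subtract the noise's magnitude spectrum from the image's spectrum,
    keep the image's phase, and transform back to the spatial domain."""
    F = np.fft.fft2(img)
    noise_mag = np.abs(np.fft.fft2(noise_patch))
    mag = np.maximum(np.abs(F) - noise_mag, 0.0)  # clamp: magnitude can't go negative
    return np.real(np.fft.ifft2(mag * np.exp(1j * np.angle(F))))
```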

Neural/ML methods - an obvious thing to do these days is train a neural net on images before and after adding noise… I think the OptiX one was trained like that. Interestingly, I haven't been that impressed with OptiX compared to Neat for denoising CG renders - the main advantage of OptiX is that it's crazy fast. Would like to try the Intel Open Image Denoise on real images some time, it seems more impressive in demo vids on CG renders

Alright sorry for the length :cold_face: Haven’t been keeping up with the latest ML papers so maybe Runway or similar already have something that’s on par with Neat Video? Would love to hear if anyone’s tried…


Thx a lot for the explanation. It's still a complex subject, but it's much clearer now with that overview of the different approaches.