Multiple Neat Video nodes killing performance

I have three Neat Video 6 nodes in my batch at various stages of the setup. I pre-render the first stage, which has one Neat node, out to EXR. It's relatively quick on my old hardware:

Stage 1 pre-render: 4 minutes

I then import those rendered EXR files into my batch with a Read File node, feed that into the next stage of my batch, and render that stage out to another EXR sequence:

Stage 2 pre-render: 7 minutes

I do the same import and connect the rendered EXR sequence for the third and final stage of my batch, again with one Neat node in it.

Stage 3 render: 12 minutes

So, my total render time for the batch is 23 minutes, plus the time and overhead of managing the pre-rendered EXR files, which is not insignificant and a real drag if you have several of these setups to process.

If I use the same setup but don't pre-render anything and just connect the nodes like normal (again, a total of 3 Neat nodes, none of them directly connected to each other)… Flame estimates 16.5 HOURS after rendering the first 7 frames, and the estimate keeps climbing with each frame rendered. Woof!

It looks like I'm only using about half of the VRAM on my GPU, so it shouldn't be that. I could almost understand if it were 20% slower. Hell, even 50% slower is maybe acceptable if you want unattended processing badly enough. But going from under 30 minutes to many HOURS? Something seems wrong.

Is this an OFX issue in Flame or something specific to Neat? I don't have any other OFX nodes to test with. Anyone using Neat with Nuke seeing this kind of performance issue with multiple Neat nodes in a script?

I've seen a couple of topics with a similar issue, but one was about color shift problems with multiple Neat nodes in a batch. I'm not seeing any problems with the output; it's just extremely slow when one Neat node depends on the results of another Neat node.

-Matt

Linux, Flame 2022.1, 128 GB RAM, M6000 (12 GB)

You are using software that is 4 major versions old, on an unsupported GPU. It would be interesting to see what the rest of your hardware/OS is, but it would presumably be just as old and unsupported.

I knew if I put out the Old Kit Bat Signal you’d show up. :slightly_smiling_face:

That’s fine. I get it. My shit’s old. Fair enough to take some ribbing for it.

But… Does that mean users with newer hardware and newer versions of Flame don’t have this problem? I’m not above being shamed into buying newer equipment if it can be shown that the age of my setup is causing the issue.

1 Like


How many frames are we talking about? Is Neat using the GPU? (I'm not familiar with 6, but in 5.x you can have it figure out the best performance settings by running some tests.) Also, curious: why do you need to apply Neat 3 times?

Neat Video uses between 3 and 10 frames for its render (5 by default, IIRC: the current frame, 2 before, and 2 after). So chaining nodes like this without caching ("On Render") active on those nodes means that the time required to process the setup is not the sum of the three separate times but a multiplication of them.
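To make the multiplication concrete, here's a minimal Python sketch (a model, not Flame or Neat code): each temporal node requests its whole frame window from the node below it, and without caching every one of those requests is recomputed from scratch.

```python
# Minimal model of chained temporal filters without caching.
# Window of 5 = current frame + 2 before + 2 after.

def frames_evaluated(depth, window=5):
    """Upstream frame evaluations triggered by rendering ONE output
    frame through `depth` chained temporal nodes, with no caching."""
    if depth == 0:
        return 1  # reading a source frame is a single evaluation
    # The top node requests `window` frames, and each of those
    # requests is re-rendered from scratch by the node below it.
    return window * frames_evaluated(depth - 1, window)

for depth in (1, 2, 3):
    print(depth, "chained nodes ->", frames_evaluated(depth),
          "evaluations per output frame")
# 1 -> 5, 2 -> 25, 3 -> 125
```

With a 5-frame window, three chained Neat nodes cost on the order of 125 upstream evaluations per output frame instead of 3, which is the right order of magnitude for an estimate jumping from minutes to hours.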

4 Likes

Correct. I can't remember the last time I even pre-rendered a Neat Video pass.

2 Likes

Also note that "FLME-59970: Rendering a Neat Video effect takes a longer time in Flame than in other applications" was fixed in Flame 2023.

3 Likes

And consider upgrading to Neat 6. Flame 2025 benefited from its optimizations more than any of the other apps I tested; it almost doubled the performance.

A test clip on 2025.2.2 with an A5000 GPU took 9m43s with Neat 5; with Neat 6 it was done in 5m53s.

5 Likes

Thank you all for the replies. A colleague with access to the same source material is going to test my batch on modern Mac hardware and let me know how it goes. I’ll report back when I know more.

1 Like

I've experienced this with any GPU-heavy processing; having lots of Paint nodes will do it too. However, why not cache each pre-render rather than writing an EXR and reading it back?

1 Like

To some degree they're equivalent. The EXR sequence has the benefit of being persistent and not subject to caching gotchas. It also gives you the ability to version (when using an open clip) and makes the renders shareable. Disk usage should be similar.

1 Like

Yes, I've been bitten by losing a cache on a node, so I use that feature sparingly. Also, in this case, I was running low on space on my framestore, so rendering EXRs to a NAS with more space let me keep going without having to take drastic measures like archiving and cleaning up after myself. :slight_smile:

1 Like

Is there a Python script that would render a Write File node and then import the rendered frames when it's done? I've got to think some clever person around here has already made a script like that. AE has a feature like this, and it can be nice in certain situations.
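Not a ready-made script, but here's a rough, untested sketch of how this might look with Flame's Python hooks. Treat the hook name (batch_render_ended), flame.import_clips(), the attribute access, and the path as assumptions to verify against the sample hooks shipped with your Flame version (e.g. batchHook.py):

```python
# Untested sketch; would live in /opt/Autodesk/shared/python/.
# ASSUMPTIONS to verify for your Flame version: the batch_render_ended
# hook (name/signature), flame.import_clips(), and batch.reels.

import flame  # only importable inside Flame's Python environment

# Hypothetical render location: point this at your Write File node's path.
RENDER_ROOT = "/mnt/nas/flame_renders"

def batch_render_ended(info, userData, *args):
    """Called by Flame when a Batch render finishes: re-import the
    rendered frames into the current batch's first schematic reel."""
    batch = flame.batch
    # Illustrative layout: one folder per batch under RENDER_ROOT.
    clip_dir = "{}/{}".format(RENDER_ROOT, batch.name.get_value())
    flame.import_clips(clip_dir, batch.reels[0])
```

That said, as the replies below point out, rendering to an open clip with 'Add To Workspace' enabled gets you this behavior without any scripting.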

I use "cache" in the sense that you render to the S+W, not the batch caching, which can be spotty.

Ah. Gotcha. I was doing exactly that in my earlier test runs with this setup, but then the disk space started going down fast, as it tends to do when you iterate over and over.

If you render an open clip, it will automatically be imported to the batch render shelf (if the option for that is enabled, which it is by default). Then you just drag it back into the node tree.

1 Like

I’ve never had the occasion to work with open clip so this will be excellent motivation to learn. Thanks for the pointer!

Great. I've set mine up and then dragged it into the user node bin, so when I need one of these, I just drag it out of the user bin and it's all set. Using <batch_name> and other tokens will automatically name the files correctly (assuming your batches are named after shots, etc.).

1 Like

Everyone has their own recipe for this; here's mine if it's of use.

To the questions above: of note are the 'Create Open Clip' and 'Add To Workspace' toggles. This is what will re-import the renders automatically.

And then building your pattern with tokens, and setting your media path.
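For example, a pattern along these lines keeps renders grouped per shot. The path is made up, and apart from <batch_name> mentioned above, the exact token names vary by Flame version, so check the token list in your Write File node:

```
/mnt/nas/renders/<batch_name>/v<version>/<batch_name>_v<version>.<frame>.exr
```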

1 Like