Hidden Performance Traps?

Sometimes seemingly innocuous comping decisions can lead to disproportionately increased render times.

One I found was using Region of Interest on a non-keyframed gmask in the gmask node. It has consistently doubled my render time compared with taking a mux freeze frame of the same gmask node and putting it through a comp node to simulate Region of Interest. Maybe a bug that needs reporting?

Anyone else have any similar tips and tricks for avoiding killer render times?

1 Like

I used to work on a very under-spec'd machine and I was very conscious of this.

It was mostly titles that I was refining, but I would religiously mux-freeze any text I had typed, as well as keyframe my motion blur on and off when movement had stopped.

1 Like

The biggest thing I see that causes poor performance is the overly aggressive use of Anti-Aliasing.

Unless you are building and rendering CG in Flame, it is very likely that you don’t need Anti-Aliasing at all. Rather, you need to actually understand the Surface Filtering algorithms and FILTER YOUR SHIT properly. I see it all the time… screen comps with Anti-Aliasing jacked all the way up when EWA + Linear, a 2-pixel blur, and crop softness set to 4 are all you need.

Plus, Anti-Aliasing softens every surface. Boo.

4 Likes

I am frequently guilty of reaching for AA when I don’t need to, so I will also add that any performance hit you take with AA is MULTIPLIED by your motion blur samples. So, 20 motion blur samples combined with 16 AA samples means that each frame is being calculated (by Action) 320 times. Murder.
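
To put a number on it, here’s a rough back-of-the-envelope cost model (just the sample counts from above multiplied out, not anything Flame-specific):

```python
# Rough cost model: Action evaluates the scene once per motion blur sample,
# and each of those evaluations is itself multiplied by the AA sample count.
motion_blur_samples = 20
aa_samples = 16

evaluations_per_frame = motion_blur_samples * aa_samples
print(evaluations_per_frame)  # 320 scene evaluations for a single frame

# Over a 100-frame shot that's 32,000 evaluations instead of the 2,000
# you'd get with motion blur alone.
shot_length = 100
print(shot_length * evaluations_per_frame)   # 32000
print(shot_length * motion_blur_samples)     # 2000
```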

Or a nice lunch break.

Check the motion blur settings as well, and double-check that you don’t have motion blur animated.

Also, check your other nodes in Batch for surface filtering problems: 2D Transform, Resize, etc.

Yeah, if you can substitute Action’s built-in motion blur with a Stingray_Mblur matchbox node inside Action, you should definitely do that.

Recently I had to change a comp from a coworker.
It was an Action with a very large grid and an almost infinite zoom-out to reveal 1000s of images, and it took 20 minutes to render. I changed the motion blur to Stingray and got it down to a 1-minute render time.
Of course I checked it to make sure it looked the same, but in this case it was worth the change. :smiley:

2 Likes

I regularly punch out an HD portion of a 4K frame to speed up a Denoise or the use of motion vectors.
Also, rendering at proxy resolution and adding the result back into the batch (rescaled back up) can work well.

Years ago, when Action had DVE (or was that Smoke, maybe Fire?), we had a job that had just finished on the box with renders that took about 20 minutes each. The job was archived, and at that time we upgraded the software to a new version. The new version converted all the DVEs to source and matte nodes. We had to revise the same job from the week before (the one with the 20-minute renders), and when we rendered the revisions the times went to 2.5 hours each. We freaked out, as the job needed to be a fast turnaround. Adsk made a special cut of the software that resolved the issue. The problem was that all the layers in all the comps had motion blur switched on, even though it looked switched off.

1 Like

Agree with @randy. Definitely, definitely, definitely use EWA anti-aliasing where possible. Most jobs don’t need more than 4-pixel EWA.

And I agree with @AndyG. It’s very useful. However, be careful doing this with Action. I once found a delicious bug which offset everything by a pixel or two when using a chain of Actions to do this.

In general, Action tends to offset by half a pixel. I’m pretty sure this is still the case, but I’ve just learned to accept it and move on.

Also, with bicubics, use the UVs to split out the area you need to stretch. I picked up an old-school op’s setup with the bicubic split into 100 segments to stretch the edge, which massively slowed rendering down.

And render out painted frames. And recursive ops. And mattes. And clean plates.

2 Likes

Batch Paint set to Sequence on a UHD+ clip can get murderous with a lot of strokes. If I’m just stabilizing a phone screen to paint out monitors, I now throw center/crop resizes before and after the paint and knock it down to 1080 or even 720 if I can get away with it. It helps a great deal!
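
For a sense of how much that resolution drop buys you (a quick back-of-the-envelope pixel count, same idea as the 4K-to-HD crop trick above, nothing Flame-specific):

```python
# Paint strokes, denoise and motion-vector work scale roughly with the
# number of pixels they have to touch per frame.
uhd   = 3840 * 2160   # 8,294,400 pixels
hd    = 1920 * 1080   # 2,073,600 pixels
sd720 = 1280 * 720    #   921,600 pixels

print(f"UHD -> 1080: {uhd / hd:.0f}x fewer pixels per frame")     # 4x
print(f"UHD -> 720:  {uhd / sd720:.0f}x fewer pixels per frame")  # 9x
```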

“Delicious bug”
Love it…

Over the past few versions of Flame we’ve come across ADPClientService taking up 80-100% CPU on machines (offline Macs) and slowing down performance within the program greatly. In the past, going into the Library subfolder jungle and deleting the ADPClient file had been the fix.

I noticed this file pop up AGAIN a few weeks ago after an update and have not been able to delete it this time. I got on the horn with Autodesk support and they said the new fix is to untick the ‘I Agree’ option under Help > About DAP… a very simple fix, and I’ve noticed such a performance gain (my Flame was so sluggish while this was toggled on; I was getting the beachball after pretty much every move I made within the software).

Not sure if anyone else has run into this issue. The Autodesk rep said the fact that the production machine is offline only made it worse, because the background service was trying to phone home when it couldn’t connect and was getting hung up.

Such a simple fix for such a goofy thing. Flame should really prompt the user to agree/disagree to sending usage data when they first install and open a new version… like most other NLEs…
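
If you want to confirm ADPClientService is actually the thing chewing CPU before flipping the About DAP toggle, a quick look at the process list is enough. A minimal sketch (assuming macOS or Linux, and that the service shows up under that name in `ps` output):

```python
import subprocess

# Grab the process list and report any ADPClientService entries,
# along with the %CPU column from `ps aux`.
ps_output = subprocess.run(
    ["ps", "aux"], capture_output=True, text=True, check=True
).stdout

for line in ps_output.splitlines():
    if "ADPClientService" in line:
        fields = line.split()
        user, pid, cpu = fields[0], fields[1], fields[2]
        print(f"ADPClientService running as {user}, pid {pid}, using {cpu}% CPU")
```

Activity Monitor or top shows the same thing, of course; this is just handy if you’re checking a handful of offline machines over SSH.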

This was reported during the beta cycle of 2020 a while back, although we saw it on Linux.

Hi @nick_devivo,

Thanks for sharing.

The situation you described occurs when the ADPClientService is running on a workstation with no Internet connection. Under normal conditions, the ADPClientService is very light and does not impact Flame’s performance.

You said:
Flame should really prompt the user to agree/disagree to sending usage data when they first install.

That is the case. Flame prompts the user to agree/disagree to sending usage data when the application is installed for the first time on a workstation, and it carries this setting over to subsequent Flame installations.

As you said, users can disable sending data at any time in Help - About DAP.

As a side note, the usage data collected is minimal and helps us understand our users. This data significantly helps us make a better product.

Please, let me know if you have any questions.

Regards,
Yann

2 Likes

@YannLaforest thanks for the reply. I have never been prompted to agree/disagree to usage data, but if the setting carries over to subsequent versions of Flame, it’s possible this was agreed to by someone on said machine in the past (has to be the case, right…?). I will keep an eye out for this in the future. Thank you! Our machines are always left offline due to TPN requirements.

Hi @nick_devivo,

Yes, it has to be the case.

If the prompt does not show when installing Flame on a new workstation, please let me know and I will follow up on it.

Best,
Yann

1 Like