ASCII batches taking a long time to load

Hello. We often share ASCII batches between remote artists rather than archives, which bloat and still carry media when you don’t want them to. This has been very efficient for us, and normally when I load an artist’s ASCII batch, it finds the same media that I have locally and everything is fine (or if not, I’ll manually connect a clip).

However, with a few recent projects, opening a small ASCII batch can take 4-5 minutes. The batches are tiny (maybe 20 nodes, under 10 MB). The only workaround we currently have is for the artist to remove the media clips in the batch and replace them with a MUX of the same name, then I manually reconnect.

Has anyone else seen this? We’re on 2024.1.1, and this has happened on several projects. I don’t recall it happening on previous versions. Thanks in advance.

@Majush

Just loaded my first freelancer ASCII batch in 2024.2.1 and it took well over a minute. Very simple setup, maybe 12 nodes.

@fredwarren, curious if you’ve heard anything else about this?

Never heard about it. Please open a case with the support team so they can investigate it.

We sometimes have those setups trying to wire material across from the original machine :thinking:

I’ll open a support case too.

So in 2023, a batch would generally come in unlinked. Our typical freelance workflow is to export plates to a /shots/ folder so it’s a self-contained instance on the network.

What I’m seeing in 2024 is that it’s linking to footage when I load it… BUT there’s no File Location on the re-linked footage? The whole field is blank, even in the Conform tab. So is it linking to my framestore, or to that exported plate in /shots/? When you watch the info tab while a batch loads, you can see it stall on each piece of media: “initializing: xxx010, xxx010_plate2, etc…”
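One thing that has helped me sanity-check this before loading: since the ASCII setup is plain text, you can scan it for the paths and hosts it references and see what Flame is going to try to relink. A rough Python sketch, nothing official; the setup filename and the path prefixes are just examples for our layout, so adjust them for yours:

```python
#!/usr/bin/env python3
# Rough sketch: list the file paths and bare IPs an ASCII .batch setup
# references, so you can see what it will try to relink before loading it.
# The setup filename and path prefixes are examples for our layout only.
import re
import sys

setup_path = sys.argv[1] if len(sys.argv) > 1 else "xxx010_comp.batch"

path_pattern = re.compile(r"(/(?:Volumes|shots|mnt|opt)[^\s\"<>]*)")   # adjust prefixes to your mounts
host_pattern = re.compile(r"\b(\d{1,3}(?:\.\d{1,3}){3})\b")            # bare IPv4 addresses

paths, hosts = set(), set()
with open(setup_path, errors="ignore") as setup:
    for line in setup:
        paths.update(path_pattern.findall(line))
        hosts.update(host_pattern.findall(line))

print("referenced paths:")
for p in sorted(paths):
    print("  ", p)
print("referenced hosts/IPs:")
for h in sorted(hosts):
    print("  ", h)
```

If anything in that list points at a mount or an IP that doesn’t exist on your side, that’s a good candidate for what it’s stalling on.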

I’ve had a similar experience loading batches (very rarely), and the problem has usually been when the batch tries to initialize a clip and has trouble finding or opening the path. The terminal usually shows some info.

Aaron, I’m jammed on a deadline, so if you open the ticket, please feel free to reference me or add me to it. I’d like to try to figure this thing out.

Check the Flame app and shell logs; they might show what it’s looking for. In Flame 2017-something there was a horrible bug like that which would cause our batch setups to take 45 minutes to open. Worst job experience of my life.
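If it helps, here’s roughly how I watch it live: tail the newest log for init/wiretap lines while the batch loads. A quick Python sketch, assuming the default log location (/opt/Autodesk/log on our machines); the keywords are just guesses at what to grep for:

```python
#!/usr/bin/env python3
# Rough sketch: watch the newest Flame log for lines that mention media
# initialization or wiretap lookups while a batch setup loads.
# The log directory is an assumption (default install); adjust for yours.
import glob
import os
import time

LOG_DIR = "/opt/Autodesk/log"                      # assumption: default install
KEYWORDS = ("initializing", "wiretap", "gateway", "timeout")

logs = glob.glob(os.path.join(LOG_DIR, "*.log"))
if not logs:
    raise SystemExit(f"no logs found under {LOG_DIR}")
newest = max(logs, key=os.path.getmtime)
print("watching", newest)

with open(newest, errors="ignore") as log:
    log.seek(0, os.SEEK_END)                       # start at the end, like tail -f
    while True:
        line = log.readline()
        if not line:
            time.sleep(0.5)
            continue
        if any(k in line.lower() for k in KEYWORDS):
            print(line.rstrip())
```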

“Request can’t be completed. Sorry, something went wrong while processing your request.” So I need to file a support ticket about the support ticket system first.

I tried and got the same.

Hey GPM, did anything ever come of this? I had two jobs that went fine; now I’m running into this again and remembered I never got the ticket system to work.

Will try and file another this AM. Edit: filed with support

Edit: they’re blaming it on OSX network settings. But I’m pushing back since 2023 worked fine. And I’ve confirmed this behavior with freelancers on their personal machines.

Edit: The Autodesk tech was at a loss. We did a ton of network troubleshooting and nothing worked. Wiretap is looking for 192.168.100.48/gateway. It doesn’t exist.
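For what it’s worth, one dead address could explain these load times: if every clip triggers a lookup toward a host that never answers, each one can eat a full connection timeout before anything moves on, and that adds up fast across a batch. Here’s a quick way to see what a single attempt against that phantom address costs on your machine; the port below is just a placeholder for illustration, not a documented Wiretap port:

```python
#!/usr/bin/env python3
# Rough sketch: time a single TCP connect attempt to the phantom address
# from our logs, first with the OS default timeout, then with a short cap.
# The port is a placeholder for illustration, not a documented Wiretap port.
import socket
import time

HOST, PORT = "192.168.100.48", 7000   # address from the logs; port is made up

def time_connect(timeout=None):
    start = time.monotonic()
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if timeout is not None:
        sock.settimeout(timeout)
    try:
        sock.connect((HOST, PORT))
        print(f"timeout={timeout}: connected (unexpected)")
    except OSError as err:
        elapsed = time.monotonic() - start
        print(f"timeout={timeout}: gave up after {elapsed:.1f}s ({err})")
    finally:
        sock.close()

time_connect()    # OS default: can run to a minute or more if the address is routed but silent
time_connect(5)   # capped: fails fast
```

Multiply whatever the first number is by the number of clips in the setup and the 4-5 minute loads stop being mysterious.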

“Wiretap is looking for 192.168.100.48/gateway. It doesn’t exist.”

This became a problem when I moved archives from OSX to Rocky Linux. Same Flame versions, but different networks and settings from home to office. Constant lags and freezes. I showed this to ADSK support and they couldn’t figure out what was going on either.


Today’s official reply: “If you can make it work, more power to you, but if it breaks, you’re on your own.”

It’s being blamed on Dropbox, not 2024. They mention “we have had cases for this workflow,” which discounts the fact that 2022 and 2023 worked fine in this environment.

Case 21947654

Yeah the Dropbox theory does not hold up. I don’t use Dropbox, and I experienced this problem painfully.


I tried logging a support case, but the site was down. :woman_shrugging:t3:


Ok, did a 1:1 test: remote freelancer, 2023.3.1 vs 2024.2.1, brand-new batch, same media off Dropbox, same target save location.

It’s 1 second vs 60 seconds for a 4-node comp.

Sending logs to support.

Any idea whether a newer version of Flame might not have this killer bug? I’m not even familiar with what the newest is.

2024.2.1 is the latest on Mac

Support ended my case this morning as an “unsupported workflow.”