Is 2026 terribly slow because of the new database structure, or is something else going on?

Our projects are usually a minimum of 300 shots, and each shot can have tens of iterations. One of our current projects is over 700 shots. I’m just not going to take the risk of moving to 2026 and beyond; we have already been bitten by that more than once.

3 Likes

Being cautious, and seeing all the issues come up here, I’m not touching 2026 anytime soon. 2025 works fine and does not cause me problems.

1 Like

So we’re only marginally closer to an answer then?

For small projects things seem within spec, but the gremlins are still out there, and support hasn’t been able to reproduce them, correct?

Comes back to the question: does the dev team have a test project with 100+ shots, iterations, and other history built up that can exhibit scaling problems in the code/IO layer? And are new versions, especially ones that make considerable changes to the architecture, validated on a project of that scale, to avoid having to unearth these problems in field tests?

I’ve provided everything over the years, including for this specific issue.

There is a hilarious video of one of the guys from MTL giving a presentation at some conference; man, I wish I still had the link. He is up on stage, boasting how they run 10 million (OK, I exaggerated the number, but it was a ludicrous amount) automated tests every day to catch bugs. Somehow those automations forgot to include App Launch, Project Launch, and anything I’ve ever sent, including our template batch setup, which over the years has broken just about every function of Batch and been at the center of, I’d say, 30 different support cases.

I’ve uploaded tens of terabytes of Flame Archives, Batch Groups, and yes, Clip Library/S+W databases to them.

1 Like

Yes, your situation and effort have been well documented :slight_smile: I’m also trying to look beyond that.

Running automated tests is totally standard, and those numbers are in line with what anyone with large-app experience would expect. However, many of these tests exercise individual functions, and re-running them constantly is meant to avoid regressions. So that is table stakes.

If these weren’t in place, you would be filing hundreds of tickets in each release for regressions.

You can construct very specific automated tests to check whether critical functions scale without performance degradation. But that assumes you know where to look. As with anything insidious like this, we have to assume it wasn’t expected to be a problem, or it would have been built differently, or at least tested for.
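As a sketch, here is roughly what such a scaling check could look like. Everything here is hypothetical (Flame doesn’t expose a scriptable save entry point like this, as far as I know); the point is only the shape of the test: time the same operation at several project sizes and flag super-linear growth.

```python
import time

def time_save(project, save_fn):
    """Time a single save of a synthetic test project (hypothetical API)."""
    start = time.perf_counter()
    save_fn(project)
    return time.perf_counter() - start

def check_save_scaling(make_project, save_fn, sizes=(100, 300, 700)):
    """Flag a regression if save time grows much faster than the shot count."""
    timings = [(n, time_save(make_project(n), save_fn)) for n in sizes]
    for (n1, t1), (n2, t2) in zip(timings, timings[1:]):
        # Allow 2x headroom over linear growth before failing the check.
        assert t2 <= t1 * (n2 / n1) * 2, (
            f"save time went from {t1:.2f}s at {n1} shots "
            f"to {t2:.2f}s at {n2} shots"
        )
    return timings
```

A check like this only catches what it measures, which is exactly the earlier point: you have to already suspect that saves degrade with shot count to write it.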

This is why an automated test suite needs to be augmented with real-life worst-case scenarios. Unfortunately, they’re much more difficult to automate. That doesn’t negate the need for them.

The only other avenue, if you cannot maintain a real-world worst-case scenario, is to put copious instrumentation into the code that measures critical performance thresholds and logs them, so that at a minimum, if users report problems, you have data to validate that with numbers from their specific use cases. That may not be granular enough to pinpoint the root cause, but it can at least validate the user’s experience and indicate that a smoking gun is somewhere in the theater. It might even tell you the section. Then you still have to come up with a plan to find it.
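A minimal sketch of that kind of instrumentation, in Python for illustration (the operation names and thresholds here are made up, not anything Flame actually logs): a context manager that times a critical section and warns when it blows its budget.

```python
import json
import logging
import time
from contextlib import contextmanager

log = logging.getLogger("perf")

@contextmanager
def timed(operation, threshold_s, **context):
    """Log how long a critical section took; warn when it exceeds its budget."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        record = {"op": operation, "elapsed_s": round(elapsed, 3), **context}
        # Structured records like this can be aggregated across user reports.
        if elapsed > threshold_s:
            log.warning("slow operation: %s", json.dumps(record))
        else:
            log.info("%s", json.dumps(record))

# Hypothetical usage around a suspect code path:
# with timed("project_save", threshold_s=5.0, shots=700):
#     save_project(current_project)
```

With records like these shipped in support logs, a "my autosave takes a minute" report comes with numbers attached instead of being unreproducible.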

But at least you can put users at ease that you have heard them, instead of getting stuck in a ‘have you filed a ticket’ routine.

That’s why I keep asking whether such a real-world test setup is part of the release validation.

1 Like

The ultimate pursuit of product craft is like carefully raising one’s own children: polishing a set of products over time. The vast majority of professional managers find that difficult to achieve; only a few founders have that kind of persistence and pursuit.

FWIW, my users are also complaining about long autosaves and general wonkiness with 2026.

So far, not so happy, all things considered. Let’s hope 2027 is better… or something.

Has anyone tried 2026.2.2? Among the ‘fixed issues’:

“FLME-70870: Saving and loading large projects takes more time than expected.”

-rj

I have to say that a mandated regular release schedule has always confounded me. Sure, it’s good to work to a deadline and all, but in software terms I often wonder if companies release something knowing there will be a whole lot of bugs they haven’t gotten to yet, with the attitude that any bugs that pop up, or that they haven’t sorted, can come in a point update. I’m not saying the Flame dev team has this attitude at all, but I bet there are times when they have thought to themselves: if only we had another several months to tidy up this release.

As someone who buys software, I’d much prefer to wait for a release that is exciting, polished and that just works. In fact, I would suggest it is even more exciting not knowing when it will drop.

Unlike hardware, you’re not going to delay purchasing software on a subscription model until the next version is released. You will use whatever the current version is knowing you’ll upgrade to something even better when it is ready. People are more likely to buy your software when the releases are incredible and your reliability is leading.

2 Likes

It corrupted a whole project, including archives (WTF). Stuff is slow and loads forever.

I now have to talk to an AI to get support.

This is extremely unacceptable on all accounts, and it is making us look bad in front of our clients.

How can something like this be released? Two whole point updates in, multiple hotfixes and service packs, and we are still dealing with this mess?

Remember me going completely bonkers hunting down MY NFS/network setup, buying a bare-metal NAS to set up a sandbox to figure out what was happening before New Year’s? Yeah, now we know it wasn’t me; it was Autodesk not testing their software.

Regardless, what can we do to make this better? Do we need to throw more money at ADSK? Should we all pool together and build a new open-source, crowd-funded finishing software? Throw money at Mistika?

1 Like

We’ve tried to transition to 2026.x twice, and had to rebuild both projects back in 2025. Not gonna happen again. It’s like a turkey peck, right in the balls.

2 Likes

Yeah, learned my lesson. I’ll be staying on 2025 and focusing on transitioning to a different finishing software down the line. I’ve had enough of this.

For example, right now I’ve got a pretty simple commercial project that takes ~6 minutes to open and over 1 minute (!!) to autosave.

2 Likes

Haven’t they gone the same way as Assimilate Scratch, focusing on immersive and live software rather than finishing tools? Workflows is an interesting tool by SGO, but it also looks similar to Scratch’s media export tools.

I thought Resolve had killed off most options simply on price?

Compared to Flame, Mistika probably doesn’t have much of an advantage anymore. Mistika Ultima/Boutique still doesn’t have any native AI capabilities, and Mistika doesn’t have a powerful 2D/3D node-based compositing system. Its OFX compatibility is very poor, and it doesn’t have a strong ecosystem. At present, DCP packaging is the highlight, while Mistika Ultima’s multi-GPU support is slightly better than Flame’s. So we still have greater hope for Flame. We just hope Flame won’t disappoint everyone in the future. There is no need for Flame to become a second DaVinci, but Flame does need to learn from DaVinci’s strengths, as well as from Baselight’s powerful color grading tools.

1 Like

Yep.

Ah, too bad. Time for a new competitor in the finishing market… Sorry, but as much as I love Resolve, it doesn’t cut it.

@FriendsFromBorisFX can’t you buy Mistika and make it great? You are missing a finishing app in your portfolio lol.

1 Like

Just adding this here, as it seems to help with save and load times (got it from support):

The project metadata storage (project path, or whatever it’s called) should also run DLmpd, as it optimizes these NFS writes (it groups copy and link operations).

Now, mind you, this still isn’t as fast as local by any stretch of the imagination, but it helps. So if you have centralized project folders on a NAS over NFS, that server should run the project server, or basically be a Linux Flame, and not just some “random NAS”.

Note that I started with this but ran into other issues in 2026 and 2026.1. It seems OK to do in 2026.2.2 though, especially on larger projects… I wish the documentation would say so.

So we have to run a file server on top of our file server. Sounds reliable. Strangely, no other part of our studio needs to run its own file-serving application on top of NFS, which, wildly enough, serves files to every other application known to man just absolutely fine and fast.

Additionally, in my previous escapades working with dev to fix this issue, installing DLmpd on our server made ZERO difference to the fuckery.

1 Like

I also thought the whole point of separating project metadata from the project host (database) server was that it was better.

Apparently not. Then I don’t understand why we even have the option to move it to arbitrary NFS locations.

I first thought, “OK, cool, so I can store the project metadata inside my actual project directory on my NAS; that makes archiving easier, as everything related to the project is in one place.” But then I found out you have to use “sync” NFS exports, so that was a bust.
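For reference, the sync/async distinction lives in the server’s /etc/exports. The paths and subnet below are made up; the option semantics are standard Linux NFS:

```
# /etc/exports on the NAS (illustrative paths)
#
# 'sync' makes the server commit each write to disk before replying:
# safe for a metadata database, but slow across many small files.
/projects/flame_metadata  10.0.0.0/24(rw,sync,no_root_squash)
#
# 'async' replies before data hits disk: much faster, but a server
# crash can silently lose or corrupt writes, which is presumably why
# Flame insists on 'sync' for project metadata.
/projects/media           10.0.0.0/24(rw,async)
```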

So… I wonder, what’s the point of this very prominent option?

Especially considering that to reconstruct a Flame project you need:

A) the project server database

B) The project metadata folder

C) the framestore/MediaCache folder

That’s super easy to manage and not a problem at all. Snapshot all three at the same time, save them somewhere, and pray you never have to roll back.
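For what it’s worth, the “snapshot all three at once” step can at least be scripted. A rough sketch (all paths are hypothetical, and a plain copy like this is only safe while Flame is shut down; a real deployment would want filesystem-level snapshots):

```python
import shutil
import time
from pathlib import Path

# Hypothetical locations for the three pieces; adjust to your actual setup.
PIECES = {
    "project_db": Path("/opt/Autodesk/project_server"),  # A) project server database
    "metadata":   Path("/mnt/nas/flame_metadata"),       # B) project metadata folder
    "framestore": Path("/mnt/framestore/MediaCache"),    # C) framestore / media cache
}

def snapshot(dest_root: Path) -> Path:
    """Copy all three pieces into one timestamped folder so they stay in sync."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = dest_root / f"flame-snapshot-{stamp}"
    for name, src in PIECES.items():
        if src.exists():
            shutil.copytree(src, dest / name)
    return dest
```

The whole point is that the three pieces are only useful as a matched set, so they get copied under one timestamp rather than backed up independently.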

The design and testing are all done without consideration of scale or studio-wide deployment. Everything is still oriented towards data islands. That epoch is unacceptable today. There is no benefit to the new paradigm when operational stability and flexibility are sacrificed.

1 Like