Flame 2026 + AI: real production breakthrough… or still too unstable for high-end workflows?

I’ve just installed Flame 2026 for a production and started testing the ML tools alongside my usual workflow.

I mainly work in advertising, with strong constraints around clean-up, compositing, and especially shot-to-shot consistency.

So far, my impression is quite mixed:

  • ML tools are impressive on isolated use cases

  • But when it comes to shot-to-shot consistency or complex shots, I’m still hesitant to rely on them in production

I still have the feeling that:
→ results can look great on a single shot
→ but aren’t always stable enough to guarantee perfect continuity across a full sequence

I recently discussed this with a Senior Product Owner at Autodesk, who confirmed that stability and consistency between shots are still key challenges, even though things are evolving quickly.

So I’d be really interested in your feedback:

  • Is anyone here already using Flame’s ML tools in real production?

  • On what kind of shots do you trust them (or not at all)?

  • Have you managed to maintain solid shot-to-shot consistency using these tools?

  • Has anyone already found a way to connect an LLM directly to Flame?

I’m also wondering if the next step shouldn’t be a true sequence-level intelligence, or even an integrated LLM, rather than shot-by-shot tools.

If some interesting insights come out of this, I’d be glad to share parts of the discussion as well; it’s always useful to bring in a broader range of perspectives.

Thanks for your time,
Alexandre Rouanet
https://linecraft.fr/en

These are issues we’re all working through.

I would suggest you watch the first episode of the fxphd ML102 course, where John Montgomery gives a very good overview of ML tools in VFX. Other sections of that course also give more comprehensive answers to your questions. Doug Hogan has also spoken about this extensively in classes on fxphd, on Discord, and on ActionVFX. These are behind paywalls, but worth the investment in my opinion.

Underlying all neural networks is a fair amount of probability and statistics. An AI doesn’t know the answer to your question; it estimates, with something like 95% probability, that the answer it gives you is the right one. This is in direct conflict with your premise that you need 100% predictable and repeatable results, which can only be achieved with classic algorithms, not neural networks. Your expectations should be adjusted accordingly.

There are different classes of AI models, some generative and some discriminative, and they fare differently in terms of consistency. Some of it depends on how you engage the model. A standard prompt interface will usually use a random seed, giving you a variety of answers in the hope that one is close. With a more controlled interface like ComfyUI you can change some of that behavior, for example by pinning the seed, as in the sketch below.
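To make the seed point concrete, here is a minimal sketch, assuming a PyTorch-style stochastic model; `run_model` is a stand-in for whatever network actually runs behind the interface, not a Flame or ComfyUI API:

```python
import torch

# Stand-in for the actual network (e.g. a diffusion sampler behind a
# ComfyUI node). Any stochastic model behaves the same way.
def run_model(latents: torch.Tensor) -> torch.Tensor:
    return latents * 0.5 + torch.randn_like(latents) * 0.1

# Unseeded: every run samples fresh noise, so no two runs match.
out_a = run_model(torch.randn(1, 4, 64, 64))
out_b = run_model(torch.randn(1, 4, 64, 64))
print(torch.allclose(out_a, out_b))  # False: not repeatable

# Seeded: pinning the generators makes the run repeatable, which is
# the knob ComfyUI-style interfaces expose per node.
def run_seeded(seed: int) -> torch.Tensor:
    gen = torch.Generator().manual_seed(seed)
    latents = torch.randn(1, 4, 64, 64, generator=gen)
    torch.manual_seed(seed)  # also pins the noise drawn inside the model
    return run_model(latents)

print(torch.allclose(run_seeded(42), run_seeded(42)))  # True: repeatable
```

The same seed with the same model and inputs reproduces the same result; change any of the three and you are sampling again.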

Some models are architecturally designed for image processing, not video processing. They were never meant to have multi-frame stability or temporal consistency, and you can measure that directly, as the sketch below shows.
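One way to see it concretely: run a single-image model frame by frame and measure how much its output flickers between consecutive frames. A rough numpy sketch, with frame loading left to whatever I/O your pipeline uses:

```python
import numpy as np

def temporal_flicker(frames: list[np.ndarray]) -> float:
    """Mean absolute difference between consecutive frames.

    A temporally-aware model should score close to the plate's own
    motion; a per-frame image model often scores much higher because
    edges jump around even in static regions.
    """
    diffs = [np.abs(b.astype(np.float32) - a.astype(np.float32)).mean()
             for a, b in zip(frames, frames[1:])]
    return float(np.mean(diffs))

# frames = [load_frame(p) for p in sorted(matte_dir.glob('*.png'))]  # your I/O
# print(temporal_flicker(frames))
```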

So the answer to all of this, in my mind, is: AI is not a ‘lift and shift’ situation, where the new tools replace old tools but work the same, just faster. AI requires us to rethink our workflows so we can harness it to work faster while maintaining the quality our clients expect. These are new ways of working, and we’re all still figuring it out.

Regarding integration into Flame - that’s an open question many are asking. It’s unrealistic to expect ADSK to provide more than some fundamental tools inside Flame. Their development cycles are fundamentally too slow relative to how fast AI is evolving. That means most meaningful AI tools will live outside of Flame and will have to interface with it. There have been attempts, documented in this forum, to use OFX to automate such interfaces. It’s an evolving landscape, and I expect it to get better. But I believe the answers will be community driven rather than come from ADSK.

For context, the first conversations about how VFX artists can harness AI tools more seriously only surfaced at scale less than a year ago, with a few pioneering approaches going back a bit further. Since then things have evolved at lightning speed, and AI has become a regular topic at Flame User Groups (in Berlin it probably accounted for at least 50% of the presentations and discussion).

PS: ‘LLM’ refers specifically to what ChatGPT or Claude uses for text-based answers. The AI models used in VFX have different architectures and are not Large Language Models.


Thanks a lot for your answer, really interesting.

What you said about probability vs determinism really puts into words what I’ve been feeling in production lately. You can tell ML is very powerful, but as soon as you’re dealing with longer sequences and tighter constraints, it becomes much harder to control.

I’ll definitely take the time to check out the resources you mentioned, especially the ML102 course on fxphd and the talks by Doug Hogan. Sounds like there’s a lot of valuable insight there.

So I’m realizing that my main question right now is really:
how are you actually integrating these tools into long, demanding workflows?

At the moment, I tend to use them in a more ad hoc way:
→ to unblock a shot
→ test a direction
→ or speed up specific tasks

But I’m still struggling to see how to use them reliably across a full sequence without losing control or consistency.

So I’d really love to hear some concrete feedback:

  • Do you have methods to “frame” or control ML within a pipeline (versions, seeds, intermediate passes, etc.)?

  • At what stage of the workflow do you integrate it (early, look dev, or finishing)?

  • Do you combine ML with more deterministic tools afterward to secure the result?

  • And most importantly: how do you handle shot-to-shot consistency across longer sequences?

  • And if most of this lives outside of Flame, how are you actually handling that integration within your pipelines?

I feel like we’re all still figuring these workflows out, so any real-world feedback would be super valuable.

I think the answer is different for different workflows.

Generally speaking, AI is not able to handle a shot in its entirety. Rather, it can help speed up specific steps in an existing workflow.

That was the big breakthrough last year - getting away from prompting for the final result, and instead applying AI to specific elements or processes within existing VFX pipelines. It’s also one reason why Flame artists will likely benefit rather than suffer from AI for the time being.

Simple example: The new AutoMatte tool does a very respectable job at masking elements. But it is not 100%, and it will do great on one shot and fail on another shot. So you cannot rely on it as the only way to create precise production mattes.

However, it is already very suitable for creating garbage mattes. It can also frequently get 95% of an object masked, leaving you to augment the remaining 5% with a gMask - in essence a matte union, as in the sketch below. The end result is still a lot faster than if you had masked the whole object the old way.
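In pipeline terms, that 95/5 split is just a matte combine. A minimal numpy sketch of the math (this is the operation, not a Flame API; how you load the arrays is up to your pipeline):

```python
import numpy as np

def union_mattes(ml_matte: np.ndarray, patch_matte: np.ndarray) -> np.ndarray:
    """Union an ML matte with a manual gMask-style patch.

    Both are float arrays in [0, 1]. max() is the standard matte union:
    the patch wins wherever it covers the 5% the model missed, and the
    ML matte passes through everywhere else.
    """
    return np.maximum(ml_matte, patch_matte)

def subtract_matte(matte: np.ndarray, holdout: np.ndarray) -> np.ndarray:
    """The inverse case: knock a false-positive region back out."""
    return np.clip(matte - holdout, 0.0, 1.0)
```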

Same for cleanup tasks. You may find that an in-painting model does well overall but fails in one particular region, which then has to be done manually. Or an initial AI pass may catch 50% of the cleanup, at which point other tools that failed on the original plate can suddenly succeed on what remains.

It’s all about finding these combinations, which are specific to workflows. I’ve had good results with object removal in many cases. AI can also be very good at generating elements you can then composite into a shot. It can also be used to control existing assets for your shot. Here are two examples I documented recently.

One important caveat: if your schedule now assumes that AI will make you faster, you have to keep in mind that AI can fail spectacularly in unexpected ways. So you need to leave enough margin in your schedule to account for AI not delivering and you having to do it manually.

  1. Yes, there are ways to exert more control. There are limits, but more than just prompts.
  2. Wherever it makes sense. Great for look dev, but also important for finishing.
  3. Yes, always combine ML with classic tools.
  4. Only apply AI if the answers to 1) - 3) provide a consistent result. If AI is unable to deliver a consistent result for shot 035, use only classic tools for that shot, and AI for the others. You are still in charge as the artist.
  5. Round-trip EXR or PNG sequences through external tools, just like other classic external tools (a minimal sketch follows below). Over time OFX integration may appear, but it’s still experimental.
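A minimal sketch of that round trip, assuming a command-line tool; `ml_denoise` and the paths are placeholders, not a real CLI:

```python
import subprocess
from pathlib import Path

# Frames exported from Flame, and where the processed pass should land.
src = Path("/vol/project/shot_035/exr_export")
dst = Path("/vol/project/shot_035/exr_ml_pass")
dst.mkdir(parents=True, exist_ok=True)

for frame in sorted(src.glob("*.exr")):
    # 'ml_denoise' is a hypothetical external AI tool; substitute the
    # actual command line of whatever model or runner you use.
    subprocess.run(
        ["ml_denoise", "--input", str(frame), "--output", str(dst / frame.name)],
        check=True,
    )

# Then reimport dst into Flame as a new source, exactly as you would
# with any classic external tool.
```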

Also be mindful that many AI tools can only handle Rec709 footage, short clip lengths, and limited resolution. Some of this can be overcome (see the chunking sketch below for clip length). But in the days of camera RAW, 17 stops of dynamic range, HDR, etc., AI is a return to the stone age, and you have to handle it accordingly in the pipeline.
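The clip-length limit in particular can often be worked around by processing in overlapping chunks and blending across the overlap, at the cost of possible seams. A rough sketch of the chunking logic only; the chunk and overlap sizes are illustrative, and `process` stands in for the actual model:

```python
def chunk_ranges(n_frames: int, chunk: int = 48, overlap: int = 8):
    """Yield (start, end) frame ranges covering the clip with overlap.

    Real limits depend on the model; the overlap gives you frames to
    cross-fade so chunk boundaries don't pop.
    """
    start = 0
    while start < n_frames:
        end = min(start + chunk, n_frames)
        yield start, end
        if end == n_frames:
            break
        start = end - overlap  # re-process the overlap for blending

# for s, e in chunk_ranges(240):
#     process(frames[s:e])  # placeholder for the external AI pass
```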

Efforts are starting to appear to handle linear gamma, bigger color spaces, etc. But this is often limited to specific models and specific workflows.
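A common workaround in the meantime is to tone-map before the AI pass and invert afterwards, so the model never sees scene-linear values. A rough sketch of the transfer math in numpy, using a plain display-gamma approximation rather than a proper OCIO/ACES transform (EXR I/O left to your pipeline):

```python
import numpy as np

def linear_to_display(lin: np.ndarray, gamma: float = 2.4) -> np.ndarray:
    """Approximate scene-linear -> display-referred for the AI pass."""
    # The model can't represent values above 1.0, so they are clipped.
    return np.power(np.clip(lin, 0.0, 1.0), 1.0 / gamma)

def display_to_linear(img: np.ndarray, gamma: float = 2.4) -> np.ndarray:
    """Invert the transform after the AI pass, before compositing."""
    return np.power(np.clip(img, 0.0, 1.0), gamma)

# Caveat: everything above 1.0 is destroyed by the clip, which is
# exactly why HDR / 17-stop material needs special handling, e.g.
# processing a log encode or restoring highlights from the plate.
```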

Timewarp ML is great, as is Morph ML, except for the fact that the Matte input is calculated totally differently than the RGB, thereby becoming absolutely useless.

If only Timewarp ML would output a coherent MV pass for use on the FG and matte.

This has come up numerous times since it launched. Did anyone ever file a feature request (or possibly even a bug) for this?

I reported it during Beta, and the reply was basically, “Yup, it’s broken and will remain that way.”

I remember that. Not a good moment.

sums up most ML-generated tools. :confused:

To answer the original question, I mostly avoid them. If it isn’t the exact right solution, it’s shit at adjustments and tweaks. My whole job is adjustments and tweaks.


I use ML matte tools somewhat frequently for intermediate steps, but not as much for final pixels. Boris’ MatteML in Mocha and SynthEyes is great for quickly generating occlusion mattes when tracking, for example.

The problem with a lot of the ML tools, for me, is that the things they get right immediately are generally things that wouldn’t be very hard or time-consuming to do manually, whereas the tricky things, where I find myself wishing for something a bit more automatic, are exactly where they’re no use. That said, it’s sometimes pretty useful to save those few minutes on the simple stuff if you have a bunch of it to do.

We’ve been using SammieRoto 2.3 a ton recently and it’s produced some bonkers amazing mattes, but we use it externally, not through Flame.