I’ve just installed Flame 2026 for a production and started testing the ML tools alongside my usual workflow.
I mainly work in advertising, with strong constraints around clean-up, compositing, and especially shot-to-shot consistency.
So far, my impression is quite mixed:
- ML tools are impressive on isolated use cases
- But when it comes to shot-to-shot consistency or complex shots, I’m still hesitant to rely on them in production
I still have the feeling that:
→ results can look great on a single shot
→ but aren’t always stable enough to guarantee perfect continuity across a full sequence
I recently discussed this with a Senior Product Owner at Autodesk, who confirmed that stability and consistency between shots are still key challenges, even though things are evolving quickly.
So I’d be really interested in your feedback:
- Is anyone here already using Flame’s ML tools in real production?
- On what kinds of shots do you trust them (or not at all)?
- Have you managed to maintain solid shot-to-shot consistency using these tools?
- Has anyone already found a way to connect an LLM directly to Flame?
I’m also wondering if the next step shouldn’t be a true sequence-level intelligence, or even an integrated LLM, rather than shot-by-shot tools.
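To make the LLM question a bit more concrete, here is a minimal sketch of what a sequence-level bridge could look like: collect per-shot notes (e.g. from Flame's Python console) and build one request that covers the whole sequence instead of a single shot. Everything here is my own assumption for illustration — the shot fields, the prompt, and the model name are placeholders, not anything shipping in Flame or Autodesk's API; a real bridge would also send the payload to an LLM endpoint and parse the reply.

```python
import json

# Hypothetical sketch, not a real Autodesk or LLM API.
# Idea: feed the model the WHOLE sequence at once, so it can reason
# about shot-to-shot consistency rather than one isolated shot.

def build_sequence_prompt(shots):
    """Collapse per-shot notes into one sequence-level chat payload.

    `shots` is a list of dicts with hypothetical "name"/"notes" keys,
    e.g. gathered by hand or via a script in Flame's Python console.
    """
    lines = [f"Shot {s['name']}: {s['notes']}" for s in shots]
    return {
        "model": "your-llm-model",  # placeholder model name
        "messages": [
            {"role": "system",
             "content": "You review VFX sequences for shot-to-shot consistency."},
            {"role": "user",
             "content": "Flag continuity risks across these shots:\n"
                        + "\n".join(lines)},
        ],
    }

# Example: two shots sharing an asset but graded differently.
payload = build_sequence_prompt([
    {"name": "sh010", "notes": "sky replacement, warm grade"},
    {"name": "sh020", "notes": "same sky asset, cooler grade"},
])
print(json.dumps(payload, indent=2))
```

The point of the design is the single request: a shot-by-shot loop would hide exactly the continuity problems I keep running into.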
If some interesting insights come out of this, I’d be glad to share parts of the discussion as well; it’s always useful to bring in a broader range of perspectives.
Thanks for your time,
Alexandre Rouanet
https://linecraft.fr/en