Hey everyone!
I’ve been working on a little side project that I wanted to share with the community: a bidirectional integration between Flame and ComfyUI.
The idea is simple — right-click a clip in the media panel, pick a workflow, and Flame exports the frames to ComfyUI, processes them, and automatically imports the result back into your library. No round-tripping through Finder, no copy-pasting paths.
How it actually works
- Build your workflow in ComfyUI using two custom nodes I made: LoadFromFlame (which reads the frames exported by Flame) and SendToFlame (which drops the output into a watched folder). You build your AI pipeline between those two nodes — upscaling, style transfer, ControlNet, whatever you want.
- Save that workflow as a .json in a dedicated folder (flame_comfy_workflows/). From Flame’s Settings dialog you can mark your favorites, which pins them at the top of the right-click menu for quick access.
- Export from Flame: right-click a clip → ComfyUI → select your workflow. Flame exports the clip as a PNG sequence into a staging folder, then automatically loads the workflow in your browser with the correct clip already wired into the LoadFromFlame node.
- Run the queue in ComfyUI — the frames get processed and dropped into the output folder.
- Auto-import back into Flame: a background watcher detects the new files and imports them directly into your library, organized in a timestamped folder (CLIP_NAME_HHMM).
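To make the last step concrete, here is a rough sketch of how a polling watcher can decide a render is "done" before importing it. All names here are hypothetical illustrations, not the project's actual code: I assume a file is safe to import once its size stops changing between two polls (so half-written frames aren't grabbed), and the folder naming just follows the CLIP_NAME_HHMM convention from the post.

```python
from datetime import datetime
from pathlib import Path

def scan_for_complete(output_dir, sizes, seen):
    """One polling pass over the ComfyUI output folder.

    Returns files whose size is unchanged since the previous pass
    (i.e. the write has settled) and that haven't been imported yet.
    `sizes` and `seen` persist between calls.
    """
    ready = []
    for f in sorted(Path(output_dir).glob("*.png")):
        if f in seen:
            continue
        size = f.stat().st_size
        if sizes.get(f) == size:
            seen.add(f)       # stable across two polls: consider it complete
            ready.append(f)
        else:
            sizes[f] = size   # still being written (or newly seen): wait
    return ready

def import_folder_name(clip_name):
    """Name the destination library folder CLIP_NAME_HHMM."""
    return f"{clip_name}_{datetime.now():%H%M}"
```

In a real watcher you'd call `scan_for_complete` on a timer and hand each ready batch to Flame's import; the two-poll settle check is the simplest way to avoid racing the renderer.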
Two more things worth knowing:
First, workflows can run fully automatically — no queue button, no browser interaction. Flame exports, ComfyUI processes, Flame imports. You just wait for the result to appear in your library. The only difference is that you save your workflow in API format instead of the standard one (Settings → Enable Dev Mode Options → Save as API Format in ComfyUI).
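For the curious, the fully automatic mode is possible because an API-format workflow can be queued over ComfyUI's HTTP endpoint instead of through the browser. The sketch below shows the general shape of that call; the default port 8188 and the `/prompt` body format come from stock ComfyUI, but treat the helper names as illustrative rather than the project's actual code.

```python
import json
import urllib.request
import uuid

COMFY_URL = "http://127.0.0.1:8188"  # assumption: a default local ComfyUI instance

def build_prompt_payload(api_workflow, client_id=None):
    """Wrap an API-format workflow dict in the body /prompt expects."""
    return {"prompt": api_workflow, "client_id": client_id or uuid.uuid4().hex}

def queue_workflow(api_workflow, base_url=COMFY_URL):
    """Queue the workflow headlessly -- no browser, no Queue button."""
    req = urllib.request.Request(
        f"{base_url}/prompt",
        data=json.dumps(build_prompt_payload(api_workflow)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # ComfyUI answers with the prompt id
```

This only works with workflows saved in API format, which is exactly why the post tells you to enable Dev Mode Options: the standard save format stores UI layout, not the executable node graph.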
Second, the LoadFromFlame node supports multi-clip workflows: just select several clips in Flame before triggering the export, and each one gets wired into its own node instance. Useful for any workflow that needs to compare or combine multiple sources — a background plate, a matte, a grain reference, whatever you need.
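The multi-clip wiring can be pictured as a simple mapping from selected clips to LoadFromFlame node instances in the workflow JSON. This is a hypothetical sketch, not the project's code: I'm assuming each LoadFromFlame node receives its frame directory through an `inputs["path"]` field (the field name is an assumption) and that clips are assigned in node-id order.

```python
import json

def wire_clips(api_workflow, clip_paths):
    """Assign each exported clip to the next LoadFromFlame node instance.

    Returns a new workflow dict; the input dict is left untouched.
    """
    loaders = sorted(
        nid for nid, node in api_workflow.items()
        if node.get("class_type") == "LoadFromFlame"
    )
    if len(clip_paths) > len(loaders):
        raise ValueError("more clips selected than LoadFromFlame nodes in the workflow")
    wired = json.loads(json.dumps(api_workflow))  # cheap deep copy
    for nid, path in zip(loaders, clip_paths):
        wired[nid]["inputs"]["path"] = str(path)  # assumed field name
    return wired
```

So a workflow with two LoadFromFlame nodes and a SendToFlame node would get the background plate into the first loader and the matte into the second, which is exactly the compare/combine use case described above.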
It works both locally (Flame and ComfyUI on the same Mac) and over a network (Flame on Mac, ComfyUI on a remote Linux workstation via Tailscale).
Fair warning: I’m not a developer. I’ve been building this with a lot of trial and error (and a lot of AI assistance), so the code is probably not the cleanest or most elegant thing you’ve ever seen. But it works, and that’s what matters to me right now.
What I’m really looking for is feedback — things that don’t make sense, stuff that could be done better, edge cases I haven’t thought of. And beyond the technical side, I’d love to build a small community around this where we can share workflows, tricks, and ideas for how to use AI tools inside a proper Flame pipeline.
I’ve probably forgotten a few details here and there, so if anything is unclear or you have questions, don’t hesitate to ask — happy to help!
The project is on GitHub: gasparmb/Comfy-Flame, a bidirectional workflow bridge between Autodesk Flame and ComfyUI, enabling AI-powered image processing directly from Flame's media panel with automatic result import.
Would love to hear your thoughts.
PS: I built this on Flame 2026, so I don’t think it works on 2025.

