Holy smokes… amazing work..
That is super awesome.
It will be, eventually—hopefully.
Since last night:
Coordinate space fixes:
1. Disk-vs-segment resolution correction — cli_solve detects when disk file dimensions differ from Flame's seg.source_width/height for both source AND ref, and rescales the solver output: eff_scale = scale * src_rx / ref_rx, eff_tx = tx / ref_rx
2. Ref resolution mismatch — ref MP4 was 1920x1080 on disk but Flame reported 960x540, making k=2.0 and doubling all position/scale values
ffmpeg fixes:
3. ffmpeg/ffprobe resolved from conda env bin dir — _resolve_bin() in both extractor.py and the hook, instead of relying on system PATH
4. ffmpeg installed via conda-forge — part of install.sh, no longer a manual step
Install/uninstall rewrite:
5. Default standalone env forge-cv (no projekt-forge dependency)
6. Global deploy (/opt/Autodesk/shared/python) is the default
7. --deploy-only mode for fast redeploys (copy + pycache clear + diff verify)
8. Deploy targets persist in ~/.forge/config.yaml
9. SuperPoint deps (torch+lightglue) offered interactively
10. opencv-python conflict auto-fixed after pip install
11. Platform-aware ffmpeg hints (dnf/apt on Linux)
12. Uninstall scans filesystem for all hook copies + detects stale forge_cv_timewarp
13. Fixed bash ((DEPLOY_OK++)) under set -e and empty array expansion under set -u
Other:
14. pyproject.toml — [superpoint] optional extra
15. Hook debug logging — baseline Action values + injected values
16. Hook fallback env names — tries forge-cv before forge
…as well as now supporting QuickTime source segment matching. Progress.
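A minimal sketch of the disk-vs-segment rescale from items 1 and 2, assuming src_rx and ref_rx are the disk-width over Flame-reported-width ratios for source and reference (the helper name and arguments are illustrative, not the actual cli_solve code):

```python
def effective_transform(scale, tx, src_disk_w, src_seg_w, ref_disk_w, ref_seg_w):
    """Rescale raw solver output when disk resolution differs from the
    segment resolution Flame reports (hypothetical helper)."""
    src_rx = src_disk_w / src_seg_w  # source disk-vs-segment ratio
    ref_rx = ref_disk_w / ref_seg_w  # reference disk-vs-segment ratio
    eff_scale = scale * src_rx / ref_rx
    eff_tx = tx / ref_rx
    return eff_scale, eff_tx

# The ref mismatch from item 2: the MP4 is 1920 wide on disk but Flame
# reports 960, so ref_rx = 2.0 and raw values would come out doubled.
print(effective_transform(1.0, 100.0, 2048, 2048, 1920, 960))  # → (0.5, 50.0)
```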
One
Man
Army
Obviously, we do have a team of QA testers within the development team, and also have automated testing on top of it. The benefit of having users do beta testing (I'm not sure why Finn is saying that one must pay to do it. This is not true.) is that it is impossible for us to test all the different hardware + workflow combinations. It was never possible and never will be.
And The User will use the software in mysterious ways..
Fixes this morning, which, when validated, will basically signal a general release:
Timewarp frame mapping (cv_align + cv_timewarp [wip])
- get_timing() / get_speed_timing() were being called with absolute record frames instead of 1-based segment-relative frames
- The delta formula (timing_at - timing_base) collapsed the timewarp offset to zero on the first frame, causing wrong source frames to be read
- Missing head (handle frames) correction — timing values are 1-based from source start including handles
- Used round() instead of int() — Flame floors fractional timing values
- New formula: disk_frame = int(src_in - head + (timing_val - 1) - frame_offset)
- Verified against three test clips with forward, scaled, and reverse frame-based timewarps in both Timing and Speed modes
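Putting the bullet points above together, a sketch of the corrected mapping (argument names are assumptions, not the actual cv_align code):

```python
def tw_disk_frame(src_in, head, timing_val, frame_offset):
    """Map a timewarp timing value to a disk frame index (sketch of the
    formula above; names are assumptions).

    - timing_val comes from get_timing()/get_speed_timing() called with
      1-based segment-relative frames, not absolute record frames
    - head subtracts the handle frames baked into the timing values
    - int() truncates, matching Flame's flooring of fractional timings
      (truncation equals floor here since frames are non-negative)
    """
    return int(src_in - head + (timing_val - 1) - frame_offset)

print(tw_disk_frame(10, 8, 1, 0))    # first frame of the segment → 2
print(tw_disk_frame(10, 8, 3.7, 0))  # fractional timing floors → 4
```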
Installer
- SuperPoint deps (lightglue) not on PyPI — changed to install from git+https://github.com/cvg/LightGlue.git
- pip uninstall opencv-python nukes shared cv2 files, leaving opencv-python-headless broken — now uses --force-reinstall to restore
- Bash syntax error on empty PROJECT_PATHS array with set -u — fixed expansion pattern
- Validated clean installs on macOS and Linux
Fingers crossed.
Chris, this is incredible! Are you able to select an entire layer in the timeline and have Forge tear through all the shots at once? …and did I understand your video correctly that it added an Action node with keyframes for position?
Yessir @TimC. Clips below 30% confidence will do nothing; above that, it just rolls on to the next. At the end you get a report for each segment you tried. Regarding how it's deployed to the timeline: it creates an Action timelineFX and a keyframe for each reference match. So a first-frame reference gets a keyframe on the first frame, first-and-last gets keyframes on the first and last frames, etc.
Not the best material, as some shots were changed relative to the reference to use existing shots, and it's all 5K source, which takes a little longer to register:
My point is that the number of bugs making 2026 almost unusable makes a paid user feel like a beta tester; case in point, that whole NFS disaster that corrupted almost all our projects… Sorry, but I am a bit sour about all this. We keep hitting so many bugs every day, but I don't even have the time to report them; business must go on. I try not to open Flame anymore because it makes me angry.
Take my money! I've been trying to get myself to make this tool. You legend.
Hello.
I made something similar in Nuke for auto line-up.
But not for all frames: just the frame where I park, and no retime search.
But it works well.
Here is the base if it could help.
aligned_plate.py (7.7 KB)
franck
That’s so cool @lambertz
I flipped my repo public. Feel free to take it for a spin and a huge thanks to the folks that tested especially @kyleobley who was relentless in tracking down bugs. Thanks again for all the help!
A word of warning. It only works for folks who are working unmanaged for the time being. It’s the only way to fly.
This is a very good solution. Regarding the use of machine learning to match transformations, as Baselight is currently doing, I would prefer Flame to integrate this feature internally. That would make the most of Flame's internal Python environment, building the AI environment and models within Flame. Flame has already integrated many ML functions, and I believe implementing ML match-transformation should be no problem.
chris dude.
Duuuude
respect
Did a search in general improvements to upvote this, but can't find it.
Mind providing a link?
Autodesk Feedback Community (Jul 28 2015)
upvoted
This is actually great… thanks man.
Very interested to see your approach; I see you have gone the feature-detection way…