Hello pros,
I’ve been working on a Python hook that brings text-based transcription to Flame 2026 on macOS.
I get a lot of work that requires subtitles, and I hate hopping back and forth between Premiere (which has a really great built-in AI transcription tool for text-based editorial) and Flame with .srts.
What it does:
Right-click any timeline segment → Logik → Transcribe Audio and you’ll get:
- SRT file written next to your source media
- Timeline markers with the transcript text at the correct timecodes
- Flame captions track (via the native subtitle import API)
You pick the Whisper model size (tiny through large), the language (or auto-detect), and which outputs you want.
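For anyone curious how the SRT output maps onto what Whisper returns: `transcribe()` yields segment dicts with `start`/`end` times in seconds plus `text`, and SRT just needs those re-timed into `HH:MM:SS,mmm` blocks. A minimal sketch (helper names are mine, not the hook's actual code):

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT HH:MM:SS,mmm timestamp."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """Turn Whisper-style segment dicts into the text of an .srt file."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n"
            f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)
```

The same timing data drives the timeline markers; only the formatting differs.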
The hook installs Whisper + PyTorch into a self-contained Python venv at ~/.transcribe_audio_venv, so it never touches Flame’s Python. A one-time setup from the Flame main menu handles the install. Heavy lifting happens in a subprocess, and results come back as JSON.
I also documented all the techniques used (probing undocumented Boost.Python APIs, dealing with Flame 2026’s timeline object model, etc.) in the repo for anyone who wants to break it or build on it.
Requirements:
- Flame 2026, macOS
- Logik Portal installed
- brew install ffmpeg python@3.12
- ~2GB for the venv (one-time)
I don’t THINK something like this already exists, but let me know if it does so I can tell Claude. Would love for y’all to kick the tires, especially on different hardware and with different media types.
Known gap: only tested on macOS with Flame 2026.2.1. If you hit an error, the error dialog shows enough detail to help debug.