We had this battle with Google a few years ago, back when AI captions were still inferior to what they are today.
We created human-corrected English captions through a vendor and did a quick QC pass. We then turned them into Google Docs and sent them to the client for proofing. Once they proofed them, we had them translated, made new Google Docs, and sent those to their regional teams for proofing. Then we took them back and baked them in via Media Composer. I think we only re-did one file.
The convincing argument: you are the subject matter experts on your products and on your very complicated brand guidelines for capitalizing brand and product names, so you need to do the proofing yourselves.
That worked pretty well for the most part. But after a while they got tired of proofing, so we had to do the English ones ourselves and used the translated ones as-is (their choice). But at least they understood what was involved.
I've also done the AI route (when there's no budget to send it out). My current preferred method is to run the audio through SpectraLayers, which generates an .srt file locally, with no cloud processing and no cost beyond the software itself.
I then load the captions and the video file into Subtitle Edit and QC them there, where they're easy to fix and visible in the context of the video. Subtitle Edit also has lots of helpful QC features, some of which will autofix a lot of problems.
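To give a flavor of what that kind of autofix pass does, here's a minimal sketch in Python. This is not how Subtitle Edit actually works internally; the function names and the 42-character line limit are just illustrative assumptions (42 is a common broadcast guideline).

```python
import re

MAX_LINE = 42  # common broadcast guideline for caption line length (assumption)

def autofix_block(text: str) -> str:
    """Collapse repeated spaces, normalize runs of dots, and trim edges."""
    text = re.sub(r"[ \t]{2,}", " ", text)   # double spaces -> single space
    text = re.sub(r"\.{3,}", "...", text)    # "....." -> "..."
    return text.strip()

def qc_warnings(text: str) -> list[str]:
    """Return any caption lines that exceed the max line length."""
    return [line for line in text.splitlines() if len(line) > MAX_LINE]

print(autofix_block("Hello  there....   welcome "))  # -> "Hello there... welcome"
```

The real tool catches far more (overlapping timecodes, reading-speed violations, casing errors), but it's all variations on this theme: mechanical cleanup first, human judgment after.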
Then I convert them into whatever comes next, whether that's .srt for Flame, .txt for Media Composer, or .scc for Premiere (multi-stream, re-positioning).
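For the .srt-to-.txt step, the format boils down to very little. Subtitle Edit handles the conversion for you; this rough sketch only shows what's involved, assuming well-formed UTF-8 .srt input (the function name is hypothetical):

```python
def srt_to_txt(srt: str) -> str:
    """Strip .srt index and timing lines, keeping only the caption text."""
    out = []
    for block in srt.strip().split("\n\n"):
        # drop the numeric index line and the "00:00:01,000 --> 00:00:02,500" line
        text_lines = [ln for ln in block.splitlines()
                      if "-->" not in ln and not ln.strip().isdigit()]
        out.append(" ".join(text_lines))
    return "\n".join(out)

sample = ("1\n00:00:01,000 --> 00:00:02,500\nHello there.\n\n"
          "2\n00:00:03,000 --> 00:00:04,000\nWelcome back.")
print(srt_to_txt(sample))  # -> "Hello there.\nWelcome back."
```

.scc is a different animal entirely (CEA-608 byte pairs, drop-frame timecode), which is exactly why you let the tool do that one.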
AI captions do need a lot of fixing. But unless you have a really big budget, high-quality human captions (and translations) are too costly, and they also take quite a bit of time. Clients should budget for both, but they don't.
The old mantra: "Do more with less."
And that's exactly what we should fear with all the AI-based VFX tools. Once clients learn that in 15% of cases we got lucky with an AI shortcut, that timing and budget becomes the baseline, never mind that the other 85% don't work out.