Do any of you have experience or tips for deflickering AI videos?
The example is a short excerpt from a video generated in ComfyUI.
We have already tried various approaches with AI models, both in Nuke and in Resolve. They have some effect, but it's not yet the result we're after.
Maybe one of you has other ideas.
I also played around with the MlTimeWarp: I slowed the video down to 50% and then timewarped it back up by 200% with an offset of 1, so that only the "newly generated" in-between frames are used. That helped, but it's not the way I would like to go.
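For anyone who wants to try the same trick outside the timewarp node, here is a minimal Python/OpenCV sketch of the frame bookkeeping (file names are placeholders, and plain frame blending stands in for the ML interpolation, so this only illustrates the idea of keeping the offset in-between frames, not the quality MlTimeWarp gives you):

```python
# Sketch of the 50% -> 200% retime with an offset of 1:
# write out only the synthesized in-between frames.
import cv2

cap = cv2.VideoCapture("input.mov")  # placeholder input path
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("inbetweens.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

ok, prev = cap.read()
while ok:
    ok, cur = cap.read()
    if not ok:
        break
    # The "newly generated" frame sitting between prev and cur
    # (here just a 50/50 blend instead of ML interpolation).
    inbetween = cv2.addWeighted(prev, 0.5, cur, 0.5, 0)
    out.write(inbetween)
    prev = cur

cap.release()
out.release()
```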
I think this is inherent in the way ML videos are generated. The “style” is temporally consistent but the actual pixels are not.
The result would vary drastically with the amount of actual movement in the video, but you could try to extract depth with ML again, use it to displace a surface, and re-project a single frame of the shot onto it.
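If it helps to prototype the depth part outside the comp package: a monocular depth model such as MiDaS can be run per frame via torch.hub, and the resulting depth pass then drives the displaced surface you re-project the clean frame onto in Nuke's 3D system. A rough sketch, assuming a local frame on disk (paths and file names are placeholders, and the displace/re-project step itself still happens in your comp tool):

```python
import cv2
import numpy as np
import torch

# Load the small MiDaS model and its matching input transform.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform

img = cv2.imread("frame_0001.png")  # placeholder reference frame
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

with torch.no_grad():
    batch = transform(rgb)
    prediction = midas(batch)
    # Resize the depth prediction back to the frame's resolution.
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=rgb.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze().cpu().numpy()

# Normalise to 0-1 and write a 16-bit depth pass that can drive a
# displaced card/surface for the single-frame re-projection.
depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
cv2.imwrite("depth_0001.png", (depth * 65535).astype(np.uint16))
```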