I’m trying to do an ML respeed at 50% in Batch, but all I get is one frame rendered, with an error: “Could not perform inference”. This is on Flame 2025.2.1.
Any ideas?
Check your GPU usage… Exit the software, log out, and log back in to clear it out, perhaps.
If you haven’t already, start Flame from a terminal. There are usually more useful errors there.
What resolution is the clip and what GPU are you using?
Material is 4608x3164, and the machine I’m using is an AWS box, so I’m not sure of the spec.
If you’re using the GPU version of ML Timewarp, at that resolution I think it may need 24 GB of GPU memory. If you look in the resource manager at the bottom right, it should tell you what GPU memory config your AWS system has.
You could try switching ML Timewarp to CPU mode, if memory serves, but it would be quite slow.
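If you’d rather check from a terminal than the resource manager, a minimal sketch like this would report the card and its memory. It assumes the AWS instance has an NVIDIA GPU with nvidia-smi on the PATH, which is a guess since the spec isn’t known.

```python
# Minimal sketch: report GPU model and memory from a terminal session.
# Assumes an NVIDIA GPU and that nvidia-smi is on the PATH (unverified on this AWS box).
import subprocess


def gpu_memory_report() -> str:
    """Return one line per GPU: name, total memory, memory currently in use."""
    result = subprocess.run(
        [
            "nvidia-smi",
            "--query-gpu=name,memory.total,memory.used",
            "--format=csv,noheader",
        ],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()


if __name__ == "__main__":
    print(gpu_memory_report())
```

If the total reported is well under 24 GB, or the used figure is already high before the render starts, that would line up with the inference error.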
I have 22.49 GB available, so a 24 GB card.