I’m getting an error when trying to render an ML Timewarp (in Batch):
“could not perform inference”
Any ideas?
Your graphics card’s memory is probably full, or the card doesn’t have enough VRAM for the job.
Yup, running Flame on anything less than a 48GB GPU is a fool’s errand today.
I think it was full. I restarted and it worked.
You can also flush graphics memory in the performance monitor instead of restarting.
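If you want to confirm the memory-full diagnosis before restarting, a quick sketch (assuming an NVIDIA card, where `nvidia-smi` ships with the driver; it falls back to a hint if the tool isn’t on your PATH):

```shell
# Show current GPU memory usage; a nearly-full "memory.used" column
# before a render points at the "could not perform inference" cause.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=memory.used,memory.total --format=csv
else
  echo "nvidia-smi not found - check GPU memory in Flame's performance monitor instead"
fi
```

Running this before and after flushing graphics memory should show the used figure drop.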