I’m doing some speed ramps on 4K material. Seems the GPU is a no-go, so I’m using the CPU instead. What GPU card are you using to be able to render out 4K?
$8,000 will do ya.
That is the normal course of Flame… when something is working inefficiently, just throw more expensive hardware at it.
The P6000 is not supported in 2027 and is only minimally supported in 2026.2. Save your Christmas money.
You gotta provide a bare minimum of hardware details as far as what you’re working with.
That ain’t a P6000, Nvidia’s naming is trash, it’s like 4 generations newer.
I was going to say get Twixtor but the ML feature has a bug on the Linux version so it’s disabled for the time being.
Get a Mac, deliver and then return it… LOL
P6000 = Pascal = 2016
RTX 6000 = Turing = 2018
RTX A6000 = Ampere = 2020
RTX 6000 Ada = 2023
RTX 6000 Pro Blackwell = 2025
? The P6000 Pascal is EOL. Not sure what the name has to do with it.
I thought you commented on Alan’s link. Maybe not.
Finn and I both thought you wrote that the P6000 is not supported in reply to Alan, and we assumed you thought the P6000 was the same as the 6000 Pro, which makes total sense but is of course incorrect.

If I had done that I would have used the little reply button on his comment. Not sure if you’ve noticed, but when you do that, there’s a little arrow pointing to the member’s icon in the upper right of the reply. But seriously, do you think I’m that dumb? Aside from the fact that I know the difference between a P6000 and an RTX 6000 Pro, I’m not nearly stupid enough to challenge Alan on techie shit.
But then where did the P6000 comment come from? I guess that led to our confusion.
Nobody said anything about a P6000.
It’s pertinent to the whole thread. It’s why the OP can’t do ML speed ramps on the GPU. Also the fact that Alan mentioned the latest recommendation, as well as the fact that we need to continually upgrade our hardware to keep up with Discreet. I thought it a good time to reinforce the fact that all of us who still have the P6000 need to make an investment if we want to upgrade to 2027 in April.
Are people still using talosh’s time warp? It didn’t need much additional GPU memory. Wish I had it now.
You can switch to CPU mode if you don’t have a lot of VRAM. It will crawl, but it will get there.
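Roughly how that kind of fallback works, as a hedged sketch only (not talosh’s actual code; the 8 GB threshold is just a placeholder guess for 4K frames):

```python
# Sketch: run inference on the GPU only if enough VRAM is free, otherwise fall back to CPU.
import torch

def pick_device(required_gb: float = 8.0) -> torch.device:
    """Return cuda when at least required_gb of VRAM is free, else cpu."""
    if torch.cuda.is_available():
        free_bytes, _total_bytes = torch.cuda.mem_get_info()  # free/total bytes on the current device
        if free_bytes / 1024**3 >= required_gb:
            return torch.device("cuda")
    return torch.device("cpu")  # will crawl on 4K, but it gets there

print(f"Running inference on {pick_device()}")
```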
Here.
There is a tip with talosh’s TWML. If you run out of VRAM and only need a little more:

1. Set the clip’s timewarp, then exit Flame.
2. In the Flame Setup app, decrease the VRAM reserved for Flame (say, to 15%).
3. Start Flame (it may run slow and laggy) and launch the hook. It will run with more VRAM free, and you can probably render the timewarp.
4. Restart Flame, resetting the reserved-VRAM value in Flame Setup to the original.
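Not Flame-specific, but a quick way to confirm the lower reservation actually freed anything is to compare free VRAM before and after the change. A small sketch using the stock nvidia-smi query flags (GPU index 0 is an assumption; change it if you run more than one card):

```python
# Print the GPU's currently free VRAM in MiB by querying nvidia-smi.
import subprocess

def free_vram_mib(gpu_index: int = 0) -> int:
    out = subprocess.check_output(
        ["nvidia-smi", f"--id={gpu_index}",
         "--query-gpu=memory.free", "--format=csv,noheader,nounits"],
        text=True,
    )
    return int(out.strip())

print(f"Free VRAM: {free_vram_mib()} MiB")
```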
How about a way to purge VRAM whilst in the app?
The point is that Flame itself reserves a portion of VRAM, no matter how much you purge. By starting Flame with that reservation reduced, as in the tip above, you can claw back a bit of VRAM, which is exactly what Hengy says he needs. It’s the same for me: doing this lets me use TWML on 4K.
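On the “purge VRAM whilst in the app” question: from inside a Python hook you can at least release what the hook itself has cached. A minimal PyTorch sketch, not TWML’s actual code, and it cannot touch the portion of VRAM Flame reserves for itself, which is exactly the point above:

```python
# Release cached GPU memory held by this Python process, then report what's free.
import gc
import torch

def purge_vram() -> None:
    gc.collect()                      # drop unreferenced tensors first
    if torch.cuda.is_available():
        torch.cuda.empty_cache()      # return PyTorch's cached blocks to the driver
        free_b, total_b = torch.cuda.mem_get_info()
        print(f"VRAM free: {free_b / 1024**3:.1f} / {total_b / 1024**3:.1f} GB")

purge_vram()
```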