Back in the SGI IRIX days, I recall altering the size of the mem token (or maybe it was called something similar) in the cfg file if I had a very large number of layers in batch, and it would make performance much better, but you just had to save very regularly as it could cause crashes…
My current batch has a tonne of layers and is getting to be very laggy even just connecting nodes…is it still possible to edit this memory parameter in the config file?
It is likely your GPU RAM getting exhausted. Check either the Resource Monitor or nvidia-smi. Or just reboot your machine and see if the interactivity is back to normal.
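If you'd rather watch it happen than guess, here's a minimal sketch (assuming nvidia-smi is on your PATH, which it is on a standard Linux workstation with the NVIDIA drivers installed) that polls used vs. total VRAM every couple of seconds while you work in batch. You can just as easily run nvidia-smi on its own in a shell; this only keeps it scrolling in a corner:

```python
import subprocess
import time

# Poll nvidia-smi every 2 seconds and print used/total VRAM per GPU.
# Watch the "memory.used" figure creep up as you connect nodes in batch.
while True:
    result = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,memory.used,memory.total",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())
    time.sleep(2)
```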
But doing a manual stadium crowd…so I guess the buffers are filling up fast!
So yeah, the GPU keeps maxing out and performance increases a bit when Flame is restarted, but it doesn't last too long before it needs to go again. 24GB GPU…
I'm pretty sure it was a similar issue with the old Tezro and Octane graphics processors maxing out with such comps…but the mem token was very helpful…obviously a totally different architecture now, so just wondering if there is a similar hack.
Thanks, yeah…but it will have to be separate batches, which is a bit frustrating as there's some amount of codependency and referencing needed across them, which will also make the workflow a bit painful…but yeah, that's the only workaround I can think of too!
The last time I did a stadium crowd shot I pre-cropped my sources so that I wasn't pushing a 4K source through the render when I only needed 1/5 of the image. Other than that, lots of layers are going to start bogging you down.
Yeah, I have cropped them, but still at 1:1 pixels…if I scale down too, I'm getting double filtering in Resize/Action…no matter which filtering I use, and even though I'm scaling down in the comp, it's a long focal length broadcast lens…intentionally that look, so the filtering does lose too much detail…
No worries, I'll struggle on! One of those where you just have to accept you've hit the limits!
Thinking about it…this was the beauty of those old boxes…relatively speaking, of course…those SGI machines were built almost as if the whole system was a GPU, as opposed to now a GPU running on a motherboard!
Ha! Sorry for the annoying nostalgia…but it shows how advanced they were back in the day compared to anything else around at the time!
This stress-testing screenshot is from 8 years ago, tragically on my birthday… Flame, sigh…
(But from my garden, from a laptop, with a giant glass of something alcoholic)
This is 512 unique layers, about 314,000 unique frames of uncompressed 6K OpenEXR files, being run through batch on a Z820 about 1,500 miles away.
It had no internal storage, just a K6000 and a 40Gb Ethernet connection to the storage.
The same batch died on 64 layers of R3D 6K compressed RAW files.
The same batch died on 128 layers of transcoded OpenEXR files with any kind of compression.
The 512 layers of uncompressed files didn't crash and could probably have scaled further, but I didn't have time.
Batch is amazing.
Flame is amazing.
NVIDIA GPUs are amazing.
Oracle Storage is amazing.
And of course, Lewis Saunders for president, because everything ran through LS Contacts, the most efficient way to accumulate many pictures in one raster without any unnecessary activity.
Had it been possible to tile everything out, the resolution would have been:
196,608 x 50,560 pixels (a 9,940.5 megapixel image).
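For anyone checking the maths, a quick sanity check, assuming each of the 512 sources was a 6144 x 3160 6K frame (that per-frame size isn't stated above, just inferred from the totals) laid out 32 tiles across by 16 down:

```python
# Hypothetical check of the tiled contact-sheet resolution quoted above.
# Assumed (not stated in the post): 6144 x 3160 per frame, 32 x 16 tile grid.
tile_w, tile_h = 6144, 3160
cols, rows = 32, 16                # 32 * 16 = 512 layers

total_w = tile_w * cols            # 196,608
total_h = tile_h * rows            # 50,560
megapixels = total_w * total_h / 1_000_000

print(f"{total_w} x {total_h} px = {megapixels:.2f} megapixels")
# 196608 x 50560 px = 9940.50 megapixels
```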
This is how the current M-series Macs work. The system and GPU share RAM, so if you have a Mac Studio with 192 GB of RAM, Flame should be able to use most of that for GPU/VRAM. If you have access to an M-series Mac with more RAM than your PC's GPU card has, it might be worth a try to see if it helps.