our project setups often get into the multiple hundreds of GBs. One project we have currently is about 505GB. These are complex batch setups, for hundreds of shots, with tens of iterations - many millions of small ASCII files that Flame TARs first. ADSK used to use gzip for TARring the Clip Library backups; not sure if they even do that when archiving Project Setups to a file archive. If they do, well, GZIP is so fucking slow. At least I was able to get them to switch to ZSTD for the Clip Library backups, which is 7x faster than GZIP.
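For anyone curious what that difference looks like outside of Flame, here's a rough sketch of tarring a setups directory straight into a zstd stream in Python. Purely illustrative - not what Flame's archiver actually does - and the `zstandard` package, paths and level are all my own assumptions:

```python
# rough illustration only - not what flame's archiver actually does.
# needs the third-party "zstandard" package (pip install zstandard);
# the paths and compression level here are made up.
import tarfile
import zstandard as zstd

def archive_setups(src_dir: str, out_path: str, level: int = 3) -> None:
    # threads=-1 lets zstd spread compression across all cores, which is
    # a big part of why it runs rings around single-threaded gzip on
    # millions of small ascii setup files
    cctx = zstd.ZstdCompressor(level=level, threads=-1)
    with open(out_path, "wb") as raw, cctx.stream_writer(raw) as zs:
        # "w|" streams the tar straight into the compressor, no temp file
        with tarfile.open(fileobj=zs, mode="w|") as tar:
            tar.add(src_dir, arcname=".")

archive_setups("/opt/Autodesk/project/MyProject/setups",
               "/mnt/backup/MyProject_setups.tar.zst")
```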
Got it . . .
This is also one of the reasons why ZFS is so great. We enabled ZSTD compression on that file system, and that 500GB of setups actually only takes up around 100GB on disk. And since our frame stores are also backed by ZFS, any integer clips (which are normally stored as uncompressed DPX) get automatic block-level compression, and that does wonders with graphics and mattes.
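If anyone wants to poke at the ZFS side, here's a rough sketch of the two properties involved - the dataset name is made up, and it needs root plus the zfs CLI (OpenZFS 2.0+ for zstd):

```python
# rough sketch with a made-up dataset name - sets zstd compression on a
# zfs dataset and reads back the achieved ratio. needs root + openzfs 2.0+.
import subprocess

def set_zstd(dataset: str) -> None:
    # only affects newly written blocks; existing data stays as-is
    # until it gets rewritten
    subprocess.run(["zfs", "set", "compression=zstd", dataset], check=True)

def compress_ratio(dataset: str) -> str:
    out = subprocess.run(
        ["zfs", "get", "-H", "-o", "value", "compressratio", dataset],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()

set_zstd("tank/flame/setups")
print(compress_ratio("tank/flame/setups"))  # prints something like "5.02x" on ascii-heavy setups
```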
@ALan - ZFS variable block size… crazy amaze balls…
@johnt - import a large FBX and convert it to an action file, iterate - watch your setup directory balloon.
Well now - I have done that many times and never noticed a big file size. Why is that? The FBXs aren't huge, just rough geos.
skip the small fbx files and import a large one into an action in batch.
iterate the batch.
adding a link to a file costs a negligible number of bytes.
converting fbx or alembic will create large action files, since the geometry data is now written into the action file itself (there's a rough sizing sketch at the end of this post).
action files are human readable, not binary files.
binary files can be compressed.
(it’s also true that you could compress a human readable action file but action won’t load your carefully compressed zip file)
fbx can be an ascii or a binary file - the binary one is smaller.
you know all of this - i don’t even know why i’m explaining it.
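here's the rough sizing sketch i mentioned above - walk a setup tree and see how much of it is batch/action files once the geometry has been baked in. the path and the extensions are my assumptions, adjust to whatever your project actually uses:

```python
# quick and dirty - sum up setup file sizes by extension under a batch
# setup tree. the path and the extensions are assumptions; adjust to
# however your project is laid out.
import os
from collections import defaultdict

SETUP_DIR = "/opt/Autodesk/project/MyProject/batch"  # made-up path
EXTS = {".batch", ".action"}                          # assumed extensions

def size_by_ext(root: str) -> dict:
    totals = defaultdict(int)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            ext = os.path.splitext(name)[1]
            if ext in EXTS:
                totals[ext] += os.path.getsize(os.path.join(dirpath, name))
    return dict(totals)

for ext, total in sorted(size_by_ext(SETUP_DIR).items(), key=lambda kv: -kv[1]):
    print(f"{ext:>8}  {total / 1e9:6.2f} GB")
```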