Flame archives are uncompressed, no matter what you set the project cache to. But that doesn’t mean you can’t compress the archives after the fact.
In today’s episode of “Me On My Soapbox,” I just finished a multi-week big game campaign with hundreds of tiny comps and hundreds of deliverables. The entire project with grade and audio and artwork and Flame archives was 2.2TB. I used Keka.io to make 700MB split *.7z archives with ‘normal’ compression, which took under 16 hours, and brought the entire archivable size down to 1.3TB.
On a decent machine it takes about an hour per 100GB to compress. Do it. And, as soon as the individual *.7zs are written, you can begin uploading them to cloud storage.
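Keka is just driving 7z-style LZMA compression with split volumes under the hood. As a rough sketch of that workflow using only the Python standard library (paths, names, and the in-memory approach are illustrative only; a real multi-terabyte job would stream instead of buffering):

```python
import io
import lzma
import tarfile
from pathlib import Path

CHUNK = 700 * 1024 * 1024  # split-volume size, like Keka's 700MB parts


def archive_project(src_dir: str, out_prefix: str, chunk_size: int = CHUNK) -> list[Path]:
    """Tar + LZMA-compress src_dir, writing numbered split volumes."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        tar.add(src_dir, arcname=Path(src_dir).name)
    # preset=6 is roughly the "normal" compression level
    compressed = lzma.compress(buf.getvalue(), preset=6)
    volumes = []
    for i in range(0, len(compressed), chunk_size):
        vol = Path(f"{out_prefix}.tar.xz.{i // chunk_size + 1:03d}")
        vol.write_bytes(compressed[i:i + chunk_size])
        volumes.append(vol)
    return volumes
```

The split volumes are why the upload can overlap the compression: each finished part can go to cloud storage while the next one is still being written.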
Compress your $%!#!
Or get your client to buy the storage devices, archive your work, give it back to your client, let them deal with the cost of keeping the archive.
Like every other industry.
Good to know! Cool stuff! Thanks for sharing.
Are you doing all of this because of the price of cloud storage? It seems like an awful lot of time and effort. LTO-6 tapes are $25/each. I guess a drive is like $1500, so there’s that cost to consider. Also, if you ever have to unarchive that job, you’ll now need 3.5 TB of space before you can even start loading it back into Flame.
Buying LTO, for me, doesn’t quite make sense. Sure, you can find old ones on eBay for sub-$2k, and tapes are reasonably priced, and the new ones are only around $5k, but then I need to either buy a fecking fireproof and waterproof safe to store them in, or deal with long term storage offsite. Boo.
Most of my jobs include moving archives around at the end of the job…whether to my client or to a facility. And for the vast majority of us who don’t have unlimited bandwidth and time, it makes sense, especially for anyone on typical home upload speeds in the 20–40Mbit range.
It’s only 3 clicks to start the compression, and 3 clicks to save 40–80% on storage is worth it for a solo Flame like myself.
Cloud storage is cheap…$6 per TB per month…and most of this stuff won’t come back and I’ll delete it in 6 months anyway, so why not?
It’s not a workflow for everyone, but it surprises me how rarely it’s even discussed as an option.
I get it. I guess it’s more the time aspect I was reacting to. Does compression and decompression run in the background, or does it tie up your machine? Is it faster to decompress, or is that another 15 hours? Most of my unarchiving needs are along the lines of “restore this one spot’s timeline to change the phone insert a year later,” and with LTO-6/LTFS that’s a restore that takes 10 minutes.
The archiving happens in the background, and I have other machines to offload to if I have to. It obviously requires CPU cycles, and the more you have the faster it goes. Restore speeds are about 200 gigs per hour. It requires the machine but doesn’t consume it.
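The restore side of a split-volume workflow is just reassembling the parts in order and decompressing. A minimal stdlib-only sketch, assuming volumes named with zero-padded suffixes like `arch.tar.xz.001` (all names hypothetical):

```python
import io
import lzma
import tarfile
from pathlib import Path


def restore_project(vol_prefix: str, dest_dir: str) -> list[str]:
    """Reassemble numbered split volumes, decompress, and untar into dest_dir."""
    prefix = Path(vol_prefix)
    # Zero-padded suffixes sort lexicographically, so sorted() restores order.
    volumes = sorted(prefix.parent.glob(prefix.name + ".tar.xz.*"))
    blob = b"".join(v.read_bytes() for v in volumes)
    raw = lzma.decompress(blob)
    with tarfile.open(fileobj=io.BytesIO(raw)) as tar:
        tar.extractall(dest_dir)
        return tar.getnames()
```

The catch the earlier post raises still applies: you need room for both the compressed volumes and the decompressed result before Flame can even see the archive.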
And with LTO you likely have either engineers or assistants to order tapes/label tapes/swap tapes/store tapes, or you’ve got a 5 figure LTO robot. I’m a guy. In my basement.
I assume you were doing gzip, maybe even gzip -9?? That’s really slow. Keka supports Zstd, which is way more modern than gzip. Not sure how many Zstd options Keka supports, but I would not be surprised if you got close to gzip sizes in a small fraction of the time with Zstd. Do a test, report back.
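Worth testing. A minimal ratio-and-timing harness along those lines, using only stdlib codecs (zstd itself needs the third-party `zstandard` package, or `compression.zstd` on Python 3.14+, so it’s left as a commented slot; the sample data is a stand-in — point it at a slice of a real Flame archive instead):

```python
import time
import zlib
import lzma


def bench(name, compress, data):
    """Time one codec over the sample and report ratio + wall time."""
    t0 = time.perf_counter()
    out = compress(data)
    dt = time.perf_counter() - t0
    print(f"{name:10s} ratio={len(out) / len(data):.3f} time={dt:.3f}s")
    return len(out)


# Stand-in sample; substitute bytes read from a real archive segment.
sample = bytes(range(256)) * 20000

bench("gzip -6", lambda d: zlib.compress(d, 6), sample)
bench("gzip -9", lambda d: zlib.compress(d, 9), sample)
bench("lzma -6", lambda d: lzma.compress(d, preset=6), sample)
# bench("zstd -3", lambda d: zstandard.ZstdCompressor(level=3).compress(d), sample)
```

Ratios on real image/media-heavy archive data will look very different from synthetic samples, which is exactly why the test is worth running on the actual material.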