Workflow discussion: Best practices for reducing massive Flame archive sizes

Hi everyone,

I’ve been working on some heavy VFX and finishing projects lately (including 8K sources, extensive beauty work, and complex 3D compositing). My biggest concern when wrapping up these jobs is the size of the Flame archives. We’re talking several terabytes per project, which is turning into a real nightmare for long-term storage and backups.

I wanted to ask the community: what is your go-to workflow to drastically reduce archive sizes, without losing the ability to properly restore the project months down the line?

Do you purge all caches and renders to only keep the setups (batches) and raw sources? Do you use specific compression settings when archiving? Do you have any scripts or a “housekeeping” routine before kicking off the archive?

I’d be really curious to hear how you manage this in your pipelines!

Thanks for your input,

Alexandre Rouanet
Freelance Flame Artist @Linecraft

1 Like

I’ll preface my answer with a couple of observations. You’re going to get a boatload of different answers. None are going to be either right or wrong. They are going to be what works for each of us. Your experience may differ. Secondly, I know “several” is a relative term, but most of my jobs run in the 3-6tb range and I don’t consider that anything close to a nightmare. At a certain point I’m far more concerned with framestore space than with archives. I currently run about 13tb of space and I’m upgrading to 24 shortly.
Different types of work require different approaches. I work for an editorial company and I make commercials. My jobs frequently encompass 5-25 base spots plus social aspects. Most batches are not overly complex, but many jobs require 20-40 separate batch groups.

I cache everything I am given and I usually start archiving once I’ve loaded the first delivery of colour. If I’ve been doing a lot of work in the offline process with editorial, I’ll start archiving sooner. If I’m doing a lot of preliminary work with RAW footage before I get colour, I sometimes make a separate offline archive. Once I have started a job in earnest, I archive every night. If, on any given day, I make just a few scant revisions, I will archive immediately and go spend 5 minutes getting a snack and some exercise.

On long jobs, those archives can get pretty hefty, so every once in a while I might run through the batches, delete old renders, clean up other obsolete material such as the RAW footage I used for offline, start a new archive, and, when it’s done, delete the bloated one. But I usually don’t do that until the archive starts topping out at 5 or 6tb. Anything smaller, I don’t bother.
As far as hardware goes, I have a 50tb Facilis that I do my daily archives to. When a job wraps and I’m ready to delete it from my Flame, I make sure I have a backup on LTO8; each tape holds 12tb. By the time I’m ready to clear a job off the Facilis, I make sure I have a second LTO copy. The LTO drive is mounted on a dedicated archiving machine in the machine room that I share with the editorial department. We use Yoyotta as our backup application. I try to keep those archives on the Facilis for as long as possible. I usually have a decent idea of which jobs might come back within a few months, which ones might come back in a year or two, and which ones I’ll never see again.
When my producers start bidding refreshers for some of those older jobs, they give me a heads up and I can usually pull them off the LTO onto the facilis again overnight and have them loaded into flame in just a few hours. Seldom has that been too long of a process. But I also am one of those people who never puts shit like that off until the last minute. Ever. I miss the time when flame interfaced directly with the LTO.

3 Likes

Publish. Always Publish.

Use compression. Uncompressed 8K is 105MB a frame; DWAB is 20-ish.
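For scale, the arithmetic behind those numbers is easy to sanity-check. A rough sketch (the exact figure depends on which 8K flavour and bit depth is being stored, and DWAB’s ratio is content-dependent, so treat these as ballpark only):

```python
def frame_size_mb(width, height, channels=3, bits_per_channel=16):
    """Raw per-frame size in MB (1 MB = 1e6 bytes), ignoring headers/padding."""
    return width * height * channels * bits_per_channel / 8 / 1e6

# UHD-8K (7680x4320) RGB at half-float: ~199 MB/frame uncompressed.
# At 10-bit it drops to ~124 MB, in the same ballpark as the 105 MB quoted.
raw_half = frame_size_mb(7680, 4320)
raw_10bit = frame_size_mb(7680, 4320, bits_per_channel=10)

# DWAB is lossy wavelet compression; 5-10x smaller is a common outcome,
# which is how you get from ~100-200 MB down to ~20 MB a frame.
print(f"half-float: {raw_half:.0f} MB, 10-bit: {raw_10bit:.0f} MB")
```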

Prune old renders from shared storage.

Archive with no renders and media uncached. Flame archives should be megabytes.

Fuck 8K. You can denoise 8K, resize to 4K, do the work, resize back to 8K, renoise, and it looks great.

9 Likes

I did forget to mention Fuck 8k.

3 Likes

I think this is what can bring the world together in these turbulent times…

4 Likes

Fuck it right in the ear.

3 Likes

This is definitely the way to keep things more manageable. Working unmanaged using DWAA/B will make your world so much better. And yeah, 8K is bonkers.

There are quite a few tools out there to help. Of course there’s the amazing Logik Projekt, which is probably the best place to start.

To wrap everything up for an archive, you can use collect_media which creates a list of all the used files in a project. You can choose how you want to archive that later (i.e. rsync/rclone to a folder that sits beside the Flame archive).

There’s also version_prune to do what @randy mentioned which is trimming old renders from your filesystem to keep things manageable over the course of a project.

When you do create a Flame archive, omit source media, timeline renders, etc., and you’ll be left with something that is beyond small.

Once you get into the hang of it, it’s a lovely way to work. Especially when you embrace openClips.

2 Likes


Of course, the upside to ignoring size and archiving everything is that everything I need to bring the job back to the exact same state that I left it, sometimes several years later, fits in the palm of my hand. In my case, the need to do that arises frequently.

And then soft-edge wipe only the actual work area of the comped 8k render back onto the orig 8k plate to preserve the plate (the matched grain needs to be spot on obviously).

Also, if the shot allows for it, cropping the work area out of the 8K source plate to minimize processing resolution, doing all the work on that cropped plate, and then putting the comp back onto the source plate right at the tail end of the pipeline is also an excellent way to cut down all that pixel overhead.

You don’t even have to soft matte it if for some reason you can’t.

Hi everyone, and a huge thanks to @ytf, @randy, and @kyleobley for your invaluable feedback!

I’ve taken away several excellent ideas to optimize my upcoming projects:

  • Resolution: Working in 4K before upscaling to 8K (thanks @randy for the video, I’ll look into it closely to make sure I do it cleanly and losslessly).

  • “Unmanaged” Workflow: Using DWAA/B compression to lighten the renders. Working in 8K is indeed bonkers, so this is clearly the way to keep a project manageable!

  • Optimization Tools: I’ve taken good note of using Logik Projekt for the structure, collect_media to isolate and archive media separately, and version_prune (@kyleobley’s tip) to clean up old renders on the fly.

Now I just need to get the hang of this workflow and fully embrace openClips! By omitting source media and timeline renders when archiving, you end up with tiny Flame files, which is awesome.

@ytf, your pragmatic approach and hybrid system (daily Facilis storage + LTO8/Yoyotta backups) sound incredibly robust. Above all, knowing that even the biggest projects top out around 5 to 6 TB puts my mind at ease about maximum archive sizes!

Speaking of hardware, this brings me to an additional question for the community: What exactly do you use for your storage? Are you all exclusively on pro servers / high-end NAS (like the Facilis), or have some of you found alternatives (cloud, hybrid solutions, specific external hard drives) to manage these massive volumes reliably and more affordably?

Thanks again everyone!

I’ll paste this here as it is probably worth a read: Publishing / unmanaged framestore workflow AKA how to make smaller archives for those that don't archive good

1 Like

That’s the best thing about publishing. Ya hardly need anything locally: a 1TB system disk for Flame, a 4TB NVMe for a Lucid Link cache, and a 4TB Flame NVMe framestore is all ya need.

If you aren’t on Lucid or equivalent, then writing over 10GbE or better to shared storage is all ya need.

I do have a few leftover Highpoint NVMe enclosures with either x4 or x8 NVMe slots, but they’re kinda being phased out.

If you don’t publish, then yeah, you’ll need lotsa fast local storage. But if you do, you don’t.

I haven’t filled up a 4TB Flame framestore in about a year of working on commercials. And I haven’t deleted shit.

Average project is, yeah, between 2 and 6TB for a single or two-day shoot, but only a few hundred gigs is in the shots folder, so we can keep the published stuff around for a long time and just prune the OCN and shoot drives to S3.

Quick reminder that Flame archives everything as uncompressed.

So if you tell Flame to archive source media, that ProRes LT you got from color is now uncompressed media taking up 10x the amount of space.
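Rough arithmetic shows where that multiplier comes from. Assuming Apple’s published ballpark of ~85 Mbit/s for ProRes 422 LT at 1080p25, and a 10-bit RGB uncompressed target (the exact blow-up depends on bit depth and codec flavour, so 10x is if anything conservative):

```python
def uncompressed_mb_per_sec(width, height, fps, bits_per_px=30):
    """Uncompressed data rate in MB/s for RGB at 10 bits per channel."""
    return width * height * bits_per_px * fps / 8 / 1e6

prores_lt = 85 / 8                               # ~10.6 MB/s at 1080p25
raw = uncompressed_mb_per_sec(1920, 1080, 25)    # ~194 MB/s
print(f"blow-up factor: {raw / prores_lt:.0f}x")
```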

-Ted

1 Like

One thing to look out for, whether you’re going the full-archive or the metadata-only/unmanaged route, is motion vectors. If you cache motion vectors, Flame by default will include them in an archive even if it’s metadata only. These files are not exactly small.

You can prune them manually before you archive, and if you need to restore and work on a shot again, re-cache. That may not work in all instances, but if you have a huge archive, check for motion vectors.

1 Like

I’ve brought this up occasionally over the years: to mitigate the heavy data cost of uncompressed Flame archives, please just let us archive the media in the codec it had at ingestion.

I understand the concerns the developers have raised when this has come up: they want to preserve every single pixel with fidelity, so they don’t want to risk any loss through compression.

I think most users would agree that restoring an archive of a wrapped project later and re-rendering a comp from the originally ingested media is not likely to alter the visuals in any way.

So it’s a balance between fidelity and efficiency. I personally don’t need the archives that I never will open again, to have absolutely perfect fidelity to every pixel. I would like to be able to archive things quickly and not have to keep paying for so much storage, for something I’ll never open again.

1 Like

I would suggest that there should at least be the option to archive with original codecs. This would give users control, and solve multiple problems.

We used to archive to Digi. Later some folks archived to D5 and even HDCAM. We somehow survived to tell the tale…

1 Like

In the over 30 years I’ve been using Flame, I’ve never once heard them say this. They always use the excuse “we don’t know what codecs will be available in 30 years, but raw RGB always will be, so we want to be forward thinking.” It’s a fucking kick-the-can answer. Do you know how many fucking side apps, libraries, and system services Flame installs? Like fuck, if you assume that Flame spaghetti will even run in 20 years on some AI OS, then why can’t you assume it will be able to read OpenEXR, which is a global industry standard, and free? The real answer is that they don’t want to touch that nightmare called archiving. Look what happened when they tried the absolute minimum-viable-product revitalization of the S+W undersystem with 2026. It’s a total shit show, brings almost no benefit, and a year later its stability is still unproven.

Guys… Flame is going to change very little from what you have currently, indefinitely. Our whole industry is now in “suck up as much money with as few resources as possible” mode, because it is all going away. I interpret Flame as being one TPI report away from maintenance mode.

*Get It, Save It, Enjoy It.*™