Project Cleanups

Anyone know if there is a way to get Flame to remove media from a project (or even just highlight it for manual removal) when it hasn't been used in any timeline or Batch?

I want to reduce the stuff being saved/archived at the end of a project, but I can't find an easy way to highlight what's used and what isn't. Surely there must be a way?


Maybe I’m missing something, but couldn’t you just archive your batches and timelines?

Or, controversially, flush all your caches and archive no media?

I don't use caches (never have!) and don't archive media (except for Batch and timeline FX renders).

It's more that sometimes a project gets bloated with too many files that were loaded up and then, for whatever reason, not used.

If I have to reopen an old archive, I only want it to relink what's relevant, not every single unwanted file that was considered and rejected.

Ummm…hmmm…gotcha…parse the OTOCs and cross reference that with some kinda path on the server?



Write an archive. See if the archive's online table of contents has file paths. Somehow put all those paths in a script and chuck the files on the server that aren't in the OTOC.
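If the OTOC really does list file paths, the comparison itself is easy to script. A rough Python sketch of the idea, not a finished tool: the path regex is a guess at the TOC format, and `unreferenced_files` only reports strays rather than deleting anything.

```python
import os
import re

# Naive assumption: anything in the TOC text that starts with "/" is a path.
PATH_RE = re.compile(r"/\S+")

def paths_from_toc(toc_text):
    """Pull every absolute-looking path out of the archive TOC text."""
    return set(PATH_RE.findall(toc_text))

def unreferenced_files(otoc_paths, server_root):
    """Return files under server_root that the archive TOC never mentions."""
    referenced = {os.path.realpath(p) for p in otoc_paths}
    strays = []
    for dirpath, _dirs, files in os.walk(server_root):
        for name in files:
            full = os.path.realpath(os.path.join(dirpath, name))
            if full not in referenced:
                strays.append(full)
    return strays
```

You'd feed the TOC text in, eyeball the list of strays, and only then decide what actually gets chucked.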


There are maybe easier ways, but this method can be useful at times:

if we accept that Flame renders are managed by Flame’s archive, and want to cull the files that were imported (external “unmanaged media”), what I’ve been doing is:

copy all material to a desktop reel
splice the reel
open the record sequence
create a sources sequence to remove duplicates
publish an AAF without media
in Resolve, import the AAF and relink the clips
select the timeline, then File > Media Management
choose Copy sources
choose a new directory location
choose Preserve hierarchy to 0 folder levels (or whatever)
select Start

Resolve will recreate the directory structure and clone all sources into the new directory. There’s maybe some gotchas, but it basically works.

Use path translation to link to the new directory, or when possible use the old path if the old material is backed up and moved.


Ahh… I see what you mean… thanks for the suggestion, Randy. I will give that a go.

Wow… that's a lot of work, Peter!

I was hoping for a hidden “Clean up Project” button.


I think After Effects has had an edge in this respect for a long time.


You could match out the segments of your timelines to various reels (footage, graphics, etc.), then delete everything but your timelines and the folders with the matched footage. That may not help much if there is bloat in your batch groups, but they tend to take care of themselves so long as you keep them tidy as you work. I often do this if I need to get an archive down to a size that fits on an LTO. It is much the same as Randy said, but it helps with organization to have elements broken out into separate folders/reels.


@Lightningad Not sure if it's relevant for you… working in TVC land here, FYI. I only archive batches and timelines, sometimes stray artwork (supers, logos etc). The trick with batches is to keep them relatively clean as you go (no random clips of 1254 frames where you're only using a still, for example) and then only archive the last batch (or two). This assumes that earlier batches were full of stuff (like early CG renders) that is now useless… ideas that didn't work or were rejected etc etc… the last batch being the one that made it into the final cut. All the other iterations I delete right before I archive the job, usually a few weeks after the work has been delivered.

The other good thing to do is archive the "project set-ups", as this contains the old-school batch files (no media) with all the earlier iterations, just in case someone wanted to roll back to earlier versions (it would still require tracking down renders, clips etc etc). This has actually never happened for me… old iterations on TVCs never get a second life, in my experience. So what you get is the lightest possible archive size: only the clips and things that went into making the FINAL pictures.


Seriously. AE’s “Collect Project” is so simple and so good.


If you want to reduce the size of a project archive on a current job then the methods above are all good suggestions. They're a little manual, but a good way to tidy up an existing project and compress it down to its core size. @justinbromley I have been playing with a Python script that will clean up and remove all but the last five batch iterations for all of your shots. One button. Wham!
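I don't know what that script looks like internally, but the filesystem side of "keep the last five iterations" is simple enough to sketch. Everything here is an assumption: a flat one-folder-per-shot layout, `.batch` setup files, and mtime as the iteration order. A real version would work through Flame's Python API and its own iteration list rather than raw files.

```python
import os

def prune_iterations(shot_dir, keep=5, suffix=".batch"):
    """Delete all but the newest `keep` batch setup files in shot_dir.

    Purely a filesystem sketch: it doesn't touch Flame's own database,
    and the .batch suffix / one-folder-per-shot layout are assumptions.
    Returns the paths it removed, so the caller can log them.
    """
    setups = [os.path.join(shot_dir, f)
              for f in os.listdir(shot_dir) if f.endswith(suffix)]
    setups.sort(key=os.path.getmtime)          # oldest first
    doomed = setups[:-keep] if keep else setups
    for path in doomed:
        os.remove(path)
    return doomed
```

Run it once per shot folder and you get the "one button, wham" behaviour, minus the button.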

But if you are trying to reduce the size of your archives then I have an alternative approach. I guess the question is what gets backed up? Not just the Flame project. Probably the job folder on the server as well.
How much of what is in your job folder is also in your Flame archive? All of the graded footage probably. The CG renders. If this is sitting in the job folder and also included in your Flame archive you are doubling up.

Our average size of a job folder on a TVC, containing CG, is 2TB. The Flame archive comes in at 90GB.
By not caching the media, your archive contains only soft links. By using Write nodes instead of Render nodes, all renders go to the server and are brought back in using OpenClips. The file setups also get saved to the server by way of the Write nodes. The conform is published to shot folders on the server so that all source material lives outside of the Flame.

We still use Render nodes for quick test renders but we make sure that the render destination is to the Batch Renders Library which gets cleaned out before archiving.

This approach removes the Flame archive as the central storage for all of the comp. It keeps it small and manageable. It still contains all of your latest batch groups and, significantly, your timelines. If you need to tweak the end shot for a future iteration you just restore the folder structure for that one shot and your tiny Flame archive. You can locally cache your media again to help keep things moving fast. You can even get into a single shot without the Flame archive if you needed to.


I kind of want @miles to jump in with why soft-imported workflows, metadata-heavy timelines, metadata-only history and batch versions/iterations, and clean NFS & SMB permissions can make life straightforward.
Or any of the other converts.

Especially when moving feature length timelines with 1500-2000 shots between temporary rented/broken/new workstations and all that.

Most people that get exposed to naked flame see the power of it, the flexibility of it and more importantly the portability of it.

Some people still insist on stones.

There’s room for everybody.

The main feature from AE that I wish Flame had!


Deleting iterations and renders. When I create the archives I use the "don't cache" setting, because I already consolidate and cache everything while working, and I don't archive renders (I can always re-render from the batches). Our archives include ungraded footage (conform) and graded masters, versions, artwork, comps, etc. Even so, our TVC Flame archives average 200 GB; sometimes 4 TB, sometimes 90 GB, depending on the project of course.
We archive to our server, and those archives get transferred to our LTO robot, so if we need to change something, or an adjustment is needed a couple of months later, we can restore it very quickly.

This is kinda long and vague. Jumping in now.

My favorite workflow is something @philm taught me many moons ago. This doesn't do the After Effects wizardry with the whole collect-files thing, but it could. It does have some pretty awesome features by default though, and now that Python has come to Flame (and I think its implementation will only get better), this workflow is even better.

The first big job I used it on was this music video. Imagine how many tracks. By the time I finished the conform the timeline had over 70 of 'em. The size of the first archive was 4 bytes. Bytes. I was able to archive it and share a live… ish timeline with my partner in about 60 seconds. Up until very recently I've had zero problems with the workflow (and I don't think it was the workflow).

As far as saving space for backups etc., this assumes you back up your NAS or whatever central storage nightly, which everyone I know does. So you're still creating data, you're just not doubling it. It can be tricky to set up, but once you got it, you got it. For the most part no media is managed by the Flame. All setups are linked to the versions of the clips and accessible from the timeline. Not in a BFX way (which they also are), but in a click-on-the-segment-and-say-open-in-batch-group way.

During that music video we had a catastrophic block-wide power outage. Both Flames' DBs were corrupted past fixing… quickly. I blew away the jobs on both Flames, unarchived the project, which took 60 seconds, and was immediately up and going; this was three months into the job, so lots of crap in it.

This workflow is not for the undisciplined. Write nodes only, and that includes prerenders. I have a bunch of Python now that makes that painless. @randy had an idea about parsing OTOCs, which got me thinking that you could also parse the last however many batch setups to narrow down your final server archive, if you even wanted to.
When I first saw all of this work I about fell over. About fell over this last time too, when I shared another massive job with my partner who is on the other side of the bay, not in the room next to me, and it just worked. I've never seen real documentation on it and it's not my workflow to expound too much on, but I bet @philm would be into some consulting :upside_down_face: There are a few gotchas; one you have to either Python your way around or just always keep in mind is that the Iterate button by default is only good for upping the version of the batch: you have to find a way to also save the batch setup files to the different locations per shot in the openclip hierarchy (Python). I could go on and on. But won't.
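That last gotcha, mirroring the iterated setup into the per-shot openclip path, is the kind of thing a tiny helper wired into whatever fires on iterate can handle. This is a hypothetical sketch, not Flame's built-in behaviour, and the paths and naming are made up:

```python
import os
import shutil

def mirror_setup(setup_path, openclip_dir):
    """Copy a freshly iterated batch setup next to the shot's open clip.

    The idea (an assumption, not stock Flame behaviour): on iterate,
    alongside Flame's normal save into the project setup hierarchy,
    drop a copy of the .batch file into the per-shot open-clip
    directory so setups and renders live side by side.
    """
    os.makedirs(openclip_dir, exist_ok=True)
    dest = os.path.join(openclip_dir, os.path.basename(setup_path))
    shutil.copy2(setup_path, dest)   # preserves mtime for later sorting
    return dest
```

Hang that off your iterate hook/button of choice and the openclip path stays in sync without anyone thinking about it.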


Or you could use the built-in Autodesk supplied Python script to delete all batch iterations save the last one, and Archive the last project with that one button in the Archive module that allows you to choose whether you archive all timeline versions or just the ones you used.

Also, with your Batch Setup deal, I don't think you need Python for that. With each Write node you can enable Include Setup, which writes the Batch setup that was used to feed the Write node to a user-definable path. I was testing this when I was researching a simple rsync-only, Flame-only workshare pipeline using cloud storage and Google Drive.

So, to load a Batch Setup, just navigate to your server/jobs/project/sequence/shot/flame_setups folder and boom. Fran’s Your Nan.

We might be saying the same thing and I'm probably missing your initial point. Shocker.


Hey mate. What I'm sayin' is a couple of things. If you back up your central storage then you don't have to archive anything but metadata from the Flame, so just archive everything. The batch setup deal works like this: once you render the main Write node with this workflow it automatically saves the setup in the right place. The trick though is that when you iterate (which I do a lot, to back up progress and version up etc), Flame saves the setup in the Autodesk/project hierarchy, or wherever you point that upon project creation, but not in the openclip path. I've made it so every time I iterate it does everything it's supposed to, plus puts a copy of the setup in the openclip path. Nothing needs to happen outside of Flame, and I don't need to point the setup directory any place special upon project creation. It really does feel like magic. If you wanted a small Flame archive at the end then yes, you could just archive the latest iteration or whatever, but that usually wouldn't cut it for me. And phantom footage, ya know @randy, like liquid stuff that changes the color of water :slight_smile: is something I have no interest in doubling up on. I think the idea of a collect-files functionality is a good one, and doable. But you most certainly need Python. As a matter of fact this is now the next thing on my list. Thanks.
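For what it's worth, the core of an AE-style collect-files pass is small once you have the list of referenced paths (from parsed OTOCs or batch setups, per the earlier posts). A sketch only: `job_root`, the hierarchy-preserving copy, and the flattening rule for out-of-tree sources are all my assumptions.

```python
import os
import shutil

def collect_files(sources, job_root, dest_root):
    """AE-style 'collect files' sketch: clone each referenced source
    into dest_root, preserving its hierarchy relative to job_root.
    Sources that live outside job_root are flattened into dest_root.
    """
    copied = []
    for src in sources:
        rel = os.path.relpath(src, job_root)
        if rel.startswith(".."):           # outside the job tree
            rel = os.path.basename(src)
        dest = os.path.join(dest_root, rel)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.copy2(src, dest)
        copied.append(dest)
    return copied
```

Same shape as the Resolve Media Management trick mentioned above, just without leaving Flame-land once the path list exists.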
