Why are Flame archives still so ridiculously bloated?

I have a very simple Batch with just two small clips. The clips, exported out, total about 300 MB. The batch setup saved out is less than 1 MB.

Why, when I make an archive of this, is the archive 3.5 GB??

Especially with the whole remote workflow, passing setups among artists, this makes us do all sorts of goofy machinations: archive the iteration after deleting all source media in the setup, export out the source files, then Artist 2 loads the archive and manually puts the clips back in. (You can "exclude renders and cache", but it still puts the media in a huge bloated archive if the media is something like a denoise pre-render, etc.)

Plus the fact that when sending out ASCII batch setups, even between machines running the same version, an error often comes up and they won't load.

WHYYYYY the insanity?

The format Flame uses to store the footage in your archive is uncompressed, regardless of the settings you have for your project.

I have stopped saving footage in my archives. All media is coming from a central server and all renders are write nodes going to the same server.

Not great for remote workflows, but at least you have control over your file formats and compression settings.

My archives are small. Like 50-100 MB small. That keeps all of the timelines and batch setups but uncaches all of the media.

3 Likes

Another advantage to using write nodes is that you can save a setup to the server and create/update an openClip every time you hit render.

I use a Python script that pulls the openClip back into the batch. Great for pre-renders like denoise.
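Something along these lines, as a minimal sketch; it assumes Flame's Python API (`flame.batch.import_clip`, `flame.batch.create_reel`), and the clip path and reel name are just placeholders:

```python
# Minimal sketch: pull a write node's open clip back into the batch.
# Assumes Flame's Python API; the path and reel name are placeholders.
import flame

def import_open_clip(clip_path, reel_name='pre_renders'):
    """Import an open clip into the current batch on the named reel."""
    # Create the schematic reel if it doesn't already exist.
    if reel_name not in [str(reel.name) for reel in flame.batch.reels]:
        flame.batch.create_reel(reel_name)
    return flame.batch.import_clip(clip_path, reel_name)

# e.g. bring a denoise pre-render back in after the write node renders:
# import_open_clip('/server/jobs/spot/shots/sh010/pre_renders/denoise.clip')
```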

5 Likes

I'm with Richard on this: avoid caching stuff to a stone. Treat Flame like Nuke or After Effects, where all media is pulled from the server and rendered back to the server. That'll make your archives nearly zero in size, with the added advantage of moving setups between remote machines much quicker, since the two parties just need to download the same footage and do a path translation in the prefs.
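(As an aside, that path translation step is conceptually just a prefix swap. A toy Python illustration with made-up mount points, not Flame's actual prefs mechanism:)

```python
# Toy illustration of path translation between two sites.
# Mount points are hypothetical; Flame does this for you in its prefs.
SITE_MAP = {
    '/Volumes/server_ny': '/mnt/server_la',  # remote prefix -> local prefix
}

def translate(path):
    """Rewrite a remote site's mount prefix to the local equivalent."""
    for remote, local in SITE_MAP.items():
        if path.startswith(remote):
            return local + path[len(remote):]
    return path

# translate('/Volumes/server_ny/jobs/spot/plates/sh010.exr')
# -> '/mnt/server_la/jobs/spot/plates/sh010.exr'
```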

I use my stone as a cache in the “any of this can be deleted at any time” sense now: just for playback.

It’s a bit more work than keeping everything on the Framestore, but there are upsides.

As for WHY it’s all uncompressed: it’s so Flame can guarantee the restored archive will render exactly the same image. Once compression gets involved there could be (spooky voice) issues. (/spooky voice)

Now personally, I don't care about the level of compression that, say, ProRes inflicts on an image (after all, we used to archive Flame to Digibetas that are natively 4:2:2, so every archive and restore was a generational loss), but compression is something pixel fuckers care about, and there is no way to shout "IT'S A TOOTHPASTE COMMERCIAL!" loud enough to stop any of them making a little vampire-warding-off cross with their fingers every time you say "compression".

Simply put, this industry is rife with panicked pedants who don't know why their head is gonna roll, but they know it's about to, so they go around taping up all the leaks they can possibly imagine, and it doesn't take much imagination to imagine an overly compressed image*. And so, you have a 3.5 GB archive. The next time a client asks for an uncompressed QuickTime master, let them know they're part of the problem.

* I think fear of compression is generational. People born before 1990 can all easily remember the early internet days of god-awful compression as we sipped the internet through 28.8k modems. It's not going to be a thing my 9-year-old son has any sense of. My phone shoots better-quality images than 35mm film (speaking strictly of the resolution and image quality of a stored piece of media).
12 Likes

Excellent points, Andy.

I do wish you could automatically create an archive that truly carries zero media, regardless of where the media lives, without having to go through deleting clips, etc.

2 Likes

I would also like the flip side: a way of bundling all media, regardless of its local or server location, and saving all the relevant bits in a format that any software can use. Media managed, if you will.

If it enabled me to choose the format then I would be even happier :nerd_face:

2 Likes

I am contemplating many of the same thoughts.

Sharing setups and media between ops all over the world is something I’d really like to make easier.

I’m keen to hear everyone’s best method to managing data. Right now, I feel there’s a lot I don’t know or a lot I haven’t experimented with.

The soft-import method has a lot of pros and cons, and it certainly isn't idiot-proof. The holy grail is idiot-proof. The old archive on a Digibeta was great: just send it on a plane with a runner.

Cache your media

Cache your media on import, or cache all media that comes in as part of a batch setup. This keeps it local to your box, so you are not vulnerable to slow server speeds.

Use write nodes, not render nodes

By using write nodes, none of the media in your setup is unique to your Flame (local renders are only available to you).
Pre-renders and final renders all get written out to the server and brought back in (cache on import).

Save your setups

Iterative saves are saved locally to your Flame project and are not available unless you archive your setups. Use the old-skool file save to save the setup to the server. You can have this happen automatically every time you render with a write node.
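The save itself is one call, assuming Flame's Python API (`flame.batch.save_setup`); the server path here is a placeholder, and you'd trigger it from whatever render hook your install exposes:

```python
# Sketch: save the current batch setup to the server.
# Assumes flame.batch.save_setup() from Flame's Python API;
# the shot root is a placeholder.
import os
import flame

def save_setup_to_server(shot_root='/server/jobs/spot/shots'):
    batch_name = str(flame.batch.name)
    setup_dir = os.path.join(shot_root, batch_name, 'batch')
    os.makedirs(setup_dir, exist_ok=True)
    # Flame writes the .batch file plus its associated folder here.
    flame.batch.save_setup(os.path.join(setup_dir, batch_name))
```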

Have a good folder structure on the server

Shot folders should include all of the plates required for the comp.
With all of your file saves, pre-renders and renders going to a shot folder on the server, collaboration is easy: you just provide the shot folders you want remote artists to work on.
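If it helps, a throwaway sketch that stamps out one possible shot-folder skeleton; the directory names are just an example, not a standard:

```python
# Sketch: create a per-shot folder skeleton on the server.
# The layout is one possible example, not a Flame requirement.
import os

SUBDIRS = ['plates', 'pre_renders', 'renders', 'batch', 'elements']

def make_shot(job_root, shot_name):
    for sub in SUBDIRS:
        os.makedirs(os.path.join(job_root, shot_name, sub), exist_ok=True)

# make_shot('/server/jobs/spot/shots', 'sh010')
```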

It gets tricky when elements are involved. I am still trying to find an elegant solution to this; I guess this is where some type of media management would be helpful. This is far from idiot-proof. In fact, it is very convoluted.

I'm probably not the best person to join this conversation, since my situation cares not one whit about the sizes of my archives and I rarely collaborate, but a couple of things: am I mistaken in thinking that if you archive with the Source Media Cache option disabled, it archives with zero media? Also, I have never had an issue with traded ASCII files failing to re-open, so long as they were zipped before sending. As far as sharing goes, I have also had great success saving them to a shared cloud resource that the collaborator simply loads like any other batch. If they have a copy of the footage (and don't decide to rename it all), the batch loads right up, no problem.

Unfortunately, in the last week I've had multiple ASCII batches not open on other artists' machines running the exact same version. Other ASCII batches did open.

Disabling Cache Source Media when archiving still bloats the archive with media if it's something like a pre-render on your framestore. I know of the workarounds mentioned here, but none of them are very elegant or efficient when working with remote contractors, especially when having to work extremely fast on deadline. Which is, like, always.

Hey Rich!

Are you saying you don't cache anything on import, you just soft-link the media? I'm totally remote nowadays and find myself stuck in the old-school method of ingesting everything, then writing out huge archives. Sucks. I'm much better these days at whittling an archive down to its minimum, but they're still big (like 200-500 GB).

It's always bothered me that I'm duplicating media, particularly when my ProRes 4444 graded media is way smaller than what I output. I get it, but still…

What do you do between sites? Do you mirror the job on both servers or just pass back what you need? Or none of the above?

Hey @drewd, nah. Always cache your media on import.

Otherwise you're missing out on one of Flame's greatest features: it is so damn fast.

You don't want to be a slave to the server.
When ours is getting slammed by everyone, it really grinds, but us Flamers are smiling to ourselves, whizzing through all of our media cached locally :kissing_smiling_eyes:

But one thing that confused me for a while was the Source Media Cache and the Renders and Cache options in the archive settings.
Same word, two different operations.
Turning off Source Media Cache means the archive won't double up on the media you imported, effectively making all of the media in your archive a soft link.

I cache again if I need to restore on another machine, but it gets it all from the server, not the archive.

The other option, Renders and Cache, refers to the timeline renders and cached nodes (the controversial yellow button). Turn them both off for super-small archives: no doubling up of media.

3 Likes

@PlaceYourBetts I see. You find the Flame terminology confusing??? It’s clear as mud! Like this!..

[screenshot, 2022-02-06: an example of the confusing archive terminology]

1 Like

I think what this means is that if you are using media that is uncached (soft-imported), selecting this option will cache it and archive it. That's different from archiving without any media at all. But I'm only guessing, since 99.9999999% of the time I cache all my media and archive all my media.

Are y'all using FTP clients to transfer files?

Digital Pigeon

In theory, sending the batch setups (file and associated folder) should be easy. I've done it many times in the past. However, this week I was copying and pasting them between machines and for some reason it wasn't working. When opening the copied batch, Flame complained it couldn't find many nodes. It was bizarre.

Yes, that was the issue I was having!

Grant gave a great explanation when I asked about this before: Flame project and HUGE ARCHIVES - #5 by La_Flame

1 Like

The other option worth mentioning is a hybrid approach. If you create a project directory hierarchy on your media cache, with shots, plates and renders directories, and you work unmanaged by publishing out, then when you're done with the show you can create a metadata-only archive that's pretty small. To @PlaceYourBetts's point, you are then no longer tethered to the server and can work quickly, and if you install the Import Open Clip script from the portal the whole process becomes pretty seamless: you render your write nodes and they appear in your reels. And speed isn't an issue, because you're playing from the cache filesystem.

When you’re done, just zip up the project along with the metadata archive you made and you’re set.
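(Nothing fancy; something like this does it, with placeholder paths:)

```python
# Sketch: bundle the unmanaged project directory (which also holds the
# metadata-only archive) into a single zip for handoff. Paths are placeholders.
import shutil

shutil.make_archive(
    '/server/handoff/spot_v01',       # writes spot_v01.zip
    'zip',
    root_dir='/server/jobs/spot',     # project dir incl. metadata archive
)
```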

The thing I like about this approach is that it can be dumbed down or ramped up in complexity depending on needs.

1 Like