Publishing / unmanaged framestore workflow AKA how to make smaller archives for those that don't archive good

Let’s give this a try…

First things first: this approach relies on having a fast cache for Flame, which most people have. Conceptually, the idea is that rather than managing your project hierarchy for shots inside of Flame, you relegate Flame's management of the project to editorial only and set up a template-based shot workflow built around a publish, compartmentalizing each of your shots into a single external location (in this case, your Flame's cache) in order to:

  1. Manage the filetypes you store all of your intermediate work in.
  2. Manage the “way and when” you archive your media and metadata.
  3. Maintain maximum performance while maintaining maximum flexibility.
  4. Minimize the time and space required for flame archives.

In this little example I used my cache NVMe, but it could easily be any other fast filesystem, even a removable one. So first things first… create a project and note the location of your cache →

Mine is mounted at /mnt/StorageMedia. So then, I open a file browser, go to that location and drop in a templated project, which I rename to logikProject. This is a localized version of our NIM template, which has our own brew of folders and organizational containers. Here's what the root level of the template looks like →
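(Side note: if you'd rather script the drop-in than drag the folder around, it's a one-liner. A minimal sketch in Python; the template location is hypothetical, the contents are your own brew:)

```python
import shutil

# Hypothetical home for the folder template; its contents are your own secret sauce.
TEMPLATE = "/mnt/StorageMedia/_templates/project_template"

# Drop a fresh copy onto the cache filesystem, renamed for the job.
shutil.copytree(TEMPLATE, "/mnt/StorageMedia/logikProject")
```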

Everybody's got their own secret sauce; this just happens to be ours. The _04_shows folder is important: that's where we're later going to export our conformed edit as shots, into what we call a "show" folder (which is just a fancy name for the edit). But before we can publish, we launch Flame, load an AAF and conform some Arri Wide Gamut material off a removable drive attached to a Mac, auto-converting it to ACEScg on conform. I am caching NOTHING. Nada. Once we've edit checked, we name all the shots. I tend to keep it simple, in this case logik_010, 020, 030… etc. I also name the segments based on shot name, material type and layer, so a track one segment for the first shot would be called logik_010_raw_V01, for example.
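Concretely, the naming shakes out like this (raw and V01 are just my shorthand for material type and layer):

```
shots:     logik_010, logik_020, logik_030, …
segments:  <shot>_<material>_<layer>   e.g.  logik_010_raw_V01
```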

Then we go into export with our edit, navigate to the aforementioned "shows" directory, select Publish and grab my handy-dandy template for EXR publish to shots →

First thing is the pathway to the published media. Everything will go into individual shot folders starting from the current working directory, and everything is tokenized to avoid any error. I pull out all possible tails and I publish to DWAA 45 EXRs. Those things are amazing: small, fast and efficient. Next, I also set up the publishing of each shot's batch files and open clips →
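All in, the tokenized patterns resolve to something like this per shot (purely illustrative; the exact tokens live in your own template):

```
media:        ./<shot name>/img/plates/<shot name>_<version name>.<frame>.exr
batch setup:  ./<shot name>/comp/flame/batch/<shot name>.<iteration>.batch
open clip:    ./<shot name>/<shot name>.clip
```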

Each shot's batch files and renders are relegated to that shot's folder. That means all metadata, renders and plates for shot 010, for example, are contained inside a single root folder, logik_010. This is so important for later being able to hand off individual shots, because you're essentially creating a single container for each shot. Then you publish, which in my case looks like this →

Each of the 20-odd shots gets its own folder in the parent show, 60sec. Inside each shot folder we have something that looks like this →
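Sketched out, each shot folder looks roughly like this (our template carries a bit more than shown; these are the folders that matter here):

```
logik_010/
    comp/
        flame/
            batch/      published batch setups live here
    img/
        plates/         published EXR plates (DWAA 45)
        comp/           batch renders land here on render
```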

…where the batch files for comp are in the comp/flame/batch folder and the plates are EXRs in the img/plates folder of each respective shot. Eventually, batch renders will also end up in img/comp, but that comes on render. On the timeline, we've got a new published timeline linked to the EXRs in the project folder, which absolutely play in realtime despite being fucking gargantuan and 16-bit. We've also got a new primary track of open clips whose metadata is linked to the publish, which will later allow us to select renders from our batches and have them appear directly in the timeline without having to change anything other than the version selection box. The publish also imported all the batches directly to the desktop, but I usually just delete them all since they live in the project anyway →

Next we go to batch, load batch, navigate to shot 010, load its batch and do shit. You'll notice that there are no render nodes, only write nodes. In the render settings for the write node you'll note that all of the pathing is already set up, pointing directly at the img/comp/ folder of each shot. I personally like to have renders follow iteration, so I've selected that option. By doing this, v02 of my batch will match the v02 render of EXRs sitting on disk. I don't need to change anything else because it's all there. Then I render →
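To make that concrete, iteration v02 of shot 010's batch pairs with a render path along these lines (naming here is illustrative):

```
logik_010/img/comp/logik_010_comp_v02/logik_010_comp_v02.<frame>.exr
```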

On disk, I'm rewarded with the shot folder being updated accordingly with batch files and the render →

AND, the resulting write node's output clip magically appears in the Batch Renders reel on the batch shelf! This hocus-pocus is achieved with a script from the Logik Portal which auto-imports open clips post-render to a location you decide. I don't use these clips for anything other than checking my renders (as I would have, had I rendered using a render node). For me it acts as a tiny bit of glue, making the process more reminiscent of the managed workflow. The script settings are accessed from the Flame menu and look like this for me →
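Under the hood, the trick the script automates boils down to a couple of Python API calls. A minimal sketch only, assuming the stock shelf reel layout and a made-up clip path (the actual Portal script does this properly and configurably):

```python
import flame

# Hypothetical open clip path produced by the publish above:
open_clip = "/mnt/StorageMedia/logik/_04_shows/60sec/logik_010/logik_010.clip"

# Import it into the first shelf reel of the current batch group
# ("Batch Renders" in a stock batch group). Reel indexing is an assumption.
flame.import_clips(open_clip, flame.batch.shelf_reels[0])
```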

And as I mentioned before, I only use those renders for proofing. When I want to update the timeline, I use the version selection button in the timeline ribbon for the segment. Updating the timeline is as simple as selecting the new version I just rendered →

Again, and this is sooo important: I haven't cached anything. Nothing. Everything still plays in realtime because all of these files are being played from my cache filesystem, so it's fast fast fast. But now to the point of this discussion: let's say it's time to archive. Well, we don't want to cache ANYTHING because, in effect, everything IS already cached; it lives on the cache filesystem. What we want to do is create a metadata-only archive of whatever we've deemed important enough to keep inside of Flame. For me, that's really only editorial. Remember, all of the shots are stored external to Flame in their own shot folders, so we really don't need to archive any of those. Any setups you may have saved in Flame's own project hierarchy are important, and all editorial and reel structure is important, so that's really what I'm after archiving.

So I create an archive with these settings →

…and archive project and close. This is only a 60-second spot and not complete by any measure, but all of the data Flame needs to get me back to exactly this place is 60 megs →

Now, on the current show I'm on, with a 3min, 90/60/30, 6 teasers and 20 socials, the metadata-only archive of my project was 30 gigs, which I think is totally acceptable. I can also pass that archive to any other Flame in the shop and within 2 minutes they're up and running at the exact point I'm currently at. No lengthy archive-in process. No issues with missing frames or anything. It's fast. Fucking fast.

Another aspect of this approach that I really like is that you can select an individual shot and right-click to one-button a zip or tar →
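(If you ever want the same result by hand outside of Flame, it's a one-liner. A sketch, with /tmp standing in for wherever you stage outbound files:)

```python
import shutil

SHOW = "/mnt/StorageMedia/logik/_04_shows/60sec"

# Roll shot 010's entire container into /tmp/logik_010.tar.gz
shutil.make_archive("/tmp/logik_010", "gztar", root_dir=SHOW, base_dir="logik_010")
```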

…which you can then send to anyone. All the person on the receiving end needs to do is add a path translation from /mnt/StorageMedia/logik/_04_shows/60sec/ to wherever they choose to drop the shot, and they're off to the races. When they send it back, it's the same on my end.
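That translation is a single mapping; the destination side here is hypothetical, it's just wherever they dropped the folder:

```
/mnt/StorageMedia/logik/_04_shows/60sec/   ->   /Volumes/jobs/60sec/
```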

Anyhow. I know this is more about unmanaged workflow than anything but I hope it helps @johnt

This is one of those times that hitting the “like” button doesn’t feel like enough…thanks for spending the time writing that up @cnoellert!

Legend.

Fuckin EPIC @cnoellert

Thank you so much for sharing this!

Thank you, Chris

There should be some kind of sub-forum that is just copied/clipped posts with things like Chris's workflow.

Like “Tutorials” or something.

@cnoellert mind blown! Amazing setup, thanks so much for sharing!

This is awesome but, more than anything, it highlights how useless Flame's framestore management is… You've completely replaced Stone+Wire with yourself, which is a pretty big indictment of the system. Why doesn't Flame structure its framestore in a similar, project->clip based way?

I know it's a hard problem to solve: managing data is a huge part of what we do. It's the kind of task that can never be completely deferred to The Machine. Engagement with the process is the only way to avoid the many nasty, nasty surprises that loom upon the arrival of a drive, a Digital Pigeon or an Aspera link.

I guess this post stirs some deep frustration I have with Flame that I fear will never change.

Anyway, there are a lot of 'golden nuggets' of wisdom in your post. Thanks heaps for sharing! I will try and yoink what I can for my own process.

I love this workflow, I just wish it could work with ProRes.

Write File nodes only export image sequences.

Click here to vote for exactly that

Happy Sunday friends! We’re going to take a look at publishing on Logik Live today at 2pm ET.

AWESOME, this is GOLD! Thank you for sharing your knowledge in such a comprehensive and explicit fashion.

I've been experimenting with the publishing method on my latest project. So far so good following the @Josh_Laurence method (thanks Josh). Couple of questions though…

  1. How do you deal with artwork/supers? In the spirit of not importing anything, I tried reading them in through a BFX, which was fine until I tried to export a WIP: it kept throwing up an error and wouldn't finish the export.

  2. Still not sure how to handle shot layers? My quick workaround was to make each layer a unique shot and then copy it into the main shot. I’m sure there’s a better way…?

  3. I find the imported BATCH_GROUPS a bit confusing. @cnoellert, you mentioned you just throw them away. Why does Flame import them? Seems like no one uses them, or am I missing something?

  4. @cnoellert how do you deal with edit changes in your workflow?

Just to see the difference, I also archived the job cached and compared sizes: roughly 100 GB for the whole "Published" project (not just the Published folder but the whole job folder, including the graded renders) versus 250 GB for a full Flame archive. Assuming I've done this correctly, that's an enormous difference!

Hi Drew!

  1. For supers in a spot, I usually do import them, as links and not caches. If you've got an internal team or it's just you making them, you can play with the naming and use pattern browsing to get versioning functionality, so that as the clients adjust their requirements based on additional feedback "from the rest of the team", you can just append the file with new versions and save in the same directory. They then get the versioning in the timeline and you don't have to fiddle with import/fit to fill/move around on the timeline.

  2. Shot layers can be handled two ways:

  • Use the <track> token in the Rename Shot process. This will produce a track number where you want it. As with any token that produces a number, you can control the padding by adding hashes: "t<track###>" will make t001.
  • Use the <index> token. The index token is very helpful in that it numbers all the shots uniquely based on record timecode, so it will always result in a unique number. I use "sh<index####@10+10>", which results in "sh0010". I don't use this token very often, though: it produces a permanent shot name, unlike <segment> or <version name>, which can be set to Dynamic and are helpful with burn-ins that reference that information. But still, it absolutely gets the job done when you're moving fast and need that unique name NOW.
  3. For the imported batch groups, I find them to be an amazing feature because I use them. Each shot becomes its own batch setup, the write file is designated to make the versions AND the open clip is already linked to the result. I have the system put them in /Libraries/DESKTOPS/<name>/Raw_Batch_Groups/, which makes a library called DESKTOPS, a folder within it with the name of the published sequence, a folder inside there called Raw_Batch_Groups, and all the batch groups with the shot names just drop in there. When I'm working on a team, I put the Raw_Batch_Groups in the shared library and everyone can take a setup that will render to the shared storage, with the results showing up in the timeline. When I'm on my own, I take one or sometimes many, put them on a Desktop and start working on them. Then I save the Desktop back in the folder with the name of the published sequence and everything is traceable.

  4. For edit changes that fall outside the published handles, it really depends. If it's tails that have been added to the shot, I'll drop the new shot with the same start frame into the batch group and set the duration of the write node to fit the new duration. I NEVER touch the TC fields; that gets messy fast. If there are heads added to the shot, often I'll just make an additional publish, take the new batch group, paste the old work in there and slip the Animation to make it work. I don't know why, but published write nodes don't work so well when you copy/paste them. I've had terrible things happen with that before, so I just avoid it.

Congratulations on taking this step. As you get more comfortable with it, it'll pay off for you!

Thanks Josh! I'll admit I've not really delved into pattern browsing, so I'll give it a go. Still unsure why I'd get an error on export, but I'll try again using this method and see what happens.

As for the batch groups, I think I'll head back to the Logik video you and Andy made and walk through it again. I get that each shot becomes a batch and the write file is automatically imported into Flame, but is that just to get going initially? Then, in order to save batches, do you have to either rely on the "save before render" to the file path or manually save the batch in a desktop? Or do you direct your renders to that location? Sorry, this is a very unorthodox way of working for me; not wrong, of course, just different.

How do you deal with populating a shot that appears in more than one edit? Is there a way to employ a Sources Sequence? I tried re-conforming the Output Clips to a new reel group and attempted a "Create Sources Sequence" on two edits that shared the same shots. All worked as expected, but of course I lost my "Shot Names" and undid all the goodness that was made with Publish. I assume you could build a Sources Sequence first, name your shots on that sequence and then Publish? Or maybe there's just a simpler way?

Hi Drew!

The first place I'd look when getting errors on export is the Segment Clips. It's very easy to get tripped up by them: they inherit the original source name, so if your source is A012_C006, that's the name Flame will give your Segment Clip. If you have more than one clip with that name and they're all headed into the same directory, your Segment Clips will stop the publish because they're getting overwritten. The best way I've found to deal with that is to put them into their own directory based on <shot name>.

If you've enabled the setups to be saved, the Write File node will include the directory structure for where to put the batch setups, and where to go later on to find them again if you need them. You don't have to save. You don't have to iterate; it's done for you and linked via the open clip to the render. The renders go where they're supposed to, the open clips go where they're supposed to, the batch setups go where they're supposed to.

You can absolutely use the Connected Conform workflow: you just make your Sources Sequence and publish that. If you don't, you can always use the open clips as conform sources after the fact; just unlink from the original source and relink to the open clips. I wouldn't worry about losing the "Shot Names" on relinking; as long as you've linked to the open clips, whenever you version up the render, the versioning will appear in that conform.

It sounds like you’re really close!

Yours,
Josh

Sorry @drewd, I totally missed this…

If it's just me working on a job, then no, I won't throw them away; there's no reason to. If there are other folks working on a show, it usually pays to load the batch from the shot's folder and work from there.

…whoa and this one too :joy:

If I get a completely new shot, I just add its source(s) to my shots sequence, give them shot numbers and name the segments, copy them out of the shot sequence to the desk and publish them with the same shot-based publishing template. The pubbed, linked segments that come back after the publish get copied into my pubbed shot sequence, and then I'll typically overcut those segments back into my master edits.

It sounds complex but really it’s only one additional step for me above and beyond that of a managed connected conform workflow.

If the new shot is just an old shot shifted around but still within tails, I just shift it around. If it goes out of tails, I give it a half shot number up (so if the original shot was 10, the new shot that goes out of tails by 2 frames, because fucking editors, gets named 15) and it goes through the process above.

Ha! No problem. I think I like the idea of just opening up the batches through the folder.

I've gotten into the habit of saving the iteration (I just keep writing over version 1) so I always have a current backup, rather than waiting until I'm ready to write out a render. Not sure if this will cause me issues down the track, but so far it seems to be working.

I've experienced a few more crashes since using Publish, but I've not gotten to the bottom of it yet.
