Let’s give this a try…
First things first: this approach relies on having a fast cache for Flame, which most people have. Conceptually, the idea is that rather than managing your project hierarchy for shots inside of flame, you relegate flame's management of the project to editorial only and set up a template-based shot workflow, built around a publish, that compartmentalizes each of your shots into a single external location (in this case, your flame's cache) in order to:
- Manage the filetypes you store all of your intermediate work in.
- Manage the “way and when” you archive your media and metadata.
- Maintain maximum performance while maintaining maximum flexibility.
- Minimize the time and space required for flame archives.
In this little example I used my cache NVMe, but it could easily be any other fast filesystem, even a removable one. So first thing's first… create a project and note the location of your cache →
Mine is mounted at /mnt/StorageMedia. So then, I open a file browser and go to that location and drop a templated project, which I rename to logikProject. This is a localized version of our NIM template which has our own brew of folders and organizational containers. Here’s what the root level of the template looks like →
Everybody’s got their own secret sauce. This just happens to be ours. The _04_shows folder is important. That’s where we’re later going to export our conformed edit as shots into what we call a “show” folder (which is just a fancy name for the edit). But before we can publish, we launch flame, load an AAF and do some conforming of Arri Wide Gamut material from a removable drive attached to a Mac, which we auto-convert on conform to ACEScg. I am caching NOTHING. Nada. Once we’ve edit-checked, we name all the shots. I tend to keep it simple, in this case logik_010, 020, 030… etc. I also name the segments based on shot name, material type and layer, so a track-one segment for the first shot would be called logik_010_raw_V01, for example.
Then we go into export with our edit, navigate to the aforementioned “shows” directory, select publish and grab my handy-dandy template for EXR publish to shots →
First thing is the pathway to the published media. Everything will go into individual shot folders starting from the current working directory. Everything is tokenized to avoid any error. I pull out all possible tails and I publish to DWAA 45. Those things are amazing: small, fast and efficient. Next I also set up the publishing of each shot’s batch files and open clips →
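To make the tokenization point concrete, here is a tiny sketch of why tokenized paths matter. The token name `<shot>` and the path pattern below are illustrative only, not flame’s actual publish token syntax; the point is that one pattern fans out into per-shot paths with zero hand-typing, and therefore zero typos.

```shell
# Illustrative only: "<shot>" is NOT flame's real token syntax, it just
# demonstrates how a single tokenized pattern resolves per shot.
pattern='<shot>/img/plates/<shot>_raw_v01'

for shot in logik_010 logik_020 logik_030; do
  # swap the <shot> token for each segment's shot name
  echo "${pattern//<shot>/$shot}"
done
```

One pattern in the template, twenty-odd correct shot paths on disk, and nobody ever mistypes `logik_030` as `logik_003`.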
Each shot’s batch files and renders are relegated to that shot’s folder. That means all metadata, renders and plates for shot 010, for example, are contained inside a single root folder, logik_010. This is so important for later being able to hand off individual shots, because you’re essentially creating a single container for each shot. Then you publish, which in my case looks like this →
Each of the 20-odd shots gets its own folder in the parent show, 60sec. Inside each shot folder we have something that looks like this →
…where the batch files for comp are in the comp/flame/batch folder and the img plates are EXRs in the plates folder of each respective shot. Eventually, batch renders will also end up in img/comp, but that comes on render. On the timeline, we’ve got a new published timeline, linked to the EXRs in the project folder, which absolutely play in realtime despite being fucking gargantuan and 16-bit. We’ve also got a new primary track of open clips whose metadata is linked to the publish, which will later allow us to select renders from our batches and have them appear directly in the timeline without having to change anything other than the version selection box. The publish also directly imported all the batches to the desktop, but I usually just delete them all since they live in the project anyway →
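For anyone who wants to see the container shape without squinting at a screenshot, here’s a sketch of one shot’s folder built by hand. The comp/ and img/ subfolders match what’s described above; the /tmp root and any other names are just stand-ins for the real cache path.

```shell
# Sketch of one shot's single-container layout (the /tmp root is a
# stand-in for the real show folder on the cache filesystem).
shot=/tmp/60sec_demo/logik_010

mkdir -p "$shot/comp/flame/batch"   # published batch setups live here
mkdir -p "$shot/img/plates"         # published EXR plates
mkdir -p "$shot/img/comp"           # write node renders land here later

find "$shot" -type d | sort         # show the resulting tree
```

Everything a shot is, lives under that one folder, which is what makes the later hand-off and archive tricks work.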
Next we go to batch, load batch, navigate to shot 010, load its batch and do shit. You’ll notice that there are no render nodes, only write nodes. In the render settings for the write node you’ll note that all of the pathing is already set up and pointing directly at the img/comp/ folder of each shot. I personally like to have renders follow iteration, so I’ve selected that option. By doing this, v02 of my batch will match the v02 render of EXRs sitting on disk. I don’t need to change anything else because it’s all there. Then I render →
On disk, I’m rewarded with the shot folder being updated accordingly with batch files and render →
AND, the resulting write node’s output clip magically appears in the batch shelf’s batch renders reel! This hocus-pocus is achieved with a script from the Logik Portal which auto-imports open clips post-render to a location you decide. I don’t use these clips for anything other than checking my renders (as I would have, had I rendered using a render node). For me it acts as a tiny bit of glue, making the process more reminiscent of the managed workflow. The script settings are accessed from the flame menu and look like this for me →
And as I mentioned before, I only use those renders for proofing. When I want to update the timeline, I use the version selection button in the timeline ribbon for the segment. Updating the timeline is as simple as selecting the new version I just rendered →
Again, and this is sooo important: I haven’t cached anything. Nothing. Everything is still playing in realtime because all of these files are being played from my cache filesystem, so it’s fast fast fast. But now to the point of this discussion. Let’s say it’s time to archive. Well, we don’t want to cache ANYTHING because, in effect, everything IS already cached, as it lives on the cache filesystem. What we want to do is create a metadata-only archive of whatever we’ve deemed important enough to keep inside of flame. For me, that’s really only editorial. Remember, all of the shots are stored external to flame in their own shot folders, so we really don’t need to archive any of those. Any metadata setups you may save in flame’s own project hierarchy are important, and all editorial and reel structure is important, so that’s really what I’m after archiving.
So I create an archive with these settings →
…and archive project and close. This is only a 60sec spot and not complete by any measure, but all of the data flame needs for me to get back to exactly this place is 60 megs →
Now, on the current show I’m on, with a 3min, 90/60/30, 6 teasers and 20 socials, the metadata-only archive of my project was 30 gigs. I think that is totally acceptable. I can also pass that archive to any other flame in the shop and within 2 mins they are up and running at the exact point I’m currently at. No lengthy archive process. No issues with missing frames or anything. It’s fast. Fucking fast.
Another aspect of this approach that I really like is that you can select an individual shot and right click to one-button a zip or tar →
…which you can then send to anyone. All the person on the receiving end needs to do is add a path translation from /mnt/StorageMedia/logik/_04_shows/60sec/ to wherever they choose to drop the shot, and they’re off to the races. When they send it back, it’s the same on my end.
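If you don’t have the right-click convenience handy, the hand-rolled equivalent is a single tar of the shot folder, which works precisely because each shot is one self-contained directory. The /tmp paths and dummy shot below are stand-ins for the demo; the translation path is the real one from above.

```shell
# Hand-rolled equivalent of the one-button tar: the whole shot travels
# as one file. Paths under /tmp are stand-ins for the real show folder.
show=/tmp/handoff_demo/_04_shows/60sec
mkdir -p "$show/logik_010/img/plates"   # dummy shot just for this demo

cd "$show"
tar -czf logik_010.tar.gz logik_010/    # this one file goes to the artist

# the receiver unpacks it anywhere, then points a path translation at
# that location in place of /mnt/StorageMedia/logik/_04_shows/60sec/
tar -tzf logik_010.tar.gz               # sanity-check the contents
```

Because batch setups, plates and renders all sit inside logik_010/, nothing else needs to be gathered, relinked or explained to the recipient.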
Anyhow. I know this is more about unmanaged workflow than anything but I hope it helps @johnt