Synology alternative

No, that's strictly forbidden.

Exceptions are timelines, but nothing that's used downstream is hard committed; it's only for rendering for playback in the timeline.

So when I archive my timelines, I expect the archive to be in the kilobytes.

No media is rendered inside Flame; we treat Flame like Nuke, with a completely file-based workflow. It's so much better for collaborating.

Every version of each batch is saved on disk, so I don't think having extra archives of those would help anything. But yeah, those would also be tiny.

It's just as fast, really. It's a little bit different, but it enables collaboration where before there wasn't any, so it is what it is. Many have switched and are reaping the benefits. Stuff renders directly to the timeline with versioning; it's like NukeStudio on steroids. It's really good.

I was very skeptical of studios with no local framestore, but I'm a believer now. If the SAN is fast enough, and it can be, it's amazing to not have to cache anything. It's fast, and it's so easy to push setups around between artists. This thread has inspired me to plan for this at my home studio too; I thought 25Gb switches were way more expensive.

a52 peeps here can attest to it. It works very well there.

Maybe it wasn’t in 1994 but in 2024 it works just fine. Not as amazing as it could be—but just fine. The biggest hurdle is artist mind-state.

Write-node-only workflows and no-cache archives are, in my mind, what makes Flame unique today: the old stone-tax days get blended with more modern and pipeline-able approaches to effects-editorial.

Yeah, they have big storage that's all NVMe I think, as it's crazy fast, and no one caches anything. Large 1TB setups are pushed around in literally seconds, so there's more collab on shots. For me at home, the caching on a job I was on last week took ages and filled my 6TB SSD storage so quickly that I had to keep everything uncached for the CG renders. In this instance I wish I had faster server-side storage. a52 does write out renders, or you can do local renders for your own WIPs. It's a nice way to work, as the write-out always updates the timeline, making people take more care in what gets published. I figured 7TB hard drives are so cheap that I went that way to archive my projects. It is never ending.

@finnjaeger thank you for your hint on Synology Drive. It's working well on my Linux server; for the last 2 days it has been syncing my server-side project to my slower NAS, which in turn syncs to Dropbox for a client-side cache. It seems more stable than Dropbox on Ubuntu, for sure.


Do you set your Flame project to DWAB? I've been meaning to change from uncompressed, as I want to get more efficient with my data storage. What do you all recommend as a good format to use without losing too much info? I've never been brave enough to choose anything but uncompressed!
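Not an answer on the project setting itself, but if you want to see what DWAB actually does to your files before committing, here's a minimal sketch using OpenImageIO's Python bindings (an assumption; the file names are placeholders) to re-save an uncompressed EXR as DWAB and compare sizes:

```python
# Minimal sketch: re-save an uncompressed EXR with DWAB compression and
# compare file sizes. Assumes OpenImageIO's Python bindings are installed;
# the file names are hypothetical placeholders.
import os
import OpenImageIO as oiio

src_path = "plate_uncompressed.exr"   # hypothetical input
dst_path = "plate_dwab.exr"           # hypothetical output

buf = oiio.ImageBuf(src_path)
# "dwab:45" = DWAB at the default quality level 45; lower = smaller/lossier.
buf.specmod().attribute("compression", "dwab:45")
if not buf.write(dst_path):
    raise RuntimeError(buf.geterror())

print(f"uncompressed: {os.path.getsize(src_path) / 1e6:.1f} MB")
print(f"dwab:45     : {os.path.getsize(dst_path) / 1e6:.1f} MB")
```

Flipping back and forth between the two files is also a quick way to convince yourself how little visible difference DWAB makes on typical plates.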

I hear you on the $$$$. Cost-effective older machines can be turned into servers, local storage is still cheap, and I'll stick with it for now. I just ordered some more SSDs for my PCIe RAID inside Flame; 10GB SSD internal is pretty cheap these days! Keep it that way until the next steps become necessary, I'd say.

I think they take it seriously but it's like all things, subject to hit-rate, availability of resources… The good news is that every day more and more artists understand that there is a better way to work, whether you're in a group setting or not.

Given that the tools and workflows being used are official workflows and tools from Autodesk, there's nothing really hacky about it at all. I think it's all about comfort level. Easing into the idea of it all. Honestly, once you've flipped versions on your timeline to show a client what a shot "was" and now what it "is" with the effortlessness the openclip workflow affords you, you'll never want to use anything else for commercial projects.

At least that’s my perspective.


On and off for maybe 4 years. Dedicated for the last 2 or so. All commercial projects, and yes, you can work this way from home.

You can also rclone your NAS over 10Gbit, giving you cache-free crazy speeds… I like the term "localisation" rather than the type of caching that Flame does. Sources usually don't change, so "copy to local storage" is a much better idea than "caching"/re-encoding, IMHO.
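For anyone curious, a minimal sketch of that kind of localisation pass, driving rclone from Python; the NAS and SSD paths are hypothetical, and the rclone binary is assumed to be on the PATH:

```python
# Minimal sketch: "localise" a job's plates from the NAS to a fast local SSD
# with rclone instead of re-encoding/caching them inside Flame.
# Paths are hypothetical; assumes rclone is installed and on PATH.
import subprocess

NAS_JOB   = "/mnt/nas/jobs/ACME_1234/plates"   # hypothetical source on the NAS
LOCAL_JOB = "/mnt/ssd/jobs/ACME_1234/plates"   # hypothetical local SSD mirror

subprocess.run(
    [
        "rclone", "sync", NAS_JOB, LOCAL_JOB,
        "--transfers", "16",   # parallel file copies to help saturate 10GbE
        "--checkers", "16",    # parallel size/modtime checks
        "--progress",
    ],
    check=True,
)
# Re-running this is cheap: unchanged frames are skipped, so it behaves like
# a one-way "copy to local storage" rather than Flame-style re-encoding.
```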

…also, and this is the point @finnjaeger is trying to make, he could have gone with a faster network and more SSDs and all that, but instead, keeping his existing infrastructure in place and adding an SSD cache volume to each of his machines essentially gives him a boost beyond what's possible with 25gig-E infrastructure for his specific set of needs (which actually happen to be quite similar to my own).

It effectively allows every node on his production network to function at a rate of data transfer that would otherwise be improbable or even impossible, regardless of whether the node has a physical network connection to the production network and regardless of geographic proximity.

Not really. What I'm saying is that in each workstation, you put an SSD. When your Lucid filesystem mounts, it basically stores all data that you read from and write to that filesystem first on the aforementioned SSD cache. In the case of a write, it writes to the SSD cache at stupidly high data rates, and then once it's written, Lucid transfers the file back to whatever bucket in the cloud you're using in the background, making it available to everyone else on your filespace. So the user experience of, say, exporting feels more akin to writing to a local drive.

Reading initiates a fetch from your cloud bucket, downloads that file to your cache, and then you're reading it at SSD rates. Yes, you have to download it first, but once it's there and cached you're reading it at SSD-level speeds.

Now, there are mechanisms you can use to prefetch data to your local cache, essentially localizing data before you actually request it so it's just there waiting on your cache. That process is called pinning. It's like the Dropbox "Make this shit available offline" or whatever the fuck it's called. At any rate, in our particular market sector, you as an artist could pin a job at the beginning by right-clicking on the root level of the job folder on the filespace and selecting pin, and then everything everyone writes to the job will be immediately localized on your cache volume without you having to request it.

Basically a localized copy of the job that updates in realtime without you having to do anything.
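If you wanted to script that pin at job kickoff instead of right-clicking, something like the sketch below would do it. Assumption: the LucidLink client exposes pinning through its `lucid` command line as well as the UI; verify the exact subcommand on your install, and the job path is hypothetical.

```python
# Minimal sketch: pin a job's root folder on the filespace at job kickoff so
# everything written under it gets localized to the workstation's SSD cache.
# ASSUMPTION: the LucidLink client provides a "pin" subcommand on its "lucid"
# CLI (check `lucid help` on your install); the mount point and job folder
# below are hypothetical.
import subprocess

JOB_ROOT = "/Volumes/cloud/jobs/ACME_1234"  # hypothetical job root on the filespace

# Pin = prefetch now and keep localized as new files land, per the post above.
subprocess.run(["lucid", "pin", JOB_ROOT], check=True)
```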

Does that help?


…all the pathing for where LucidLink mounts is controlled by the admin of the filespace. So they define where the volume mounts for EVERY machine that mounts the filespace. In my case our production filespace mounts at:

/Volumes/cloud for Mac and Linux and L: for Win2k.

So whether you're at home, on-prem, or in Sweden, the pathing is always the same. A good example is that I can be working on a production machine, archive out the whole project uncached, which is like a 64MB archive for all of my active timelines, open that archive at home on a Mac, and all of my timelines are just there. There's no relinking because the pathing all remains the same.
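The only wrinkle is Windows living at a drive letter instead of /Volumes/cloud. If you ever need to hand a path between platforms, a tiny helper covers it, since everything after the mount point is identical; this is purely illustrative, not part of LucidLink:

```python
# Minimal sketch: translate a filespace path between the Mac/Linux mount point
# and the Windows drive letter quoted above. The mount points come from the
# post; the helper itself is a hypothetical illustration.
POSIX_MOUNT = "/Volumes/cloud"
WIN_MOUNT = "L:"

def to_windows(path: str) -> str:
    """'/Volumes/cloud/jobs/spot/shot_010.exr' -> 'L:\\jobs\\spot\\shot_010.exr'"""
    tail = path.removeprefix(POSIX_MOUNT)
    return WIN_MOUNT + tail.replace("/", "\\")

def to_posix(path: str) -> str:
    """'L:\\jobs\\spot\\shot_010.exr' -> '/Volumes/cloud/jobs/spot/shot_010.exr'"""
    tail = path.removeprefix(WIN_MOUNT)
    return POSIX_MOUNT + tail.replace("\\", "/")
```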


What @cnollert said; Nuke has a feature built in for just that.

If you look at it, reading the same files 2000x over the network is ridiculous if they don't change, so cloning them once to local fast storage is where it's at.

Caching in Flame is just a bit old school.


It sounds to me that if your archive sizes are your main issue, it's not infrastructure that's the problem but workflow.

You already know what I’m going to say next…


Your archive sizes are a function of working managed and archiving that managed media.

  1. You import an edit from an offline house

  2. Populate that edit with cached media from the NAS

  3. Break out that edit into shot groups on the desktop with cached media

  4. Render managed media out of the batch groups' Render nodes

  5. Swap that managed media back into your edit

…Then you rinse and repeat until your edit is comprised of media which exists nowhere other than inside your stone fs partition.

When you go to archive, you have no choice other than to archive out massive amounts of data that you've created inside Flame, because that's the only place it exists.

By comparison, you could…

  1. You import an edit from an offline house

  2. Populate that edit with cached media from the NAS

  3. Break out that edit into batch groups on the NAS/desktop via a shot-based publish template, and

  4. Render unmanaged media out of the published batch groups' Write nodes

  5. Select the rendered version of a shot's Write node in your edit and cache the result.

…Then you rinse and repeat, but this time your edit will be comprised only of media which exists primarily somewhere other than inside your stone fs partition.

When you go to archive, you can archive without cache because all of your media exists outside of Flame. How effective this strategy is relies solely on how vigilant the artist is about not creating media inside Flame.
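To make the unmanaged variant concrete, here's a purely hypothetical sketch of a shot-based, versioned Write-node path template. It is not Autodesk's publish template, just an illustration of why the media ends up versioned on the NAS instead of inside the stone fs:

```python
# Minimal sketch of a versioned, shot-based render path for an unmanaged
# Write-node workflow. The folder layout and naming are hypothetical, meant
# only to illustrate why the media lives outside Flame's stone fs.
from pathlib import Path

def write_node_path(job_root: str, shot: str, task: str, version: int) -> Path:
    """Build e.g. <job>/shots/SH010/renders/comp/v003/SH010_comp_v003.[####].exr"""
    vtag = f"v{version:03d}"
    folder = Path(job_root) / "shots" / shot / "renders" / task / vtag
    return folder / f"{shot}_{task}_{vtag}.[####].exr"

def next_version(job_root: str, shot: str, task: str) -> int:
    """Look at what's already on disk and hand back the next version number."""
    folder = Path(job_root) / "shots" / shot / "renders" / task
    existing = [int(p.name[1:]) for p in folder.glob("v[0-9][0-9][0-9]") if p.is_dir()]
    return max(existing, default=0) + 1

# Example: the edit/openclip then just points at whatever version is current.
print(write_node_path("/mnt/nas/jobs/ACME_1234", "SH010", "comp",
                      next_version("/mnt/nas/jobs/ACME_1234", "SH010", "comp")))
```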

So network speed only matters if you don't want to automatically cache your media inside Flame as you work. If you only have 10gig (which is incidentally all we have), it doesn't affect the overall strategy of making small archives; it simply eliminates one step: not caching for playback of 4.6K frames, lol. No matter what, you're not going to archive cached, so that aspect doesn't really matter in one regard. The big factor is moving your renders external.

Does that make sense?


Thunderbolt networking maxes out at 20Gbit, which nets you more like 16Gbit.

What kinds of files are you reading that need that much bandwidth, as long as you don't cache "raw" on a framestore that's on your NAS?

The only way I see network speed mattering is in shared environments where you have a shared framestore.

And if you are a single person, why does any of this even matter? Even if you want to work like the old days, you could have a 1Gbit NAS and just cache everything to local fast storage.
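To put rough numbers on the bandwidth question: the back-of-the-envelope math below uses assumed resolutions, bit depths, and frame rates (not figures from this thread), but it shows that only fully uncompressed large-format playback really threatens a 16Gbit effective Thunderbolt link:

```python
# Back-of-the-envelope check of when 10/16/25 Gbit links actually get tight.
# The resolution, bit depth, and frame rate below are assumptions for
# illustration, not figures from the thread.
def stream_gbit_per_s(width, height, channels, bytes_per_channel, fps):
    bytes_per_frame = width * height * channels * bytes_per_channel
    return bytes_per_frame * fps * 8 / 1e9  # bytes/s -> Gbit/s

# 4.6K-ish RGBA half-float EXR, fully uncompressed, at 24 fps:
big = stream_gbit_per_s(4608, 3164, 4, 2, 24)
# UHD 10-bit DPX (RGB packed into 4 bytes per pixel) at 24 fps:
uhd = stream_gbit_per_s(3840, 2160, 1, 4, 24)

print(f"4.6K RGBA half @ 24fps : {big:.1f} Gbit/s")   # ~22 Gbit/s
print(f"UHD 10-bit DPX @ 24fps : {uhd:.1f} Gbit/s")   # ~6.4 Gbit/s
```

So a single uncompressed 4.6K half-float stream is the kind of thing that outruns ~16Gbit, while typical UHD DPX playback sits comfortably inside a 10Gbit link.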
