It's just as fast really. It's a little different, but it enables collaboration where before there wasn't any, so it is what it is. Many have switched and are raking in the benefits. Stuff renders directly to the timeline with versioning, it's like NukeStudio on steroids. It's really good.
I was very skeptical of studios with no local framestore, but I'm a believer now. If the SAN is fast enough, and it can be, it's amazing to not have to cache anything. It's fast and so easy to push setups around between artists. This thread has inspired me to plan for this at my home studio too - I thought 25Gb switches were way more expensive.
Maybe it wasn't in 1994, but in 2024 it works just fine. Not as amazing as it could be, but just fine. The biggest hurdle is artist mind-state.
Write-node-only workflows and no-cache archives are, in my mind, what makes Flame unique today: the old stone-tax days get blended with more modern and pipeline-able approaches to effects-editorial.
Yeah, they have big storage that's all NVMe I think, as it's crazy fast, and no one caches anything. Large 1TB setups are pushed around in literally seconds, so there's more collab on shots. For me at home, the caching on a job I was on last week took ages and filled my 6TB of SSD storage so quickly that I had to keep everything uncached for the CG renders. In this instance I wish I had faster server-side storage. a52 does write out renders, or you can do local renders for your own WIPs. It's a nice way to work, as the write-out always updates the timeline, making people take more care in what gets published. I figured 7TB hard drives are so cheap that I went that way to archive my projects. It is never ending.
@finnjaeger thank you for your hint on syn Drive. It's working well on my Linux server; for the last 2 days it has been syncing my server-side project to my slower NAS, which in turn syncs to a Dropbox for the client-side cache. It seems more stable than Dropbox on Ubuntu for sure.
Do you set your Flame project to DWAB? I've been meaning to change from uncompressed as I want to get more efficient with my data storage. What do you all recommend as a good format to use without losing too much info? I've never been brave enough to choose anything but uncompressed!
I hear you on the $$$$. Cost-effective older machines can be turned into servers, local storage is still cheap and I'll stick with it for now. I just ordered some more SSDs for my PCIe RAID inside Flame; 10GB SSD internal is pretty cheap these days! Keep it that way until the next steps become necessary, I'd say.
I think they take it seriously, but it's like all things, subject to hit-rate, availability of resources... The good news is that every day more and more artists understand that there is a better way to work, whether you're in a group setting or not.
Given that the tools and workflows being used are official workflows and tools from Autodesk, there's nothing really hacky about it at all. I think it's all about comfort level, easing into the idea of it all. Honestly, once you've flipped versions on your timeline to show a client what a shot 'was' and now what it 'is' with the effortlessness the openclip workflow affords you, you'll never want to use anything else for commercial projects.
You can also rclone your NAS over 10Gbit, giving you cache-free crazy speeds... I like the term 'localisation' rather than the type of caching that Flame does; sources usually don't change, so 'copy to local storage' is a much better idea than 'caching'/re-encoding imho.
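To make the 'localisation' idea concrete, here's a minimal Python sketch of roughly what an rclone copy is doing for you: pull sources from the NAS to local fast storage once, and skip anything that hasn't changed. Both paths are hypothetical examples, not anyone's real setup.

```python
# Minimal "localisation" sketch: mirror sources from the NAS to a local SSD
# once, skipping files that haven't changed, instead of re-reading (or
# re-encoding) them over the network. Both paths are hypothetical.
import shutil
from pathlib import Path

NAS_SOURCES = Path("/mnt/nas/jobs/JOB_1234/sources")   # slow shared storage
LOCAL_CACHE = Path("/mnt/ssd/localised/JOB_1234")      # fast local SSD

def localise(src_root: Path, dst_root: Path) -> None:
    for src in src_root.rglob("*"):
        if not src.is_file():
            continue
        dst = dst_root / src.relative_to(src_root)
        # Sources rarely change, so only copy when missing or different.
        if dst.exists():
            s, d = src.stat(), dst.stat()
            if s.st_size == d.st_size and int(s.st_mtime) <= int(d.st_mtime):
                continue
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)   # copy2 keeps the mtime for the next comparison

if __name__ == "__main__":
    localise(NAS_SOURCES, LOCAL_CACHE)
```

rclone layers parallel transfers and optional checksum comparison on top of the same idea, which is why it flies over a 10Gbit link.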
...also, and this is the point @finnjaeger is trying to make: he could have gone with a faster network and more SSDs and all that, but instead, keeping his existing infrastructure in place and adding an SSD cache volume to each of his machines essentially gives him a boost beyond what's possible with 25GigE infrastructure for his specific set of needs (which actually happen to be quite similar to my own).
It effectively allows every node on his production network to function at a rate of data transfer that would otherwise be improbable or even impossible, regardless of whether the node has a physical network connection to the production network and regardless of geographic proximity.
Not really. What I'm saying is that in each workstation you put an SSD. When your Lucid filesystem mounts, it basically stores all data that you read and write to that filesystem first on the aforementioned SSD cache. In the case of a write, it writes to the SSD cache at stupid high data rates, and then once it's written, Lucid transfers the file back to whatever bucket in the cloud you're using in the background, making it available to everyone else on your filespace. So the user experience of, say, exporting feels more akin to writing to a local drive.
Reading initiates a fetch from your cloud bucket, which downloads that file to your cache, and then you're reading it at SSD rates. Yes, you have to download it first, but once it's there and cached you're reading it at SSD-level speeds.
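This isn't how LucidLink is implemented internally, just a rough Python sketch of the write-back pattern being described: writes land on the local SSD at full speed, and a background worker pushes the finished file up to the cloud bucket afterwards. All paths and names are made up.

```python
# Sketch of the write-back cache pattern described above (not LucidLink's
# actual implementation): writes hit a local SSD cache first, then a
# background thread uploads the finished file to the cloud bucket.
import queue
import shutil
import threading
from pathlib import Path

SSD_CACHE = Path("/tmp/ssd_cache")       # stand-in for the fast local cache volume
CLOUD_BUCKET = Path("/tmp/fake_bucket")  # stand-in for the real object-store bucket

upload_queue = queue.Queue()

def cached_write(rel_path: str, data: bytes) -> Path:
    """Write at SSD speed, then hand the file to the background uploader."""
    local = SSD_CACHE / rel_path
    local.parent.mkdir(parents=True, exist_ok=True)
    local.write_bytes(data)          # fast: local SSD
    upload_queue.put(local)          # the slow part happens in the background
    return local

def uploader() -> None:
    while True:
        local = upload_queue.get()
        remote = CLOUD_BUCKET / local.relative_to(SSD_CACHE)
        remote.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(local, remote)  # in reality: an upload to the cloud bucket
        upload_queue.task_done()

threading.Thread(target=uploader, daemon=True).start()

cached_write("JOB_1234/exports/SH010_comp_v002.mov", b"\x00" * 1024)
upload_queue.join()                  # block until the background upload finishes
```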
Now, there are mechanisms you can use to prefetch data to your local cache, essentially localizing data before you actually request it so it's just there waiting on your cache. That process is called pinning. It's like the Dropbox 'Make this shit available offline' or whatever the fuck it's called. At any rate, in our particular market sector, you as an artist could pin a job at the beginning by right-clicking on the root level of the job folder on the filespace and selecting pin, and then everything anyone writes to the job will be immediately localized on your cache volume without you having to request it.
Basically a localized copy of the job that updates in realtime without you having to do anything.
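A sketch of what pinning amounts to, with made-up paths: keep walking the pinned job root and pulling anything new into the local cache before anyone asks for it. A real client reacts to change notifications rather than polling, but the effect is the same.

```python
# Sketch of "pinning": continuously prefetch everything under a pinned job
# root into the local cache, so new files are already local before they are
# requested. Paths and the poll interval are hypothetical.
import shutil
import time
from pathlib import Path

PINNED_JOB = Path("/Volumes/cloud/JOB_1234")   # pinned root on the filespace
LOCAL_CACHE = Path("/tmp/pin_cache/JOB_1234")  # stand-in for the SSD cache

def prefetch_once(src_root: Path, dst_root: Path) -> int:
    """Copy anything not yet cached; return how many files were pulled down."""
    pulled = 0
    for src in src_root.rglob("*"):
        if not src.is_file():
            continue
        dst = dst_root / src.relative_to(src_root)
        if not dst.exists() or dst.stat().st_size != src.stat().st_size:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            pulled += 1
    return pulled

while True:
    pulled = prefetch_once(PINNED_JOB, LOCAL_CACHE)
    if pulled:
        print(f"localised {pulled} new file(s)")
    time.sleep(30)   # a real client reacts to change events instead of polling
```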
...all the pathing for where LucidLink mounts is controlled by the admin of the filespace. So they define where the volume mounts for EVERY machine that mounts the filespace. In my case our production filespace mounts at:
/Volumes/cloud for Mac and Linux and L: for Win2k.
So whether you're at home, on-prem or in Sweden, the pathing is always the same. A good example: I can be working on a production machine, archive out the whole project uncached, which is like a 64MB archive for all of my active timelines, open that archive at home on a Mac, and all of my timelines are just there. There's no relinking because the pathing all remains the same.
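A tiny sketch of why identical mount points mean no relinking: every platform resolves the same job-relative path against its own admin-defined mount root. The mount points are the ones from the post above; the job-relative path is a made-up example.

```python
# Sketch of platform-consistent pathing: one admin-defined mount point per OS,
# so the same job-relative path resolves to the same file everywhere and
# setups never need relinking.
import platform
from pathlib import PurePosixPath, PureWindowsPath

MOUNTS = {
    "Darwin": PurePosixPath("/Volumes/cloud"),   # Mac
    "Linux": PurePosixPath("/Volumes/cloud"),
    "Windows": PureWindowsPath("L:/"),
}

def resolve(job_relative: str):
    """Turn a job-relative path into the absolute path for this machine."""
    return MOUNTS[platform.system()] / job_relative

print(resolve("JOB_1234/shots/SH010/comp/v002/SH010_comp_v002.1001.exr"))
# Mac/Linux: /Volumes/cloud/JOB_1234/...    Windows: L:/JOB_1234/...
```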
What @cnollert said, Nuke has a feature built in for just that.
If you look at it, reading the same files 2000x over the network is ridiculous if they don't change, so cloning them once to local fast storage is where it's at.
Your archive sizes are a function of working managed and archiving that managed media.
You import an edit from an offline house
Populate that edit with cached media from the NAS
Break out that edit into shot groups on the desktop with cached media
Render managed media out of the batch groups' render nodes
Swap that managed media back into your edit
...Then you rinse and repeat until your edit is composed of media which exists nowhere other than inside your stone-fs partition.
When you go to archive, you have no choice other than to archive out massive amounts of data that you've created inside Flame, because that's the only place it exists.
By comparison, you could...
You import an edit from an offline house
Populate that edit with cached media from the NAS
Break out that edit into batch groups on the NAS/desktop via a shot-based publish template (see the path sketch below), and
Render unmanaged media out of the published batch groups' write nodes
Select the rendered version of a shot's write node in your edit and cache the result.
...Then you rinse and repeat, but this time your edit will only be composed of media whose primary home is somewhere other than your stone-fs partition.
When you go to archive, you can archive without cache because all of your media exists outside of Flame. How effective this strategy is relies solely on how vigilant the artist is about not creating media inside Flame.
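For illustration, here's a minimal sketch of the kind of shot-based publish path the write nodes would render into; the folder convention and names are hypothetical, not Flame's actual write-node token syntax. The point is that the frames live at a predictable, unmanaged location outside the stone-fs, so the timeline just references them and the archive stays tiny.

```python
# Sketch of a shot-based publish template for unmanaged write-node renders.
# The folder convention is a made-up example, not an Autodesk one.
from pathlib import Path

JOB_ROOT = Path("/Volumes/cloud/JOB_1234")   # shared, unmanaged storage

def publish_path(shot: str, version: int, frame: int, ext: str = "exr") -> Path:
    """Build the render path a write node would target for one frame."""
    v = f"v{version:03d}"
    return JOB_ROOT / "shots" / shot / "comp" / v / f"{shot}_comp_{v}.{frame:04d}.{ext}"

print(publish_path("SH010", 2, 1001))
# /Volumes/cloud/JOB_1234/shots/SH010/comp/v002/SH010_comp_v002.1001.exr
```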
So network speed only matters if you don't want to automatically cache your media inside Flame as you work. If you only have 10gig (which is incidentally all we have) it doesn't affect the overall strategy of making small archives, it simply eliminates one step: not caching for playback on 4.6K frames lol. No matter what, you're not going to archive cached, so that aspect doesn't really matter in one regard. The big factor is moving your renders external.
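For a sense of why 10gig gets tight on uncompressed playback, here's a quick back-of-envelope calculation; the 4608x2592 resolution and 16-bit half-float RGB are assumptions for a '4.6K' frame, not figures from the thread.

```python
# Back-of-envelope bandwidth for uncompressed 4.6K playback.
# Resolution, bit depth and channel count are assumptions.
width, height = 4608, 2592
channels = 3            # RGB
bytes_per_sample = 2    # 16-bit half float, uncompressed
fps = 24

frame_bytes = width * height * channels * bytes_per_sample
gbit_per_sec = frame_bytes * fps * 8 / 1e9

print(f"{frame_bytes / 1e6:.1f} MB/frame, {gbit_per_sec:.1f} Gbit/s at {fps} fps")
# ~71.7 MB/frame and ~13.8 Gbit/s -- more than a 10GbE link can sustain
```

Which is why, on a 10gig link, realtime playback of uncompressed 4.6K pretty much forces you to either cache locally or buy a faster pipe.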
Thunderbolt networking maxes out at 20Gbit, which nets you more like 16Gbit in practice.
What kinds of files are you reading that need that much bandwidth, as long as you don't cache 'raw' to a framestore that's on your NAS?
The only way I see network speed mattering is in shared environments where you have a shared framestore.
And if you are a single person, why does any of this even matter? Even if you want to work like the old days, you could have a 1Gbit NAS and just cache everything to local fast storage, no?