@maz As I'm also a one-man artist in my studio, I keep things tight, and this discussion has led me to the thought of changing the role of my Flame's local SSD to exactly what finn and chris mentioned. I'll use my server to store my project, and I'll use my 1Gbit NAS to make a copy of the current project if I need to share it with another facility via Dropbox and Synology Drive. That is working really well. Now I should also make a Synology Drive sync from my server to my SSD, so my current project lives on the SSD as a super-fast cache… So no more Flame caching, since it's reading directly from the local SSD. The sync copies it to my Ubuntu server, which has the largest capacity, then to the NAS if needed for offsite sync. I like it! And then the upgrade costs would just be for faster speeds. Currently I have fiber between my Flame and server and my bottleneck is the server RAID, but that's totally doable and functional.
All this syncing is automatic, and it's way cleaner than "caching" the project on the Flame, which takes a long time if I have large EXRs. Then archiving can be instantaneous, and furthermore I'll have three copies of my project. I'll have to look into absolute paths or whatever so I don't run into issues if I open my project down the track from my server instead of my SSD when I need to do a little tweak…
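One way around that absolute-path worry, as a sketch only (the mount points and folder names here are hypothetical, nothing Flame-specific): have the project always reference media through one stable path, and repoint that path at whichever copy is live.

```bash
# Hypothetical layout: the Flame project always references /mnt/projects,
# a symlink we repoint depending on where the live copy sits.

# Normal operation: point at the local SSD copy
sudo ln -sfn /mnt/ssd/projects /mnt/projects

# Opening the project from the server copy later for a tweak
sudo ln -sfn /mnt/server/projects /mnt/projects
```

Since the paths stored inside the project never change, either copy opens cleanly.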
I'm really thinking about the hardware I have right now and how best to utilise it, so forgive me if there's a way better way to set this up from scratch, but this is what I have now: an Ubuntu PC that is my server with 18TB (a bunch of 7200rpm drives, nothing fancy), a Linux Flame with 12TB of SSDs in an internal hardware RAID, and a slow Synology that frankly I've been using only for photo/video/surveillance for years. I have one client that loves Dropbox, and he has a server at his office that I need to sync my project to. As I was having a hard time with Dropbox on Ubuntu lately (it never synced and I had to pause and unpause all the time), I started using Synology Drive ShareSync between my Synology and my Ubuntu server. What I'm thinking of doing is syncing it to my Flame SSD as well and not caching anymore.
So I'll effectively have my project sitting on my internal storage in Flame, continually being backed up to the server, and if I need it out in the world of Dropbox I can sync that too, but that part isn't necessary. rsync to back up once a night, for instance, or ShareSync if I can get it to work on a non-Synology machine like Rocky (I need to research that bit).
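A minimal sketch of that nightly rsync idea, assuming passwordless SSH between the machines; the hostnames and paths are made up, and I'd test without --delete first:

```bash
#!/usr/bin/env bash
# Nightly one-way backup: Flame SSD -> Ubuntu server -> Synology NAS.
# Hypothetical hosts/paths. -a preserves permissions and timestamps;
# --delete mirrors removals, so make sure it points at the right folders.
set -euo pipefail

# Hop 1: local SSD project to the big server RAID
rsync -a --delete /mnt/ssd/projects/ server:/srv/projects/

# Hop 2: server copy on to the NAS for offsite/Dropbox hand-off
ssh server 'rsync -a --delete /srv/projects/ nas:/volume1/projects/'
```

Drop that into a cron entry like `0 2 * * * /usr/local/bin/nightly-sync.sh` and you get the three copies without ever touching Flame's cache.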
This could be amazing for loading shots and accessing media, all uncached now, and also for archiving: no need to include media anymore, and fast as hell.
If you continuously update your project, once a month let's say, you'd have an offline project repository on your larger server side. Maybe just date your project folders and be a good housekeeper so you keep your project small and nimble at all times: offline unused or older timelines/shots to a non-synced project folder, and keep backing that up and offloading it as much as you can. Your archive sizes of 14TB seem super large even for 1-hour shows? It all depends on compression, file formats, etc., of course.
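As a rough illustration of that dated-folder housekeeping (all the directory names here are hypothetical):

```bash
# Hypothetical monthly housekeeping: move older/unused timelines and shots
# out of the synced project tree into a dated, non-synced offline folder.
stamp=$(date +%Y-%m)
mkdir -p /mnt/server/offline/"$stamp"

mv /mnt/ssd/projects/myproject/old_timelines /mnt/server/offline/"$stamp"/
```

The synced project stays small and nimble, and the offline folder is what you keep backing up and offloading.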
I wouldn't go proxy; it used to be problematic, changed the way masks worked, and was generally more buggy than useful. Perhaps it's better now, but I don't think that's the way to go.
Another way to keep sizes down is to always consolidate larger media files… Never have massive 2000-frame clips lying around if you're only using 48 frames of them. Consolidating and writing them out to your current project keeps sizes down considerably on longer-form jobs.
Good file management and investing in the read/write speed between Flame and storage are key.
If you get a storage solution that's as fast as you need, then you don't even need any local storage… You have that and some fiber and you're done. Easy. No sync, no muss, no fuss.
I was weighing exactly that 12-bay NVMe QNAP with 25Gbit against Lucid.
I have never seen Flame perform as fast as with Lucid on a local cache, not even with 40Gbit Fibre Channel to an SSD SAN.
Local storage will always win against any network storage in "raw speed". For my part, I'm juggling timelines like a madman and have cached nothing in Flame, which feels great.
The storage landscape presents significant challenges for users looking to expand beyond Synology. While DSM and Synology Drive are rock-solid tools, their recent moves to lock down hardware choices and their aging components have pushed users toward other options. QNAP and TrueNAS stand out as the main alternatives, but both come with serious trade-offs.
QNAP brings better hardware to the table, with smart features like Qtier automated storage tiering and lots of room to expand. The problem is their OS has been shaky, and their sync client just doesn't match up to Synology Drive. It's tough to risk switching when remote collaboration is crucial to your workflow.
TrueNAS gives you complete freedom with hardware and drives, letting you build exactly what you want. But their collaboration tools - usually NextCloud or Seafile - can't touch the smooth experience of Synology Drive. Plus, you'll need to be comfortable with more hands-on management and troubleshooting.
There's a middle ground worth considering: running Synology's DSM in a virtual machine on custom hardware or TrueNAS. This lets you keep using the software you trust while breaking free from Synology's hardware restrictions. The catch is you'll need solid technical skills to set it up and keep it running smoothly.
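For the curious, a very rough sketch of what that can look like on a plain KVM host using virt-install. Everything here is an assumption: dsm-loader.img stands in for whatever DSM boot loader you manage to source (finding one, and the licensing question, is its own research project), and the sizes and bridge name are placeholders.

```bash
# Hypothetical KVM definition for a DSM VM on custom hardware.
# dsm-loader.img is a placeholder boot loader image; br0 is an
# existing bridge on the host; disk sizes are arbitrary.
virt-install \
  --name dsm \
  --memory 8192 \
  --vcpus 4 \
  --import \
  --disk path=/var/lib/libvirt/images/dsm-loader.img,bus=sata \
  --disk path=/var/lib/libvirt/images/dsm-data.qcow2,size=500,format=qcow2,bus=sata \
  --network bridge=br0 \
  --os-variant generic
```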
For someone heavily invested in Synology's ecosystem, especially relying on Synology Drive for remote work, switching platforms is a big risk. Having reliable, seamless tools often matters more than raw hardware specs. Before making any moves, you need to be sure the benefits outweigh disrupting a workflow that's currently working well.
I agree here; however, we have deployed GoodSync instead and it works very well for remote collaboration. But I agree Synology Drive is pretty good for what it is.
I've come to look at these tradeoffs differently over time.
The complaint about the cost of standard solutions, and the restrictions on choice, is very common and very real. If you're early in your career, or you have a young business and funds are much more limited than time, this makes reasonable sense and often you have no choice.
That said, once you're in a stable and more budget-friendly state of work, it's time to shed these tendencies. The tradeoffs in terms of non-billable time spent, the disruption at the worst possible time, the lack of support when you want it - they can be all good fun and entertainment, but in the end they're not good business, and they do get old over time.
Unless you're a disruptive start-up that has to chart a new path where there are no standard solutions. But that doesn't really apply to most of us here.
My current Linux Flame is a non-standard config. I decided on it for two reasons: one is that I like dealing with Puget Systems a lot more than with HP or Dell; two, at the time of purchase, 12th-gen Intel CPUs had just reached the market, but it takes HP and Dell 6-9 months to roll those into their high-end configs. So it was either buy a dated design, or go unsupported and find my own optimal solution.
The system works nicely and I enjoy working with it, and my hardware support is great. But I've had numerous times where I spent a day or two having to figure things out myself. And I have had several support tickets with ADSK which were most likely bugs in their software, but where their devs were not allowed to help, because I'm on a non-standard system. My "Frustrating Day in Flame" saga being just the latest.
And on a job I just finished, the production company had migrated from a Synology to a TrueNAS box. For a number of valid reasons, they made a hardware change to the box mid-project, which caused the motherboard to fail. The NAS was unavailable for several days during a critical deadline. It was all handled with some backup hardware and all is good. But it's another reminder that going off the beaten path can add a non-trivial tax, which often can outweigh the savings, or make the joy of free hardware choices a mixed blessing.
I will be in the market before long to replace my 2018 Synology, which is badly aging. But in the end I think a rackmount Synology with supported drives may be the best answer.
Has anyone looked at OWC Jellyfish as an alternative?
I've heard people talk about it, and run into people from their team, but have no first-hand experience. It's been around for quite some time, and it's a post-focused storage solution rather than a generic one, which is a plus.
If you price out a 64TB HDD-based unit with 2x 10G NICs, it is listed at $10,280, which includes install and one year of service.
That is cheaper than a Nexis (Pro is $16K for 40TB), and supposedly quieter as well. But you're stuck with SMB and NFS, and the occasional Apple SMB silliness.
On the other hand, if you get an 8-bay rackmount Synology (RS1221+) plus 8x 8TB Synology-qualified HDDs and a single-port 10GbE card, that's $3,830.
You might be able to build a TrueNAS server for even less.
Or if you went higher and got a 16-bay Synology (RS4021xs+) with 16x 16TB drives, 2x 10GbE, etc., for $14,679. And that is with Synology-approved drives - and 256TB of raw storage.
Seems to me Synology isn't that bad of a choice unless you really need to DIY or scrape the piggy bank.
I'm sure the Jellyfish install ($500) and support contract ($1.8K/yr) are valuable if you truly want to be hands-off, high-touch. But Synology seems to be the middle ground - you get support, you don't have to DIY, but it doesn't break the bank either.
The TrueNAS Mini R is 12-bay, absolutely silent, with built-in dual 10Gb NICs and 32GB RAM. Around $1,850.
For those with a Puget affinity, they now have an offering as well, though I don't know all the details:
That's along the lines of Jellyfish in terms of product.
I'd definitely go TrueNAS if I were looking at a cheaper solution myself. We already have an enterprise solution.
If I were setting up an enterprise solution today, I'd probably go Quobyte on BYO hardware. It has clients for all three major OSes, and distributed file systems bring a whole lot of bonuses. You just need multiple storage servers and good networking to implement it.