Filsync macOS

Ah, right. I think this is a bit harder to architect. If you want to attach DAS, then your system will want to run the filesystem and the external drive is really just a block device. Kind of like the SSD cache in a NAS, except the persistent storage lives in S3 rather than some spinning drive.

Makes cache management a bit harder, since the device and back end don’t know as much about the filesystem (they could read it but not manage it, and would have to rely on block-traffic analysis). And the front end wouldn’t know it’s in the cloud, with all that entails, or how to optimize for it. It’s just not an optimal interface.

I’m sure it could be built, but I’m not sure it’s ideal. Then I’d rather have a NAS. More versatile. And 10GbE NICs aren’t hard to come by. I have one as a TB-3 dongle you can plug into any MBP.

Seems so. Looking at the documentation, there are two USB-C ports that can be used for archive/ingest, but they also run Ethernet over USB-C. So you can skip the NIC and get NAS functionality over USB-C, I guess? Not a bad option tbh.


USB-C Ethernet is 2.5 Gbit max, nah i am looking at something a bit faster. Sort of a better option than running some kind of filesync on the machine, just to make things more simple for the enduser…

If they know how to deal with a NAS and selective sync etc., they probably already have one :joy:

You’re particular…. :joy:

I see two paths:

  • hire good freelancers who have the right infrastructure and know how to use it.
  • build a custom NAS box that you can load with your own sync scripts. All the freelancer needs to do is plug in and mount.

Downside: you have to get the hardware back afterwards.


Yea, I’d rather go for #1, I just like to make it as convenient as possible for them :joy::joy:




If you have the time and the knowledge then you could probably build something using Ceph.

When I was researching storage I was blown away by what Quobyte could do. I don’t think it would work in your scenario, as it’s more about multi-site syncing of storage clusters, but man it was cool.

We ended up with PixStor because it better fit our use case and they would guarantee performance, but it was a close call, with Quobyte a very close second. I think it’s much larger in scope than what you are looking for, but worth a look. It’s also a complete storage solution rather than what I think you’re trying to do. So frigging cool though, so I think the tech nerd in you would be interested.


Remember the old project management paradigm:

You can only optimize for one, get a second one, and lose on the last one.

So if you want good & easy, it’s going to cost;
… if you want easy and cheap, it’s not going to be good;
… if you want good and cheap, you’ll have to wait another decade.

In my mind, if you want an easy setup and a quality solution, you’ll have to go with one of the 3rd-party solutions. Maybe Lucid with some other layers by a system integrator, or something more fancy.

If you want fast and cheap, you need to find freelancers who don’t mind a clunky home-made patchwork and make it work. Problem is, those who fit that bill likely have their own ideas on how to do that, so you’ll become a circus master.

Tinkering is nice to a degree, but at some point in time stuff just needs to get done, and so you have to just pay for it and then send someone else the bill. Flexibility in staffing levels comes at a price.

Otherwise you end up here:


They tried to make a word play on this paradigm. Except, when you read it carefully, it means they optimize for fast and cheap, which would mean it’s a van full of crap (from China). Which is more or less accurate these days with Amazon.

Be careful who you hire for your marketing…

PS: In fairness to China, in the early days when they were just cheap labor and didn’t know the products, it was all cheap crap. But now that China has a middle class, the products they design and use themselves are quite good, sometimes better than ours. Of course they still produce cheap crap for export that they don’t care about, and laugh their way to the bank. We have family there now, so we get on-the-ground reports.

Whilst I mostly agree with this, there are definitely exceptions to the rule.

Blender is an excellent example.
Linux is another.
The ML timewarp from @talosh is definitely another excellent example directly related to Flame.

They are few and far between though, and you certainly need to be informed when going down an open source software path. On a lot of projects, though, you’d be looking at a similar level of support to what some of the commercial offerings have.


Yes, there are always positive exceptions. But the theory of the triangle still holds, with slightly different measurements.

With open source projects…

You get low cost / cheap / free.

The quality of open source projects can vary; in your examples the quality is excellent, but that’s not a given in general terms.

The problem is time and support. The next release of an open source project will come when it comes. If the core developers get busy with a job that pays the rent, things might be on hold for a while. And while some developers are better than others at supporting their products, you cannot count on it.

If you use open source tools in mission critical / time constrained projects you’re rolling the dice.

Of course we’ve seen some open source developers be more responsive than the paid packages from big vendors. But with a vendor, at least in theory, you can file a ticket and get a response/help in some determined fashion.

Companies go bankrupt and leave you hanging. And open source developers get bored and move on or fail the bus test.

There has been good writing on open source, though, showing that the reality looks quite different than one might think. Most open source contributors these days are not idealists but corporate employees who serve some greater agenda.

Linux these days is a hybrid to address these issues, and we saw the fight between RedHat and Rocky.

This is a good read on the topic (may be behind Medium’s paywall, apologies if it doesn’t work): Open Source Is Struggling And It’s Not Big Tech That Is To Blame | by Jan Kammerath | Medium

Key quote, from a presentation by some big EU committee:

I asked her what funding was associated and whether there are any bounties for implementing any of their concepts. She looked at me confused and responded; “No, the Open Source community should implement it now”. I asked her whether she knew how Open Source actually works, if she had ever met any Open Source project teams, had ever written any software herself. You can guess the answer: it’s No.

Don’t get me wrong, I’m not saying all open source projects are great, but there certainly are a hell of a lot that are well thought out and implemented, with excellent community support. FreeNAS/TrueNAS, OpenZFS, Ceph, and GFS are all good examples in a storage & filesystem scenario. There is generally a corporation sponsoring the project in the hope that someone will be prepared to pay for support or use the paid version in a commercial situation. Open source often acts as a beta-testing environment.

So yes, open source software definitely needs a bit of research before deployment, but often it’s awesome: someone trying to share and improve something they’ve come up with, backed by community support. Look at what @ALan has done open-sourcing Party Time, for instance.

I’ve deployed several open source projects in facilities and have yet to get burnt by any.

In reference to Adam’s post.

Oh, most definitely. We wouldn’t be anywhere near where we are technology-wise if it weren’t for open source. If we had to pay OS licenses for all those data center servers that run in the cloud, most of the tools we use would be unaffordable. And many other things wouldn’t exist, as they would only live in research environments. We would still be using Internet Explorer and Safari, and the list goes on.

Open source is very powerful in the hands of tech-savvy users who can largely support themselves. And it works well in hybrid situations where less tech-savvy users can benefit from it but rely on technical support services that fill the gap (Red Hat as an example for OS users).

But when I brought the triangle up earlier today, I was thinking more of Finn’s quest to find a storage solution, and possibly being a bit optimistic about conquering the triangle. When you do some of these things for yourself, time is typically pretty flexible. If it’s a business, though, the variables change a bit. Time becomes much more valuable, and there are more ways to allocate budget.

PS: When I first started working on OS code, Linux didn’t exist yet. It was either some derivative of AT&T System V, or FreeBSD if you got to play with source code. And we had all kinds of old implementations of VAX and Nixdorf, but those were closed runtimes.

When Covid hit I implemented Resilio Sync.
Basically put an SSD in every local workstation (15-ish) and named it the same as the share from the QNAP storage the VFX dept uses. Worked for Win and Mac.

Did a local LAN sync (selective per artist) at about 600 MB/s (10GbE).
They took the workstations home and carried on.
Only issue was opening ports at some people’s homes.

Still use it now on some jobs to my Mac at home. The office is on a 1GbE WAN, and at home I get EXRs at decent speed.

If we get a freelancer on Mac, they just create a new virtual disk on the internal drive, or we send them an SSD to use with the same path as the on-prem QNAP, set up Resilio, and away they go.

You can customise Resilio using the advanced settings to check for files more often, or to ignore certain files per client.
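For the per-client ignore rules, Resilio Sync reads a gitignore-style IgnoreList file inside each synced folder’s hidden .sync directory. The patterns below are just an illustrative sketch (the cache path is a made-up example, not from the setup described above):

```
# .sync/IgnoreList — one pattern per line, relative to the sync folder root
.DS_Store
Thumbs.db
*.tmp
# hypothetical: skip heavy intermediate caches a remote freelancer doesn't need
renders/_cache/*
```

Note the IgnoreList is per-peer, so each client can exclude different paths without affecting what the others sync.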

Then we just pushed a nightly backup of the QNAP to B2 just in case :slight_smile:
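A nightly QNAP-to-B2 push can be done several ways; here is a minimal hedged sketch using rclone from cron. The share path, remote name, and bucket are placeholder assumptions, not the actual setup described above, and the script only prints the command so it can be reviewed before being armed:

```shell
#!/bin/sh
# Hypothetical nightly QNAP -> Backblaze B2 push via rclone (cron-driven).
# SRC and the 'b2:' remote/bucket are placeholder assumptions.
SRC="/share/VFX"              # QNAP share to back up
DEST="b2:vfx-backup/qnap"     # remote previously created with 'rclone config'
CMD="rclone sync $SRC $DEST --fast-list --transfers 8"
echo "$CMD"   # print for review; swap 'echo' for running it once verified
# example crontab entry (also an assumption):
#   0 2 * * * /usr/local/bin/qnap-b2-backup.sh
```

Using `rclone sync` mirrors deletions too; `rclone copy` would be the safer choice if you want the bucket to keep everything ever pushed.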

Now that the office is hybrid, we’re all on Teradici.

I forgot to add a link to Iconik. Not sure if it would suit your use case or not @finnjaeger
iconik | Cloud Media Management and Collaboration
