LucidLink Rugpull

Peering is not a good storage synchronization option for small companies or even large groups of small companies owned by the same two or three holding companies.

It’s the same reason why the whole distributed ledger nonsense is nonsense.

But it’s the 2020s - there are loads of solutions available - and most likely soon, the basics of Amazon storage infrastructure as a service…

This site always has lots of good news

Including recent changes at Weka

That depends a lot on the optimization points and how cost is allocated, and if you really consider all the cost.

A centralized master (aka cloud storage, aka LucidLink) has advantages in terms of simplicity of deployment: critical, high-performance infrastructure sits in one place where it is highly leveraged, runs at high utilization, and can be spread among many users. Assuming proper deployment and redundancy, uptime can meet high standards. Required infrastructure is procured and managed at industrial scale and volume pricing rather than consumer/small-business rates.

If it weren’t for the cost, I don’t think a lot of people would complain about these solutions in terms of features and usability. One of the key pain points of cloud storage isn’t even the actual storage cost but the egress fees.

I’m actually curious how much of the egress fee covers real infrastructure cost vs. how much is marked up because it’s non-optional in most solutions - and it’s not universal to all cloud apps.

AWS reported an operating margin of 38% in the last quarter. Very different from the razor thin numbers in retail or hospitality. An indication that there’s plenty of milk they get away with.

A P2P model on the surface has a cost advantage over cloud, primarily because most of us don’t use metered connections and are playing the averages when it comes to our actual bandwidth usage.

It works decently with a small number of nodes, and is very good when there are just two. But the more nodes you have, the more data has to be exchanged between them, and a node may be taxed with up to n times the data volume rather than a single transfer to the cloud. Considering that individual nodes often run on less powerful infrastructure, that can create bottlenecks.
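To make that concrete, here’s a back-of-envelope sketch (hypothetical numbers, plain Python) of the upload burden on the node that authored a change, comparing a central master against a naive P2P mesh where the origin pushes to every peer itself. Smarter meshes let peers relay for each other, but some node still pays the fan-out:

```python
# Back-of-envelope only: upload volume the authoring node pays to get
# one change replicated everywhere. Hypothetical numbers, not a benchmark.

def origin_upload_gb(new_data_gb: float, peers: int, topology: str) -> float:
    """Upload volume the authoring node pays for one change."""
    if topology == "central":       # push once to the cloud master
        return new_data_gb
    if topology == "p2p_direct":    # push to every other peer yourself
        return new_data_gb * (peers - 1)
    raise ValueError(topology)

for peers in (2, 5, 10, 20):
    print(peers,
          origin_upload_gb(100, peers, "central"),
          origin_upload_gb(100, peers, "p2p_direct"))
```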

Also, in a P2P environment the nodes have to overlap in uptime to facilitate transfers, whereas a central cloud can function asynchronously. In a large network, individual nodes may have to upscale their specs to meet performance demands, yet that incremental hardware investment has a pretty low load factor and is thus not a good investment.

It’s also harder and more complicated to enforce locking for conflict avoidance in a P2P graph vs. a centralized repository, which is where Resilio and other P2P sync tools fall short.
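A toy illustration of why the central case is simpler - with a single authority, a lock is one atomic check-and-set, while a P2P graph would need every reachable peer to agree or accept split-brain risk. This is just a sketch, not any real product’s locking:

```python
import threading

class CentralLockTable:
    """Toy advisory lock service: trivial when there is exactly one authority."""
    def __init__(self):
        self._locks = {}                 # path -> owner
        self._mutex = threading.Lock()

    def acquire(self, path: str, owner: str) -> bool:
        with self._mutex:                # one atomic check-and-set, done
            if path in self._locks:
                return self._locks[path] == owner
            self._locks[path] = owner
            return True

    def release(self, path: str, owner: str) -> None:
        with self._mutex:
            if self._locks.get(path) == owner:
                del self._locks[path]

# In a P2P graph the same guarantee needs every reachable peer to agree
# (quorum/consensus), or you accept that two nodes may edit the same file
# during a partition - which is exactly what most sync tools punt on.
```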

While P2P node graphs have inherent redundancy, it comes at significant complexity to manage coherence and recovery.

More unique to our use models, with LucidLink each client only maintains a fraction of the data - just the files you cached or pinned. Only the central repository maintains a complete set with the required redundancy. Doing the same is harder in a P2P network. Either nodes maintain the full set, which explodes storage capacity and cost, or each node only maintains its required working set, and then a lot of complexity goes into finding which node has which part of the data - and even worse, making sure at least 2 or 3 nodes maintain a copy of any given file for redundancy. And if one node is low on storage space, it can’t just purge its cache if no other node already has a copy. It quickly gets out of hand. And if not all nodes are up 24/7, a file that exists in the network may not be available when you need it.
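As a rough illustration of the purge problem (purely hypothetical data model - real distributed file systems do far more bookkeeping than this):

```python
# Toy replica bookkeeping for the cache-purge problem described above.
# Every node needs an accurate, current view of who holds what.

MIN_REPLICAS = 2

def can_evict(file_id: str, node: str, replicas: dict[str, set[str]],
              online: set[str]) -> bool:
    """A node may drop its local copy only if enough *other* online nodes
    still hold one."""
    holders = replicas.get(file_id, set())
    others_online = (holders - {node}) & online
    return len(others_online) >= MIN_REPLICAS

replicas = {"shot_010/plate.exr": {"nodeA", "nodeB", "nodeC"}}
online = {"nodeA", "nodeB"}            # nodeC is powered down tonight
print(can_evict("shot_010/plate.exr", "nodeA", replicas, online))  # False
```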

So if you added up all the costs of both options, and if all infrastructure were actually priced at cost (with no beneficial averaging and no milky markups), and if you add the power consumption of all the nodes that remain up beyond just work hours, I’m not sure P2P would come out ahead. Also consider the internal IT deployment and support cost - even if it’s your free weekend time.

Add to that the complexity of deployment, and that there is no service provider who maintains the system or can provide support for users.

So P2P may be attractive for the Finns and Phils of the world, depending on circumstances. But for most everyone else a central cloud tool is the better choice, as evidenced by the rapid adoption of LucidLink and similar tools. Now we just need them to be good partners instead of greedy startups with ambitions.

I think P2P has a deceiving allure because we’re attracted to solutions that keep us in charge of our destiny and allow us to compensate with elbow grease for hard costs in a volatile market. We generally suck at valuing and accounting for our time properly, because we love what we do and have always lived on the leading edge of what can be done.

All these considerations assume that you work with high-value data that needs to be maintained and available at all times, and where the same file may be modified by more than one node - rather than individual nodes just contributing to the pool, with pool entries becoming read-only once they exist. It’s a different story in opportunistic file sharing like the early Napster days, or other resource channels. There you benefit when it works, but don’t fail if data becomes unavailable.

@allklier - eloquent as always.

I agree with much of what you wrote except my choice of opting for P2P.

I don’t.

I’m infrastructure agnostic, and apply best guess practices for incomplete decisions.

How I worship at the pixel altar usually depends on who’s building the church, and how deep their pockets are.

YMMV - Choose your own poison.


Fair enough. Added appropriate caveat. Not meaning to put words in your mouth :)


Just saying, when we send footage to remote artists to work locally at their end, we just send it back and forth via Signiant. The supposed ease of working with something like Lucid Link seems to come with more pain points than just giving a remote artist access to a piece of storage they push and pull from.

We had to use Lucid Link on a recent project and we were underwhelmed. Had quite a few issues with file corruption that we haven’t seen in Aspera or Signiant. Speed wasn’t impressive either. Just don’t see any real efficiency advantage against the good old fashioned push and pull method.

I am talking from a shot-based perspective though, as opposed to a large timeline. However, I’d be more inclined to go down a remote Teradici/NICE DCV/Parsec (if Mac/Windows)/Jump Desktop (Ditto) route.

Thank you for the detailed explanation, super interesting to get these thoughts.

Perhaps I should clarify some of the things that make me see P2P as the next stage in distributed file-sharing.

I am NOT thinking the nodes are the ones in charge of distributing and maintaining the heavy load of keeping the network hydrated, but rather a number of disposable nodes that can expand and shrink at will.

Imagine, for the sake of explanation, that you have a number of AWS nodes that are always on, have all the data available, and are in charge of keeping all nodes in sync. Now imagine the data being synchronised is siloed on a per-shot basis, for example, so the nodes don’t need all the data.
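Something like this, conceptually - a minimal sketch with completely made-up names and structures, just to illustrate the siloing, not any actual product:

```python
# Sketch of the per-shot siloing idea: always-on seed nodes hold everything,
# worker nodes only subscribe to the shots they are assigned.

catalog = {                                   # shot -> files on the seeds
    "sh010": ["sh010/plate.exr", "sh010/comp.nk"],
    "sh020": ["sh020/plate.exr", "sh020/comp.nk"],
}

assignments = {"artist_remote_1": {"sh010"},  # each node's silo
               "artist_remote_2": {"sh020"}}

def sync_plan(node: str) -> list[str]:
    """Files a node should pull from the always-on seeds: only its shots."""
    return [f for shot in assignments.get(node, set()) for f in catalog[shot]]

print(sync_plan("artist_remote_1"))   # only sh010 assets, not the whole project
```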

I have not done a single hour of testing these ideas but common sense tells me that this is the only way of dealing with very large datasets.

Interestingly enough, when rendering CGI we are forced to either send all our textures to each render node or rely on the server (99% of the time this is the case), putting insanely high demands on the network and server. That is the main reason we end up paying insane amounts for infrastructure - you need to buy for the worst-case scenario, and we hit that quickly.

I heard of a company using P2P to distribute the material to each render node, and although I don’t know what the result was, the idea became too interesting for me to ignore. So I do wonder what the next turn in this distributed approach will be.
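The bandwidth math that makes it tempting, with made-up numbers and ignoring chunk scheduling and protocol overhead:

```python
# Rough server-egress comparison for shipping a texture set to a render farm.
# Hypothetical sizes only.

def server_egress_gb(texture_set_gb: float, render_nodes: int, p2p: bool) -> float:
    if p2p:
        return texture_set_gb                  # seed ~one copy, peers re-share
    return texture_set_gb * render_nodes       # every node pulls from the server

for nodes in (10, 50, 200):
    print(nodes,
          server_egress_gb(250, nodes, p2p=False),
          server_egress_gb(250, nodes, p2p=True))
```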

Out of interest, when people are talking P2P, are you talking about using a P2P network and distributed file system across multiple locations, or within the one location?

I know of places using distributed file systems over multiple locations where the infrastructure is substantial - there are dark fibre links between locations - and it works a treat, but at a cost. A global namespace makes a whole lot of things way easier. The thing is, though, the organisations using this are either finance, universities or government, who have the money to do it properly; I don’t know of any post production facilities using a setup like this. I love the idea of Quobyte (awesome), Ceph, MooseFS, etc., and it wouldn’t be too crazy for a larger post production house to use a distributed file system with P2P networking over speedy connections and have it work like a charm.

However, if you cheap out on it, surely it would be more of a pain than a help, never able to live up to expectations. The idea of it is frigging awesome. I had asked the question of whether you could theoretically link up a whole bunch of workstations with internal NVMe and run a distributed file system across all of them over a high-speed network. Seems like the overheads on each system wouldn’t be worth it, but I’d love to see someone try.
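Rough numbers for why the overheads worry me (made-up figures, and real distributed FS costs - metadata, rebalancing, CPU and network stolen from the workstation - come on top):

```python
# Back-of-envelope for pooling the workstations' internal NVMe.
workstations = 10
nvme_tb_each = 4
replication = 3          # copies of each block, for redundancy

raw_tb = workstations * nvme_tb_each
usable_tb = raw_tb / replication
print(raw_tb, round(usable_tb, 1))   # 40 TB raw -> ~13.3 TB usable before overheads
```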

Exactly. At the top of this decision tree is the first step:

Eyeballs to the data vs. data to the eyeballs.

Generally speaking, eyeballs to the data has fewer issues and more solutions to pick from that provide reasonable performance - at least in edit and VFX. It creates more complications in audio work.

With these solutions you avoid large transfers and all the data can live in traditional setups that integrate with existing workflows for consistency, redundancy, etc.

It works really well in steady setups. In cases where you rapidly scale up/down with freelance talent, it creates more friction and requires more capex for the systems in the central location - unless you go the cloud route, which is the most flexible but also very pricey.

Yes, what I described in the last post was a model where you replace Lucid Link with a P2P solution. In the case of CG renders and similar, you take a read-only file, send it out to the nodes, and get a new read-only file back that you add to the repository. That is a lot simpler architecturally. Same with shot-based work: you send the artist the setup in a read-only package and get back a render that is read-only, to be added to the main project.

For cases like that, Resilio and other P2P solutions are great. Even if the setup gets changed with a client revision, the revision travels in the same direction as its ancestor and can simply overwrite, which those P2P solutions are well capable of.
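A toy version of the rule that makes this easy - purely illustrative, not how Resilio actually works: each path has a single producing side, so "newer from the producing side wins" is safe and no locking is needed.

```python
# Hypothetical one-directional flow rules for a shot-based exchange.
FLOW = {"packages/": "studio -> artist",      # setups go out read-only
        "renders/":  "artist -> studio"}      # renders come back read-only

def accept(path: str, sent_by: str) -> bool:
    """Accept an incoming file only if it travels in its declared direction."""
    for prefix, direction in FLOW.items():
        if path.startswith(prefix):
            producer = direction.split(" -> ")[0]
            return sent_by == producer
    return False

print(accept("packages/sh010_setup.zip", "studio"))   # True: overwrite locally
print(accept("packages/sh010_setup.zip", "artist"))   # False: wrong direction
```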

Which gets back to how I started my last post - it all depends on the use case and what you’re trying to optimize for. There are use cases where P2P is an easy win. There are cases where remote access is an easy win. And then there are complicated setups where you really need a Lucid Link kind of setup.

I’ve also heard stories of Lucid Link being less than satisfactory. As a company they don’t win me over in how they run things. But that’s probably only a small piece. As a technology it can totally deliver a solution. The question in the situations where it didn’t work so well is twofold. First, while Lucid Link is a good solution, it’s not sliced bread - so were the expectations appropriate, or did those folks expect plug-and-play miracles? Second, LucidLink is a complex technology. For it to be optimal, you need to understand cache sizes, local hardware considerations, network throughput, cloud regions, cloud storage classes, etc. A well-tuned LucidLink setup with proper IT support can presumably run with the same results most of us get from Frame.io: 99% uptime with acceptable/non-offensive performance.

But LucidLink also appeals to tech-challenged folks with its relatively easy deployment. And it will certainly work - just don’t expect miracles.

If you need LucidLink-style data transfers but just need it to work, solutions may exist like Hammerspace, who presented at the NY FUG a year ago. I don’t know all the details, but those are the folks that take cloud storage beyond LucidLink plug-and-play to truly make multi-region shared storage hum. At a steep price, though, I imagine.

For our business, we’re using remote access. All the data stays here on high-performing hardware, and we use Parsec for remote access for everything but audio. Audio work is always local. Works well for us, but there are only two of us.

Anecdotally - on a job I just wrapped yesterday we were relying on a combination of VPN and P2P sync (Resilio) to a central storage server at the client’s office. It was not quite shot-based, and there were some shared Nuke setups, but with few artists, coordinating who had the baton was feasible.

In the beginning the P2P sync kept up, but once we neared the deadline and needed rapid revisions to address notes, the sync kept slowing us down, so we had to circumvent it. And then towards the end a software update on the central server broke everything and we had to go back to G-Drive, which also worked but created a lot of manual steps and coordination - not least because I refuse to use drive sync with Google or Dropbox, and rely on either the web interface or a tool like GoodSync to copy files to/from Google Drive. We made it, but it was a pain point. Lucid Link would have worked way better, but the client’s IT department is refusing to install it.

(ps: these were actually two jobs with the same client, I just combined the description for simplicity).

Suite Studio appears to be running on Rocky Linux 8.7. Local site-level cache of 80TB on local NAS for 10 users is a nice bonus.

The External Transfer Upload/Download is also a nice feature.

I also like the list-based Path and % Cached menus in the control panel.

The Suite Connect ingress/egress app needs some love, like throttling and reordering uploads. Might need to keep using MASV for that.

We need globally defined mountpoints - have you used Suite in production? We couldn’t yet due to that missing feature.

I will say it’s much faster than Lucid.

Yeah, I thought I needed that too. But, I think I can make it work without.

Mac auto mounts to /Volumes/Suite

Linux mounts to /media/Suite, so sudo ln -s /media/Suite /Volumes/Suite on the Linux boxes, switch the Logik Projekt symlink from Lucid to Suite, and Bob’s your uncle.

Windows? Fuck Windows. :)


It’s more that I am transitioning from my Lucid mount, which I previously transitioned from my local NAS, so it’s all been the same since we started the company, making restores of projects a breeze. But yeah, at this point I am heavily leaning towards Suite. We haven’t had the time to really rock its boat and give the studio cache a shakedown, but dude, if you tell me this all works fine in prod, I am legit game.

Well, I’ll be the guinea pig here - we are doing Suite Studio now. I’ll be moving my whole infra over and see what happens. Can’t be worse than what we get with Lucid right now.

The site cache is the best thing since sliced bread, not gonna lie.


$75/TB/month?? Isn’t that worse than Lucid?

Lucid is more like $80 a month, and they will increase this further… next year.

I still don’t understand the advantage-vs-expense equation in solutions like LucidLink or Suite Studio. I get that it arguably makes things easier, but the cost associated?! They have a calculator on the Suite Studio site, and the maths they came up with to tell you how much you would save was humorous.

A properly implemented remote artist workflow still seems a lot more cost effective and more secure in my mind.


Not anymore. Lucid Link 3.0 rolls out a new pricing structure: $27 per user per month. Each user adds 400GB to a shared storage pool. Overages are billed per GB.

The $20/TB/mo Wasabi tier is done and is only available on Lucid 2.x workspaces, for which there is no migration plan.

It’s all kinda shizzy and entirely suboptimal.
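For comparison, the rough math on those numbers (plan details and overages will obviously change this):

```python
# Quick arithmetic on the pricing quoted above; actual plans may vary.
price_per_user = 27.0        # USD / user / month (Lucid Link 3.0, as quoted)
included_tb_per_user = 0.4   # 400 GB added to the shared pool per user

effective_per_tb = price_per_user / included_tb_per_user
print(f"${effective_per_tb:.2f}/TB/month if you use exactly the included pool")
# ~$67.50/TB/month before overages, vs the old $20/TB Wasabi tier
# and the $75/TB figure mentioned for Suite.
```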


I find it very difficult to have people match my pathing manually on whatever setup, then drop stuff there and pull stuff from X to Y - it gets so messy. In commercials it’s like, “hey, can you just do cleanup on these 6 random shots” - I don’t have time to collect media, send it over, then receive it, put it back - oh, wrong paths, so I can’t open their scripts/batches, whatever. Ugh. No, I don’t want to go back to that.

We are actually hybrid, so most users are on machines local to our network, but some are either too far away or we just can’t scale up as quickly as we would like (as in, buy machines).

Also, producers LOVE it, because now they can actually run around, pin the dailies folder and just watch stuff - fast, wherever they are, on any project.

Matchmove and roto just drop stuff onto our Lucid; the workflow itself is insanely liberating.

And I can just drop stuff in there and have a remote VPS crunch out dailies or do whatever, anywhere around the globe. It’s honestly great as a whole concept, at least for us.

Implementing file replication and push/pulling of stuff has been very… meh, and very slow - not in transfer speeds, but you have to actively do it, which sucks.


They still do per-TB pricing for enterprise. I had a call with them; we are talking close to $100 per TB on AWS.

We locked in $80 per TB with no ingress or egress fees, but the cutoff was Wednesday for that deal…