We won't be able to do our next project on the new Lucid pricing plan, and I think we've also reached about the limit of what Lucid is useful for, as we're hitting bottlenecks with syncing/pinning when three of us are comping high numbers of different shots and a producer is doing sends/transfers too.
Keen to know what people might jump to next; will keep an eye on this thread.
@Mepstein - are you all sharing one internet connection and caching on multiple machines?
Why not do a single mount of LL on a Linux box and serve that out to all the clients over NFS?
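Roughly like this, as a sketch only (untested; the mount point, subnet, and hostname are made up, and since LucidLink is a FUSE filesystem the export needs an explicit `fsid`):

```shell
# On the Linux box already running the LucidLink client
# (mount point assumed to be /mnt/lucid; adjust subnet to your LAN)
echo '/mnt/lucid 192.168.1.0/24(rw,sync,no_subtree_check,fsid=1)' | sudo tee -a /etc/exports
sudo exportfs -ra                        # re-read the export table
sudo systemctl enable --now nfs-server   # make sure the NFS server is up

# On each artist workstation ("nfs-box" is the hypothetical server hostname):
sudo mkdir -p /mnt/lucid
sudo mount -t nfs nfs-box:/mnt/lucid /mnt/lucid
```

The trade-off is that every client's traffic funnels through the one box's cache and uplink, which is exactly where the corruption/slowness complaints further down this thread come from.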
no, 3 different locations, 3 different internet connections
I have no idea how to do that haha. We are deep into a project so can't change anything now. It's going to be fine, but we'll be doing something different for the next project.
I think the future is going to go in the direction of Peer-to-Peer filesystems like Hyperdrive v10.
It does all the local management and constant synchronisation in a clever way with other peers, and of course, you can use other services as peers as well so it should scale up massively.
Did you ever run this in prod?
I tried, it was utter garbage. I took a dedicated machine with an NVMe RAID and 25GbE… I had so many weird issues:
- it was slow af, talking like gigabit speeds at times
- deleting files made ghost files appear; it seems like the storage and the NFS server serve different file lists

nothing but major issues with this "solution"
3 machines is nothing - we do this with like 10 artists all day long on the cheapest tier (Wasabi). It had been just fine for us until about 2 weeks ago; I feel like Wasabi is nerfing Lucid or something, we've had so many weird issues.
so you run IBM or wasabi? (basic vs advanced filespace)
Only tried it with a couple of machines, and yes, the performance wasn't great, but if internet line speed is the issue it's a workaround.
I didn’t see the issues you mentioned but I didn’t beat the shit out of it either. It was just a testbed.
yea i had it running here for maybe a month until we had a shitton of corruption and people were complaining about slowness and stuff
Hey Finn, we are also on Wasabi, and it definitely feels like everything has gotten slower in the last 2 weeks…
yes, we're all on 1-gig fibre internet, so line speed is not the issue. It seems Lucid deliberately makes itself as dumb/expensive as possible
They offered me an "archive only" AWS tier that was cheaper, however they were adamant it's only for archival.
my best guess is that @allklier is completely right and that too many people are using the $20 Wasabi tier as shared storage, thus generating high amounts of egress, which makes Wasabi sad or something
I think they may have tolerated it as long as LL was sending them plenty of business.
It's noteworthy that the 'incident' occurred in September, yet LL didn't cut off those filespaces until November. You could call that fishing for a plausible explanation.
However, the Wasabi cut-off happened shortly after LL 3.0 was announced, which didn't include Wasabi. So Wasabi probably said: well, if you're booting us, we're booting you. Deal's off, get your data off our servers.
And then LL was too sheepish to say that publicly, made up a story that any half-way informed person can see through, and is sticking to it. This 'emergency' measure has now become permanent, and any questions are being avoided or side-lined. These ain't coming back.
LL has their eyes on big enterprise customers, we’re just a lead weight on their ambitions.
Fair enough. They're running a business, and they have investors. That's business. We helped them with their case study and with debugging the system, and it was a good deal while it lasted. Now we got dumped like a starter spouse, and they're moving on.
But what rubs me the wrong way is not that they're doing it, but how they're doing it. It's very poorly handled, and the communication is misleading and evasive.
Which is just like the big outage. All the news that it was an external actor, a big post-mortem, talking to the FBI. I said back then: if there was never any update after the post-mortem, it was an own goal. Sure enough, there's been no update, and the post-mortem got quietly amended. Only someone with good knowledge of their system and extensive access could have done that. The only question is whether it was intentional or just a script bug.
How about ‘The organization has decided to make a change’ (famous movie quote). And then life goes on. Put it out there, own it in all of capitalism’s glory.
The release of LL3 in its current state has just absolutely red-flagged this company for me.
also well said, it's full-on sketchy what's going on.
Less so the release, but how they did it. This is not a trusted infrastructure partner. Nobody in our business would dream of doing that when it comes to critical infrastructure.
if everyone is going to start self-hosting but has a globally distributed workforce then they’ll need to prepare for the costs of lighting up some dark fiber.
OpenZFS and Ceph can help you with the physical on-prem requirements.
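For the OpenZFS side, a minimal on-prem sketch might look like this (hypothetical only: the pool name, disk devices, and subnet are made up, and a real deployment would want more disks and tuning):

```shell
# Create a mirrored pool from two disks (mirror = survives one disk failure)
sudo zpool create tank mirror /dev/sda /dev/sdb

# Carve out a dataset for project data
sudo zfs create tank/projects

# Share it over NFS to the local subnet via ZFS's built-in sharenfs property
sudo zfs set sharenfs='rw=@192.168.1.0/24' tank/projects
```

That covers a single site; the hard part this thread is really about, syncing between three locations on three internet connections, is what you'd still need replication (e.g. scheduled `zfs send`/`zfs receive`) or Ceph multi-site for.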
there are more expensive options for those with money to burn.
don’t be surprised if amazon tanks all of the “storage providers” in 2025.
in precisely the same way that Amazon created 'Amazon Basics' versions of everything, don't be shocked when they do it to the storage providers, vacuuming up all of that rich, sloshy business data in the process.
I would be really cautious with this project. No blog post in 4 years. Seems very much like a PhD thesis.
Good point, it is a bit odd… but the theory still stands: P2P is the only way forward, I think. Meanwhile, it's all about working on a big server.
that has been the promise for quite some time now.
see IPFS.
Have not seen anything like that work in the real world, seems to be super hard to pull off