LucidLink Rugpull

Word. It works surprisingly damn well.

Tailscale with Screen Sharing?


Got the NVMe NAS, 25Gbit, all is well and happy and fast:

Way more responsive than Lucid even when pinned. Idk what macFUSE (or FUSE in general) is doing with all that virtual filesystem crap, but it's not "responsive"; this new NAS feels faster.

All is well. We are now 2-way syncing each project with GoodSync to Lucid for externals, but again FUSE is not great for this, as it doesn't seem to tell the OS when a file has changed, so the automatic sync-on-change does not work reliably. The response I got from GoodSync support was:

" yes, this file system is broken, do not use it."

That was in regard to macFUSE.

I will try this again next week, running GoodSync on Linux with the more "native" FUSE.
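For anyone curious why sync-on-change breaks here: watchers rely on the filesystem emitting change notifications, which macFUSE apparently doesn't do reliably, so the fallback is periodically scanning mtimes and sizes. A minimal sketch of that polling approach in Python (an illustration of the idea only, not how GoodSync actually implements it):

```python
import os

def snapshot(root):
    """Record (size, mtime) for every file under root."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            state[path] = (st.st_size, st.st_mtime)
    return state

def diff(old, new):
    """Files added, removed, or changed between two snapshots."""
    added = [p for p in new if p not in old]
    removed = [p for p in old if p not in new]
    changed = [p for p in new if p in old and new[p] != old[p]]
    return added, removed, changed
```

Comparing size as well as mtime matters, since mtime granularity can be coarse on some filesystems. This is exactly the "periodic scan" mode most sync tools fall back to when change events are unavailable.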

If anyone has been successfully and happily syncing large datasets (~10TB) between on-prem and Lucid, please let me know how :slight_smile:

LucidLink does a lot of processing under the hood: it converts everything into its own blob format and manages that, and it encrypts everything along the way. Conceptually a good thing, since it enables Lucid's selling points, but then you need top-notch code to keep all of it efficient. Their track record seems mixed on that front.
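Lucid's on-disk blob format isn't public, so purely as a conceptual illustration of the idea (the 4 MiB block size and naming are made up, and the per-block encryption step is omitted since the stdlib has no cipher):

```python
import hashlib

CHUNK = 4 * 1024 * 1024  # hypothetical 4 MiB block size

def chunk_file(data: bytes):
    """Split data into fixed-size blocks keyed by content hash.

    A real system would also encrypt each block before upload;
    that step is deliberately left out of this sketch.
    """
    store = {}     # hash -> block bytes (the "blobs")
    manifest = []  # ordered list of hashes needed to rebuild the file
    for off in range(0, len(data), CHUNK):
        block = data[off:off + CHUNK]
        key = hashlib.sha256(block).hexdigest()
        store[key] = block
        manifest.append(key)
    return manifest, store

def reassemble(manifest, store):
    """Rebuild the original file from its manifest and block store."""
    return b"".join(store[k] for k in manifest)
```

The upside of this design is that the object store only ever sees opaque, fixed-size blobs; the downside is that every read and write goes through this translation layer, which is where the efficiency of the client code really matters.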

Regarding the sync: how is your cloud storage used? Are there ever two people who could be modifying the same file, so you need a mediator for that? Or is it more like a plates folder you publish, remote freelancers publish renders back in return, and each file is only ever changed by one person?

If the latter, any cloud storage would do. No need for Lucid in the middle. Just have GoodSync sync to an S3 bucket and pick the appropriate storage class that meets your needs/budget. That works fine for me.

I just found the second parameter in the S3 module of GoodSync where you can change the number of parallel threads for large files, so you can get past the latency-induced transfer limits. My defaults are now 10 parallel files and 10 threads per file. With that I can reach up to 80MB/s on a 1Gbps fibre link, or get to ~75% saturation of the link with a local region on Wasabi. AWS will be a bit faster than Wasabi, too.

The parallel-files setting is in the job options; the thread count is in an odd place: when you configure the S3 destination you need to click Properties, then Advanced, and it's buried there.
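The reason thread count matters: a single TCP stream is roughly capped at window size / round-trip time, so on a high-latency link you need several streams to fill the pipe. A back-of-envelope sketch in Python (the 256 KiB window and 30 ms RTT are assumed example numbers, not measurements):

```python
def stream_throughput_mbps(window_kib: float, rtt_ms: float) -> float:
    """Rough upper bound for one TCP stream: window / round-trip time."""
    window_bits = window_kib * 1024 * 8
    return window_bits / (rtt_ms / 1000.0) / 1e6

# Example: a 256 KiB window at 30 ms RTT caps one stream near 70 Mbps,
# so a 1 Gbps link needs on the order of 15 parallel streams to saturate.
single = stream_throughput_mbps(256, 30)
streams_needed = 1000 / single
```

This is why "10 parallel files x 10 threads per file" gets you so much closer to line rate than one big sequential upload ever could.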

Also, GoodSync lets you configure some barebones versioning. If enabled, instead of overwriting a file it moves the old copy into a hidden subfolder, up to n versions deep. So if two people change the same file by accident, you don't necessarily lose the original.
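The move-aside idea is simple enough to sketch. This is only an illustration of the concept, not GoodSync's actual layout (the `_versions` folder name and the timestamp scheme are made up):

```python
import os
import shutil
import time

def safe_write(path: str, data: bytes, keep: int = 3):
    """Move any existing file into a hidden versions folder before
    overwriting, keeping at most `keep` old copies (oldest pruned)."""
    if os.path.exists(path):
        vdir = os.path.join(os.path.dirname(path) or ".", "_versions")
        os.makedirs(vdir, exist_ok=True)
        # nanosecond timestamp sorts lexicographically in filename order
        stamp = str(time.time_ns())
        name = os.path.basename(path)
        shutil.move(path, os.path.join(vdir, f"{name}.{stamp}"))
        versions = sorted(
            f for f in os.listdir(vdir) if f.startswith(name + ".")
        )
        for old in versions[:-keep]:
            os.remove(os.path.join(vdir, old))
    with open(path, "wb") as fh:
        fh.write(data)
```

The point is just that an accidental overwrite becomes a rename, so recovering the previous version is a copy back out of the hidden folder rather than a restore from backup.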


Yea, so my reason for syncing with Lucid is that it leaves just one point of 2-way sync in the chain, instead of having clients 2-way syncing stuff as well; that can lead to more issues than having a single I/O point.

It also makes onboarding so fricking easy that we will probably keep using it for external freelancers and producers. They all love the simplicity, for as long as they let us stay on the Wasabi tier…

Producers running around on WiFi can just cache the dailies/output folder and play back and QC large ProRes files with ease. For these things it's very, very great.

If 2 people change a file at the same time, we have a scheduling issue. As you said, it's mostly published shots that are done by remote artists.

I set up all the conflict resolution, and it seems to be doing what it should so far.

So yes, I could 2-way sync everything to S3/Cloudflare, but then every client would need to:

  • install and license some tool to sync S3 to their workstation
  • symlink paths around
  • know which files they need to sync
  • make sure they don't accidentally load stuff from the original un-symlinked paths
  • once done with a project, manually disconnect the 2-way sync, then delete local files

While IT-knowledgeable users like you and me find this super duper extremely easy, I know that many struggle with the pure concept of how files move around, absolute paths, and 2-way syncs, especially in the Flame world, where many are used to working with managed media and the like.

Lucid cuts down onboarding and complexity by a huge margin; people can just install it and work. That makes it worth the $20 a TB, but not $96 a TB imho. It also makes permission management extremely easy: producers can just give people permission to whatever they need. Dealing with that on a Wasabi bucket is another nightmare.

So yes, you can make it work, but so far, for my use case, the overhead of dealing with all this file syncing is not warranted while we still have the cheap Wasabi tier.


Oh, and I have to look for that thread count; it might explain why I only got 2 threads to Dropbox when testing, even though I was bumping up the parallel transfers.

What did you end up going with, chassis-wise?


Funny. That was exactly the one I was eyeballing.

Confirmed.


I think this is what @mybikeislost is rocking…



This morning: GoodSync straight to Wasabi on a 1Gbps fibre link. Better than what Frame.io gives me most days.


Dropbox is rate-limiting API calls, so it doesn't let me push data up anywhere near that fast, and it still only uses 2 threads.

Yea, pushing to Cloudflare R2 is also fast with GS.

That said, 2-way sync is a problem in general, with growing files, apps writing temp files, and whatnot, but GS seems to have a decent grip on things.

Maybe I should try the P2P block-based transfer they have. It reminds me of Resilio, but maybe it's faster…

Close! I'm rocking the QNAP TS-h1290FX, which is a desktop-friendly take with 12 bays of U.2 NVMe. Base version, but with 256GB of 3rd-party RAM. Super quiet, so much fast :upside_down_face:


I went for the 24-bay for future upgrades; however, it's SUPER loud, easily the loudest appliance in my rack by a long shot…

Even louder than my big Proxmox beast, which is pretty crazy if you ask me.

We are currently testing some LucidLink options for our Flames and Resolves.

Issues we've come across:

  • Pinning files (caching) is too slow compared to just pushing files across via regular FTP. Even FileZilla is faster, and MediaShuttle, MASV, and WeTransfer all currently beat Lucid's speed.
  • Resolve loses its thumbnails, so colourists are effectively working blind (not ideal).
  • It works okay on video files (MOVs, MP4s), but we're losing random frames in file sequences, which then breaks the whole shot.
  • Conforms are incredibly slow; no realtime playback at all.

I currently have no solutions for these problems, other than the suggestion to work on proxies, which makes me want to bang my head against the wall. I'm looking to you wise people: what experiences have you had with Lucid, good and bad? Hit me.


Thanks! I’m kind of reading my way through these. If anyone wants to add anything more, this is an opportunity to rant!

Has anyone used Hammerspace?

They demoed Flame integration at a couple of Flame user meetings.


From what I could tell, this is more geared towards people running multiple cloud-based setups across multiple locations. It's super cool.

But I think it's $$$$$$$$ enterprise stuff, isn't it?

  • We had to pin the whole project we worked on; otherwise performance was utter garbage. We had to add 8TB cache drives to every machine.

  • Upload/download speeds were fine until the rugpull, with Wasabi Amsterdam at least. Are you on the Wasabi tier or on IBM/AWS? Are you on classic or Lucid 3.0?

  • We haven't seen any thumbnail loss; aren't thumbnails usually saved locally?

  • We haven't seen files going missing from a sequence, but we have seen random local cache corruptions, which has not been much fun.

In general it had been OK until it definitely took a big dive in performance and reliability with the introduction of the unfinished LucidLink 3.0.