LucidLink Rugpull

remind me 1 year :smile:

I get your point tho, of course there is nothing right now forcing me to act, but I'd rather think about an exit plan now than later: get my ducks in a row, talk to sales people, make deals and such …

a switch, even at our scale, is realistically a transition period of multiple months at this point.


totally. I'm just trying to keep myself chill. :slight_smile:
Suite.studio…how does the local site cache work? I swear I can’t find anything…or is my Google Fu fecked?

nah, you aren't blind; their documentation is lacking. Lucid has very good docs by comparison.
here is a PDF I got from them :stuck_out_tongue:

Suite - Onsite Caching Advanced Configuration.pdf (54.8 KB)


I've gotten a quote and we would be looking at something like a 100x price increase compared to LucidLink, without any of the nice things that Lucid has.

it's like… a "better" Dropbox, but instead of providing actual cloud storage you have to bring your own storage.

I’ve used Resilio in a P2P config, not with their cloud storage.

It works, but has drawbacks. But if P2P works for you, it’s a lot more affordable.

At least one side needs to port forward, and it struggles with EXR sequences in terms of performance. Any tool that doesn’t locally aggregate files into blocks for transfer will be at a disadvantage with image sequences. There are few that do, LL being one.

torrent is block-based, so it should be fine in theory

Actually, try Lucid with EXR files, it's pretty slow: I get maybe 100 Mbit/s via a 1 Gbit line when not pinned, so it's not that great either.

the expensive tier of Resilio doesn't have cloud storage either; it's like what you get in the free tier but with a dashboard, WAN optimization, and different delivery scenarios. The price, however, is absurd…


We looked at Resilio, got a quote, then looked elsewhere.


same


you know what's great

when a file shows as corrupted on multiple clients, you run an md5 checksum and get 2 different checksums, then you restart the machine and get yet another checksum …

idk what's going on with Lucid but we are not having that much fun with it atm
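For anyone who wants to reproduce that check, here is a minimal sketch that hashes the same file twice and flags a mismatch; it demos against a throwaway temp file, so point it at a frame on the mount instead for a real test:

```python
import hashlib
import tempfile

def md5sum(path, chunk_size=1 << 20):
    """Stream the file in chunks so large EXRs don't load into RAM at once."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo with a throwaway file; replace `path` with a frame on the mount.
with tempfile.NamedTemporaryFile(delete=False, suffix=".exr") as f:
    f.write(b"fake frame data")
    path = f.name

# Hash the same file twice: on healthy storage both reads must match.
first, second = md5sum(path), md5sum(path)
print("OK" if first == second else f"MISMATCH: {first} != {second}")
```

On local disk the two digests will always agree; if they differ between reads or across a reboot, the cache layer is handing back inconsistent data.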


When a good idea gets corrupted by greed and incompetence.


I must confess that although Dropbox is not very fast compared with other options, their pricing structure and the insane solidity of their platform are incredible.

I have had so many troubles with GDrive that I am thinking of phasing it out altogether and relying on Dropbox instead… at least it does not end up corrupting frames, failing to sync (Google's recommendation: just reinstall GDrive), or lacking controls…

Anyway, there is clearly a market, but there does not seem to be a solid solution that is fairly priced, super-fast for large files (and lots of them), and truly reliable.

If anyone finds one, please let me know


I had the same thoughts today after another very frustrating call with Lucid.

Lucid just changed their LucidLink 3 pricing yet again; now it's 400 instead of 500 GB per user on their "business plan".

They are completely pricing themselves out of this market.

I do have one big gripe with Dropbox however: we have lots of Mac freelancers, and macOS changed it so that you can't use external media as your "sync directory", so you are stuck with the internal Mac storage, which is extremely inflexible.

And you will always run into a pathing issue that you have to manually correct with some workaround; it's just not very suited to how Flame works, for example.

but yes, the pricing, track record, and extra features you get with Dropbox are pretty sweet ngl

I wish all apps in the industry would just stop relying on absolute file paths… how hard can it be to do a local "project path" variable and call it a day…

None of our tools are truly "hybrid work" ready in that regard; Lucid has sort of the best workaround.
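As a sketch of that "project path variable" idea: assuming a per-machine PROJECT_ROOT environment variable and a couple of known facility roots (all the names here are hypothetical), remapping stored absolute paths could look like this:

```python
import os
from pathlib import PurePosixPath

# Hypothetical per-machine root; each artist sets this once.
PROJECT_ROOT = os.environ.get("PROJECT_ROOT", "/Volumes/projects")

def localize(stored_path, known_roots=("/mnt/projects", "/Volumes/projects")):
    """Rewrite an absolute path saved on another machine to this machine's root."""
    p = PurePosixPath(stored_path)
    for root in known_roots:
        try:
            rel = p.relative_to(root)
        except ValueError:
            continue  # not under this root, try the next one
        return str(PurePosixPath(PROJECT_ROOT) / rel)
    return stored_path  # not under any known root; leave untouched

print(localize("/mnt/projects/job_001/plates/shot.exr"))
```

Every app storing paths relative to one such variable would make the Linux/macOS/freelancer-laptop shuffle a non-issue; instead each tool hardcodes its own absolute paths.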

I am considering literally just buying like 10 Mac mini M4 Pros and an NVMe NAS that syncs with Dropbox for producers and calling it a day.

I feel like we’re the outliers doing everything on Dropbox. It isn’t the best. It’s a real pain if an artist is using their own flame and has a personal dropbox.

But it’s cost effective. Has almost never been down. Undelete has saved us from several mishaps. Once you’re synced to a job folder, and you have decent internet, it’s really pretty quick to get stuff up and down and synced to everyone else.

If something goes wacky (someone renames a core folder) you have to sit around for possibly many hours for it to re-index.


During the initial setup of Dropbox you can force an external drive, but that's your only chance to do it; otherwise it makes itself local.


keep it simple

define a mount point

make it the same for linux and macOS

repeat with every participant workstation

link your internet storage to the mount point

make a project

clone the project to your local shared storage (nas/san)

backup the local shared storage to usb disks / tape / dvd
(to sell back to clients or keep on shelf)

backup the local shared storage to S3 deep archive
(charge your clients for 2 years or more)

employ an openclip workflow

cache all openclip versions to your local media cache / framestore

wait for an internet outage

switch to your nearline storage
(your flame cache will not notice)

carry on

wait for internet activation

push changes from local shared storage

switch to your internet storage

move on
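The first two steps above (define a mount point, make it the same for Linux and macOS) can be sketched like this; the /Volumes/cloud name is an assumption, not a recommendation:

```python
import tempfile
from pathlib import Path

# One canonical mount point, identical on every participant workstation.
# /Volumes exists by default on macOS, and on Linux you can simply create
# it once, so saved absolute paths resolve identically on both platforms.
MOUNT_POINT = Path("/Volumes/cloud")

def ensure_mount_point(point=MOUNT_POINT):
    """Create the canonical mount point if this workstation lacks it."""
    point.mkdir(parents=True, exist_ok=True)
    return point

# Demo against a throwaway root so it runs anywhere without sudo:
demo = ensure_mount_point(Path(tempfile.mkdtemp()) / "Volumes" / "cloud")
print(demo.exists())
```

Once every box exposes the same path, the project clone, nearline switch, and framestore caching steps all see identical paths and nothing downstream has to be remapped.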


yeah, I mean, that's all fine and good.

The problem is which cloud storage provider to use, with macOS throwing the hammer down and Lucid completely losing its mind.

symlinks are just another annoying step, on both Windows and macOS, with most cloud providers; Lucid, with its simple "mounts as a drive" thing, was always just better, simpler.

Perhaps it is worth looking at MEGA; they have solid infrastructure, and although I have not played with it professionally it may be a good option.

They handle enormous files easily and their Windows/Mac/Linux client is battle-tested, so perhaps it is worth giving it a spin.

I still believe there is a hole in the market to put this right once and for all at a decent price, but I suspect many of these services are built on top of others, so it is very likely their pricing model is broken because of that.


its interesting for sure

the main issue with all of these is always pathing.

Apple now forces everyone to put stuff into ~/Library/CloudStorage or some crap; completely useless, and everyone is angry, but you know Apple does not care.

Lucid gets around it by using a virtual filesystem (macFUSE); the downside here is that read/write performance through the cache is just not very good, so you end up nerfing your local SSD performance by a LOT.

I have built my own prototype LucidLink clone, because in the end they didn't invent any of this at all; it's highly obvious what they use under the hood. They wrap it up nicely, tune it, and charge a lot for it so they can pay sales people and marketing.

all it is is basically

  1. some S3 Storage
  2. Rclone with VFS (virtual filesystem)
  3. macFuse to mount the VFS as a “drive”
  4. a bunch of magic sauce to wrap it into a nice app with a GUI and to tune everything to be as stable as it is.
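A minimal sketch of how steps 1-3 could be glued together, assuming rclone's on-the-fly S3 remote syntax and macFUSE already installed; the endpoint and key values are placeholders:

```python
# Build (not run) an rclone mount command: an on-the-fly S3 remote mounted
# through macFUSE with a bounded VFS cache. All credential values below are
# placeholders, not real defaults.

def build_mount(bucket, mountpoint, endpoint, key, secret, cache_gb=50):
    """Assemble the rclone mount command line as a list of arguments."""
    remote = (f":s3,provider=Other,endpoint={endpoint},"
              f"access_key_id={key},secret_access_key={secret}:{bucket}")
    return ["rclone", "mount", remote, mountpoint,
            "--vfs-cache-mode", "full",
            f"--vfs-cache-max-size={cache_gb}G",
            "--daemon"]

cmd = build_mount("my-bucket", "/Volumes/replay",
                  "https://<accountid>.r2.cloudflarestorage.com",
                  "<accesskey>", "<secret>")
print(" ".join(cmd))
```

The `--vfs-cache-mode full` / `--vfs-cache-max-size` pair is what gives you the Lucid-style pinned local cache; the rest of the polish (GUI, tuning, pinning UX) is the part you'd have to build yourself.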

Here you can see it working using R2 storage from Cloudflare. Happy to send people the code to try themselves with their own S3.

actually I can just post it here
replaylink_v002.py (13.0 KB)

It works with any storage supported by rclone; you don't have to use S3/R2, but you'll have to change the code.

run it like so on a Mac:

sudo python3 replaylink_v002.py <bucketname> <mountpoint> \
    --access-key <youraccesskey> \
    --secret-key <yoursecretkey> \
    --r2-secret <yourr2secret> \
    --cache-size <your local cache size in GB>

maybe interesting for @allklier since you switched to Wasabi: you just have to change a few lines to make it Wasabi S3 compatible and boom, you see your Wasabi bucket content like it's Lucid.
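A hedged sketch of those "few lines to change", expressed as the rclone S3 backend parameters such a script would swap out; s3.wasabisys.com is Wasabi's documented service endpoint, while the region and bucket names here are placeholders:

```python
# Backend parameter sets for the two providers; only these differ, the
# mount/VFS logic stays identical. Values in <> are placeholders.
R2_PARAMS = {
    "provider": "Cloudflare",
    "endpoint": "https://<accountid>.r2.cloudflarestorage.com",
}
WASABI_PARAMS = {
    "provider": "Wasabi",
    "endpoint": "https://s3.wasabisys.com",
    "region": "us-east-1",
}

def backend_string(params, bucket):
    """Render backend params as an rclone on-the-fly remote string."""
    opts = ",".join(f"{k}={v}" for k, v in params.items())
    return f":s3,{opts}:{bucket}"

print(backend_string(WASABI_PARAMS, "my-archive"))
```

Swapping providers then amounts to choosing which parameter dict feeds the remote string; everything downstream of the mount never notices.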


I’ll take a look.

Recently I was using Lucid more as daily offsite storage than for collaboration. So GoodSync with Wasabi does the trick at $6/TB and no drama. It offers versioning and immutable storage if you want it. No egress fees. But it is limited on read traffic; it's really priced as a backup solution rather than a collaboration solution. If your read bandwidth consistently exceeds your write bandwidth, they will call you out (which is what created a problem for LucidLink).

I don't mind collab storage on AWS S3. My issue is the markup and the big bundles Lucid is making out of it. If I can use it at my own metered rate for collab, all good. And then archive on Wasabi.