LucidLink Rugpull

FWIW I dumped a lot of stuff onto Suite, and apparently my router is not happy with whatever Suite is doing upload-wise. Gotta say their support is crazy good; they really tried to find out what causes this behaviour, but so far no luck.

It causes insane latency spikes even when throttled to 1/3 of my total bandwidth, to the point where using Parsec/Jump Desktop becomes impossible. Unthrottling it actually increases latency so much that my router thinks my WAN died and wants to fail over to 5G…

It's certainly not every case, but many large firms pay for a dedicated VPN and let the VPN provider manage the bandwidth to and from the internet.

It's not easy or cheap.

You might even describe it as eye-watering: like sitting on a cactus in church.

We have about 500TB of active data. In Lucid/Suite Studio, that would be a minimum of $35K/month. For that, I could have two fiber internet connections from different providers and buy two top-of-the-line workstations, have it all on Teradici, every month. That would be way more productive than doing some bastard localized-data workflow.
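For scale, a quick back-of-the-envelope using only the figures quoted above; the per-TB rate is derived from those numbers, not a published price:

```python
# Back-of-the-envelope check on the figures quoted above.
active_tb = 500        # active data, TB
monthly_cost = 35_000  # quoted minimum, USD/month

per_tb = monthly_cost / active_tb
print(f"${per_tb:.0f}/TB/month")        # -> $70/TB/month
print(f"${monthly_cost * 12:,}/year")   # -> $420,000/year
```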

The fact that this huge thread exists demonstrates why the Lucid-style workflow is no longer viable and is a remnant of the COVID past.

5 Likes

Sure, but it's not a bandwidth issue; we pull similar or more bandwidth through Lucid and have no issues with latency spiking… so idk.

It sounds like your ISP is automatically throttling unusual traffic.

I think you aren't wrong, but also not completely right.

If I ran uncompressed or PIZ workflows with 4-6K media, I would be looking at a similar amount of data.

However, I have managed to keep our active project size below 10TB for the last year. It wasn't easy; it required a lot of workflow changes and "digital housekeeping".

And I wouldn't put archive data on cloud storage; that's insane. Idk who would pay even $20 for a TB of cold, idle data.

We run DWAB, we don't store any raw footage on our hot storage, we don't keep old renders for long, etc.

Hot: ~10TB
Cold: ~50TB (local on 2 NAS systems)
Archive: no idea, I'd need to count LTOs, but it's a lot :joy:
Raw-footage dumping place, aka ingest NAS: ~100TB

Makes everything nice and tidy and small and manageable, aka efficient. I don't want all this weight weighing me down.
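For anyone curious what that housekeeping can look like in practice, here is a minimal sketch of an age-based demotion pass from hot to cold storage; the mount points and the 30-day threshold are invented for illustration, not anyone's actual pipeline:

```python
# Minimal housekeeping sketch: move renders older than a cutoff
# from hot storage to cold storage. The paths and the 30-day
# threshold are illustrative assumptions only.
import shutil
import time
from pathlib import Path

HOT = Path("/mnt/hot/renders")    # hypothetical hot NAS mount
COLD = Path("/mnt/cold/renders")  # hypothetical cold NAS mount
MAX_AGE_DAYS = 30

cutoff = time.time() - MAX_AGE_DAYS * 86400

for entry in HOT.iterdir():
    # Demote anything not touched since the cutoff.
    if entry.stat().st_mtime < cutoff:
        dest = COLD / entry.name
        print(f"demoting {entry} -> {dest}")
        shutil.move(str(entry), str(dest))
```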

That said, I'd rather trust a NAS I built and deployed myself, and have a fleet of workstations that I manage, than trust some SaaS provider not to crap its pants. So that's that.

1 Like

@ALan - many people here are not rocking 1/2PB volatile annually, let alone monthly.
But I love that you are…

I might just have to pull prod's tooth and remove their local MacBook access to storage. They absolutely love being able to play a QT daily on their MacBooks; it's been so nice for them.

They are all remote, using wifi in the kitchen: good enough to pin a few hundred megs of stuff onto their desktop, not good enough to use Parsec/Teradici and get actually good playback.

Would I rather run a badass NVMe NAS and 25/50Gbit? Yes.

1 Like

@finnjaeger - maybe Father Christmas will dump an old school sack of bank robbery for you to launder…

1 Like

I already checked; an SGI Onyx does not fit through my European chimney.

2 Likes

@finnjaeger - you could use it as a house, or at least a bar…

100% with @ALan on this, but I understand it is horses for courses, so my experience may not be the same as everyone else's.

One thing I would say is that a good pipeline tool could also supersede a whole lot of the issues being addressed here, but that is a whole other discussion.

A push/pull scenario, where you provide the storage pre-named and have file paths to match, really isn't expensive or onerous either, given a small amount of training, a good folder structure, and a small amount of preplanning. Way cheaper too, that's for sure!! Oh, and you're only pushing/pulling when you're ready to, not every time you render something with some background service doing the pushing/pulling for you.
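To make the pre-named idea concrete, here is a minimal sketch of a shared per-shot folder skeleton that each site creates against its own storage root, so pushed/pulled files land in predictable places; the folder names are invented examples, not a standard:

```python
# Sketch of "pre-named storage + matching file paths": every site
# builds the same per-shot skeleton up front. The folder names are
# illustrative assumptions, not anyone's actual structure.
from pathlib import Path

SKELETON = ["plates", "renders", "setups", "exports", "reference"]

def make_shot(root: Path, project: str, shot: str) -> Path:
    shot_dir = root / project / shot
    for name in SKELETON:
        (shot_dir / name).mkdir(parents=True, exist_ok=True)
    return shot_dir

# Each site runs this against its own storage root; the relative
# paths below the project root then match everywhere.
make_shot(Path("/mnt/NAS"), "ProjectX", "shot_010")
```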

1 Like

Oh, and whatever happened to listening to what @ALan says, as he is usually right, @finnjaeger?

:rofl:

2 Likes

Two things come to mind:

The idea of a Cloud NAS (LL's original promise) is actually pretty cool. Kind of taking what you love about your Synology, but placing it in the cloud with the same ease of use for those times when you want to share files with team members, or even just with yourself when you're not at home base. I know there is G-Drive, and Dropbox, etc. But they all stink as far as I'm concerned. I want a no-frills, virtual filesystem to cloud storage, with metered storage pricing, not some awful tier I have to commit to.

Secondly, I'd love a cloud-based storage solution that makes it very easy to set up a pipeline with other artists you're working with. I want that solution to focus on features and functionality, not be a reseller of overpriced storage. I'll bring my own storage, thank you very much.

The old saying we had at Amazon (during the good days) was: "I'm looking for the person who gets out of bed every morning and worries about how to make x the best it can be."

I want that person to build an easy-to-use and reliable shared cloud storage solution that can deal with a variety of workflows, not just basic folder sync. It needs to take advantage of available bandwidth and caching infrastructure to make it usable. And keep it a stable ecosystem, not chase new stars every year with version 3.0 and screw everyone still on 2.0, like LL.

While in many cases it makes sense to bring the eyeballs to the data, it's nice to have an option to do it the other way around that works, isn't a rip-off, and doesn't lock you into stupid setups. Just make good software. Plain and simple. We can take care of the rest.

1 Like

Essentially, @philm's open clip/intelligent workflow is clearly the way to build a project that is Flame-centric and reliant on something like LL.
Their tech deliberately acts "dumb" and maximises file sizes with copies etc.
We did not have time to implement it and educate everyone in what I know is a great pipeline from Phil.
LL had a great product and jumped through all the hoops with certification within our industry, but now it feels like VFX is not really a consideration.
Capitalism tells us that someone else should shortly pop up to fill that space in the market?

1 Like

Publishing and open clip workflows don’t require something like LL to be effective and lean.

Conversely, a 500TB NAS with 25Gb to each client Flame and a redundant project server implementation can be insanely ineffective and wasteful.

Storage is a means to an end, not the be-all and end-all. This thread is actually starting to confuse the real issue. At its core, there is a gaping deficiency in how Flame projects function and how media is managed. If more effort were spent in this forum reaching some kind of consensus on how that could be fixed, and then impressing that common viewpoint, as a unified voice, upon the people who are in a position to make those changes, we could stop dancing around like this.

3 Likes

@cnoellert - preach brother

1 Like

This is the discussion that should be had.

2 Likes

If anything, this thread proves that people need solutions other than traditional NAS deployments and fast wires.

4 Likes

I've always thought it would be handy to be able to set up a Flame-project-based home file path. It could even be a token in a file path. Each system could then set up its own home file path.

So on one system it might be:
/mnt/NAS/ProjectX
On another it might be:
/Volumes/TrueNas/Projects/ProjectX

If there was somewhere in the Flame project to set up a custom home assigned to a token (let's say `<home>` for this example), then your Flame write file path might be `<home>/shot_010`, and what you do will end up in the correct place after that. You could even have something for open clip renders: shot exports in one place, the open clip file in another, your batch setup somewhere else, but whatever system you are on, once you transfer that structure it will just open up.

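For illustration, a minimal sketch of how such a `<home>` token could resolve per system; this is purely hypothetical, since Flame has no such token today, and the hostname-to-path mapping is invented (reusing the example paths above):

```python
# Hypothetical <home> token resolution: each system maps the token
# to its own storage root, so the same project-relative path opens
# everywhere. Hostnames and mappings are invented examples.
import socket

HOME_BY_HOST = {
    "flame-linux-01": "/mnt/NAS/ProjectX",
    "flame-mac-02": "/Volumes/TrueNas/Projects/ProjectX",
}

def resolve(path: str) -> str:
    home = HOME_BY_HOST[socket.gethostname()]
    return path.replace("<home>", home)

# e.g. on flame-linux-01:
# resolve("<home>/shot_010") -> "/mnt/NAS/ProjectX/shot_010"
```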

You can also explore the path translation features in Flame.

Best option yet: build a pipeline tool, or chat with @philm about Logik Projekt, as there are a whole lot of features in either option which would make remote workflows so much better.

I still agree with @ALan, though, on making the interface remote instead of the shot data. We work with studios who won't let us send data out of the building, or we work in DWAA/DWAB, which kind of forces our hand a lot of the time. Once again, horses for courses.

This thread needs to be framed within a particular pipeline, though, as there definitely would be better ways to do things depending on your use case and budget, and this will change substantially depending on your circumstances.

1 Like