LucidLink Rugpull

# Point /PROJEKTS at whichever storage is live (one link at a time):
sudo ln -s /mnt/NAS /PROJEKTS
sudo ln -s /Lucid/Shared/Jobs /PROJEKTS
sudo ln -s /Volumes/TrueNAS/Projects /PROJEKTS
sudo ln -s /Volumes/PortableSSD/Stuff /PROJEKTS

# Create the project node via Wiretap
cd /opt/Autodesk/wiretap/tools/current
./wiretap_create_node -n /projects -d MyProjekt

# Trailing slashes keep rsync from nesting MyProjekt inside the destination
rsync -av /PROJEKTS/MyProjekt/ /mnt/NAS/Backup/PROJEKTS/MyProjekt/
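The symlink swap above can be wrapped in a tiny script. A minimal sketch, using placeholder paths under /tmp so it runs without root; in production LINK would be /PROJEKTS and TARGET whichever mount is currently live:

```shell
# Placeholder paths so this sketch runs without root; in production,
# LINK=/PROJEKTS and TARGET is the storage mount you want live.
LINK="${LINK:-/tmp/PROJEKTS}"
TARGET="${TARGET:-/tmp/nas_projects}"

mkdir -p "$TARGET"
# -sfn replaces an existing link in one step (no rm + ln gap),
# so anything resolving the link never sees a missing path.
ln -sfn "$TARGET" "$LINK"
readlink "$LINK"   # -> /tmp/nas_projects
```

The `-sfn` combination matters: a plain `ln -s` fails once the link exists, and `rm` followed by `ln` leaves a window where the path is gone.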
1 Like

This is the jumping-off point of my reverse users group diatribe → https://youtu.be/14mLpW2O1Wc

2 Likes

This gets root level shit going but Flame needs to get to a point where we can shift large portions of diverse data around based on variables/tokens evaluated in specific contexts.

That requires folks illustrating to other folks how such a system can work.

…and that’s not to say this isn’t important or viable now. Absolutely the contrary—it just needs to be extrapolated, contextualized and made significantly more granular to create powerful workflows operating in modern pipelines.

1 Like

I tried that - some people are receptive - not many.

1 Like

We drag them, kicking and screaming brother.

1 Like

times are changing, paradigms are mutating, who can tell what is on the horizon?

one of the complications with tokens is that, cough, flame tokens are not universally available in every flame module, nor are they always literal.

for example, ‘background index’ is actually <background segment> - if you didn’t know that things aren’t always called what they are (why would you follow that logic?) you might spend a heartbeat or two looking for the corresponding ‘background segment’ token.

life is short.

openclip can get all kinds of messy when paths are inconsistent.

that legacy flame island cached framestore looks appealing again, right?

it sort of makes me think about climbing to the front of the titanic for a few more minutes of air…

a complete, native tokening system would be useful.

a customizable tokening system would be great, but only in so far as we’d make 500 versions of the flame learning channel:

“take the banana and stick it into the tracker sandwich where it will become peanut butter for your temporal lunchbox”

i expect that someone might soon invent a dictionary for most PyMediaPanel objects and then wrestle that monster into submission.
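A dictionary like that could start as nothing fancier than a lookup table. A sketch in bash, where the only real entry is the 'background index' pair mentioned above; the table's shape and the function name are assumptions:

```shell
#!/usr/bin/env bash
# Hypothetical token dictionary: UI label -> literal token string.
# Only the 'background index' pair comes from this thread.
declare -A FLAME_TOKENS=(
  ["background index"]="<background segment>"
)

lookup_token() {
  local label="$1"
  # Fall back to a visible message rather than an empty string
  echo "${FLAME_TOKENS[$label]:-no token found for: $label}"
}

lookup_token "background index"   # -> <background segment>
```

Filling in the rest of the table is exactly the monster-wrestling part.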

still, you know what they say about breath and holding:

bring self-contained underwater breathing apparatus.

1 Like

That is spot on!!! This is exactly the sort of thing I had in mind when I suggested the above.

Great minds hey… except yours is great and mine is meh!

Even a broken clock is right two times a day.

Cheers @AdamArcher

2 Likes

Apps' reliance on fixed paths is plain stupid.

For some reason nobody thought we would ever need different paths across machines.

a file based project is almost always superior to a database.

Houdini got this locked in, with easy-to-use variables like $HIP.

Resolve has very nice path translation.

Flame's path translation is not good.
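For comparison, the Resolve-style mapping is really just a prefix rewrite. A sketch using mounts borrowed from earlier in the thread as stand-ins; a real site config would carry its own pairs:

```shell
# Sketch of path translation as a plain prefix rewrite.
# The mount pairs are stand-ins, not a real site config.
translate_path() {
  case "$1" in
    /Volumes/TrueNAS/Projects/*)
      echo "/mnt/NAS/${1#/Volumes/TrueNAS/Projects/}" ;;
    /Lucid/Shared/Jobs/*)
      echo "/mnt/NAS/${1#/Lucid/Shared/Jobs/}" ;;
    *)
      echo "$1" ;;   # no mapping: pass through unchanged
  esac
}

translate_path /Volumes/TrueNAS/Projects/MyProjekt/shot01.exr
# -> /mnt/NAS/MyProjekt/shot01.exr
```

That it fits in a dozen lines of POSIX sh is the point: there is nothing hard about this feature, it just has to exist inside the app.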

Resolve has actual multi-user collaboration, so a database makes sense.

Flame has no collaboration, and don't tell me "shared libraries" is collaboration; that's a shortcut to manually sending pieces of a project back and forth, duplicating data, which turns into a hot mess.

A single point of truth is what we need.

So yes, in the end:

Flame is bad for these kinds of workflows, and we have to find very "heavy" solutions around its deficiencies to make it behave.

We shouldn't have to. Flame projects should be text files that you can version, like Nuke Studio's.

Batch setups should also be like Nuke files.
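If projects were plain text, ordinary git would give versioning and diffs for free. A sketch with a made-up file name, nothing Flame-specific:

```shell
# Sketch: versioning a hypothetical text-based project file with git.
# The directory and file names are placeholders.
DIR=/tmp/flame_project_demo
rm -rf "$DIR" && mkdir -p "$DIR" && cd "$DIR"
git init -q
git config user.email demo@example.com
git config user.name demo

echo 'timeline: v001' > project.txt
git add project.txt
git commit -qm 'first cut'

echo 'timeline: v002' > project.txt
git commit -qam 'revised cut'

git rev-list --count HEAD   # -> 2
```

Every version of the cut, diffable and recoverable, with zero vendor tooling.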

Managed media has to go away.

I see no advantage to the way Flame does it; it's overly complex, bloated, and annoying, giving us no performance or usability advantage.

No matter how much I try to teach Flame artists new workflows with open clips, unmanaged media, DWAB, it's hard to teach old horses new tricks. Every job I'll find random precomps done with Render nodes… no matter how often I try.

Maybe it's time to look into the future and use something else.

To be fair though,

Resolve’s multi user collaboration tools have lots of flaws to the point of being unusable.

Resolve’s path translation is not without flaws, but at least it is easy to change a whole lot of files quickly. Camera metadata is limited, like Flame's.

Resolve’s cache system is and always has been buggy and flawed. I do have a good workaround for it so it acts a bit more like Flame but it is still a workaround. Render in place is convoluted.

Fusion Studio is incredible for the price, but doing some simple things is quite convoluted compared to Flame or Nuke.

Houdini’s filepath system is awesome!!!
Houdini Copernicus has potential but isn’t there yet. You can’t conform in Houdini.

Nuke Studio is Nuke Studio.

Baselight is great… for grading. Try adding a dissolve.

Adobe CC is Adobe CC and who wants layer based compositing.

Assimilate Scratch is actually pretty impressive and I want to reevaluate it but you still need to send comp shots to another piece of software. Not too many people know how to drive it and their focus seems to be on dailies and VR.

Mistika showed so much potential and was crazy impressive when I first saw it. Mamba showed promise too. They too went down the VR route. The interface was just plain weird though, and not intuitive. Once again, a lack of talent that knows it.

Silhouette is also crazy good and does what it does well, which doesn't include a 3D toolset, conform, or a whole lot of other things.

I think you get my point. Don't worry, I totally get and share a lot of your frustrations with Flame, but there isn't necessarily a standout alternative right now. There are so many ways I think Flame could be a better piece of software, but I'm not sure we will ever see them, and I'm not seeing a standout anywhere else just yet, unfortunately.

That is a hold-over from a time when these apps were turn-key systems and didn't have roommates or couch-surf like everyone else today.

Mistika and others have the same bad habit for the same bad reasons.

A form of tech debt that takes time to tackle. You need to scream a bit louder if you want it fixed.

Pretty sure I'll end up as the main dev of an obscure finishing tool that I'll open-source at around age 40 as my personal midlife crisis.

Five people will use it and complain non-stop about its bugs.

It will be amazing

6 Likes

I’ve found the key to happiness is to lower my expectations. Try it.

4 Likes

@randy - hope and ambition are easier to swallow when they have been crushed…

Harry Potter No GIF

Super anecdotal stuff with Lucid: everything was fine and dandy.

Now we are plagued with corrupted caches on some machines.

Meaning that a random file will show as broken on one or sometimes two machines but not on others. The only fix is to reset the local cache (move the cache to a new location, restart the client). This has mostly plagued our transcode server that creates automatic dailies. I've been talking back and forth with support, but they are pretty much ghosting me at this point.
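The reset described above can be scripted, though the cache location here is a placeholder; check where your LucidLink client actually keeps its cache, and restart the client afterwards rather than deleting anything:

```shell
# Sketch of the cache reset: park the suspect cache, give the client a
# fresh directory, then restart it. CACHE_DIR is a placeholder path.
CACHE_DIR="${CACHE_DIR:-/tmp/lucid_demo_cache}"
STAMP=$(date +%Y%m%d-%H%M%S)

mkdir -p "$CACHE_DIR"
# Park rather than delete, so support can inspect the corrupt cache
mv "$CACHE_DIR" "${CACHE_DIR}.corrupt-${STAMP}"
mkdir -p "$CACHE_DIR"
# ...restart the LucidLink client here so it rebuilds from object storage
ls -d "$CACHE_DIR"
```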

Not cool.

My guess is that with the rush towards LL3.0, engineering resources for LL2.8 were starved and left with breadcrumbs, decreasing bandwidth and QA cycles for recent updates, which introduced bugs.

Not unusual for software that falls into neglect because management has lost interest and prioritizes other glitter instead.

Shameful for a service provider of critical infrastructure. Does not help them be trusted with their new toys.

As with anything in life, there’s a bell curve and some people serve a purpose occupying the lower sections of the curve, and a few even the area close to zero. It’s a reminder what it looks like and why we generally prefer the other end of the curve. It seems like senior management at LL has found a home in that space at the bottom of the curve.

1 Like

As you describe your issues with the LL cache, I’m starting to wonder if the ‘Wasabi incident’ was actually a LucidLink problem, not a Wasabi problem, at least in terms of the broader impact.

This is the description:

‘some objects on these disk are no longer accessible’ reads like: yes, some data loss has occurred, but seemingly on a very localized scale. Like one RAID enclosure lost some data. It should only impact data that was stored without replication.

Yes, we’ve heard from folks continuing to experience significant reliability issues (cache corruption and otherwise, you’re not the only one).

That seems like a problem that sits in the layers on top of the storage provider and in the application logic between different nodes of a file space not interacting with each other properly.

Another own goal? They seem to do that…

1 Like

Sure, it could be Wasabi having issues if the data pulled into the cache is corrupt… man, I don't know.

I pay good money to make this stuff other people's problem.

Has anyone heard of anyone on Wasabi losing data, except for LL customers?

3 Likes