LucidLink and Flame playback

I’m testing LucidLink on a handful of physical machines, Linux and macOS. Cache locations are set to NVMe RAIDs with more than enough bandwidth for full-res playback.

I’m running up against a strange issue. On macOS, once a sequence is cached from the filespace to the local cache, Flame happily plays the sequences back with no issue. You can version open clips and, as quickly as sequences are cached, they play. There can be a moment of lag as the files come down, but by the second loop it’s all realtime, with no caching needed on playback. Pretty smooth.

The issue I’m running into is on Rocky Linux, both 8.5 and 8.7: the player will almost never play back in realtime, and then I’ll randomly get five loops in realtime before it goes back to not playing correctly. This is despite the frames definitely being cached and the cache filesystem being capable of realtime playback. At first I was convinced it was a config error on the Linux side. I double-checked the installation with a tech and it’s set up correctly. We verified filesystem performance with their lucid perf tool; it’s all green. Next I checked the workflow through Resolve: import a DPX sequence from the filespace, throw it on the timeline and hit play, which, of course, had no playback issues at all and happily hit realtime on the first try. So it’s just the Flame timeline, which strikes me as pretty strange given it’s an approved storage solution.

To add to the strangeness, if you push play in Batch, Flame seems to happily push through the frames without strange pauses, hiccups, or blue and green bars on the player timeline. Resolve plays ball no problem on both Mac and Linux, and Flame plays ball on macOS, so what gives? Anyone working with Lucid?

@fredwarren and @Slabrie, is this a support-worthy pursuit?

I gave up on Lucid. Once I upgraded to Rocky, I couldn’t import anything cached, nor could I cache the media once it was soft-imported; I would need to run it through a dummy colour correct. Add to that strange, unexpected performance issues and it was a no-go. Export worked fine. We contacted Discreet, but they didn’t seem eager to take on the issue since it was not a “Flame” issue. I now download from Lucid with my Mac side piece and import from there, and never connect to Lucid. I’m grateful I’m a short-form guy.

For what it’s worth, the editorial department uses it constantly with little issue.

I haven’t run into this particular scenario. You could contact Lucid support. I had to work with them on a different issue a few weeks ago, and we ended up on a Zoom call and it was solved pretty quickly.

1 Like

Support’s been pretty helpful, but since it’s technically working we were at an impasse.

That being said, I think I figured out what was going on. Since I’m now launching Lucid via systemd, as opposed to the Lucid app on startup, the original config options like data cache size and location were reverting to defaults. I’ve set them again using lucid config, checked that they persist between restarts, and am running through the paces again.
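
For reference, this is roughly the check I mean; a minimal sketch where the exact config key names and the systemd unit name are assumptions about the LucidLink client, so verify against lucid config on your own install:

# set the cache options explicitly (key names assumed)
lucid config --set --DataCache.Size 500GB
lucid config --set --DataCache.Location /mnt/nvme/lucid-cache
# restart the daemon the way systemd would (unit name is hypothetical)
sudo systemctl restart lucid.service
# confirm the values survived the restart
lucid config --list | grep -i DataCache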

Rocky 8.7 seems to be working now, as is macOS. 8.5 is still giving me minor hiccups, but nothing that couldn’t be attributed to repinning the project folder on a data cache location.

Resolve playing back was a red herring; it was caching the frames at playback, I’m guessing.

2 Likes

It’s my first time out the gate with it.

I have to say, it fills a very specific need in the unmanaged project scheme, and if there were a possibility to have local sync endpoints it would be damn near perfect. My next experiment is an @ALan-style project server using a LucidLink filespace shared framestore. Should be interesting, but if it works, all editorial and shot data will be on volumes that can be consistently snapshotted and rolled back. Then I just need to deploy a VPC for the project server so I can do the same with it.

3 Likes

Happy to read you have been able to make this workflow work.

2 Likes

Hi @cnoellert

A couple of weeks ago, I installed LucidLink on our Flames (still CentOS); I never tried the UI, we’re just running it via systemd. I had some back and forth with their support tech. I’m copy/pasting some parts of the email convo in case you are testing; maybe it will be useful to you:

Yes, the cache change setting is stored on the host that has your LucidLink Filespace service, under this machine’s Node ID, or, if you used the global switch, it is stored for every machine and is persistent.

Let me explain some more about how it works. The cache in this case is just processing inbound I/O requests and outbounding them by uploading to the cloud storage, and the same when downloading. So if your network is able to handle the eviction of the I/O faster than your cache can be filled with new I/O, then the 5GB setting will work just fine. If your network is slower, then your cache is just acting as a buffer for new I/O being copied to cache while waiting until the network has evicted the existing I/O. But by all means you can bump it up; just don’t fill your SSD right up, as SSD drives perform badly when they get over 75% full.

The other relevant settings are the Maximum Upload and Maximum Download connections, the default being 64 each.

You can monitor this with the commands below; look at the fields called ActivePuts (uploads) and ActiveGets (downloads). What you want to see is all 64 active puts or active gets being utilised.

If you see ActivePuts consistently at 64, for example, bump the number up to 96, then see whether ActivePuts uses all 96; if it consistently uses 96, try 128.

In terminal type
lucid perf

And this one to see how much cache is getting used. Note you can change the size of the cache on the fly as well, but only to make it larger while you’re uploading. Wait for no activity before lowering the cache size.
lucid cache

You can output lucid perf to a file to capture it for review.
lucid perf > /tmp/lucidperf.out
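
If you want to follow their advice, this is roughly how I’ve been watching those counters; a minimal sketch assuming lucid perf streams its stats to stdout (the field names come from the email above, but the exact output format is an assumption):

# capture perf output in the background while you hammer the filespace
lucid perf > /tmp/lucidperf.out &
# ...play back, render, copy media...
# then look for the upload/download connection counters
grep -Ei 'active(puts|gets)' /tmp/lucidperf.out
# if ActivePuts sits pinned at the default 64, raise the connection
# limit per the advice above and re-test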

2 Likes

How is everyone using Flame with LucidLink? Are you just using it as external storage to sync unmanaged stuff across multiple locations? Or are people putting their project servers and framestores on it? It seems like that might work if everyone’s internet and cache drives were fast enough. As someone who can’t get symmetric internet at home without spending huge amounts of money, it intrigues yet terrifies me.

1 Like

We’re testing at this point, but it’s largely about connecting disparate locations with decent internet and reasonably speedy NVMes into a production patchwork quilt. The idea is an unmanaged workflow, DAM via ShotGrid and other DCCs, where the cache drive you might normally use as a framestore also acts as the cache for Lucid. Pin the project so that data fills up the cache as it’s created, and it’s kinda fucking brilliant.

Does it require speedy internet access? I guess, but then it depends too on how you approach things. I’m running 5-gig fibre at home, so it’s kinda perfect. Someone running 500 megabit might have a different experience, but honestly it’s hard to know for sure because of how things function.

I set it up with a shared stone directory as well, and it works exactly as you would hope. In my test bed I made a project on an M1 and then had two physical Flames, all with Lucid, throw timelines around via shared libraries, render and version unmanaged media (also housed on Lucid), as well as create rendered managed media in the timeline and Batch, and it all just worked. Sometimes there was a little lag as things copied over, but then it was as speedy as if it were local. It’s wild. I started the process of rolling out a project server AMI to test a hybrid cloud/physical structure, but I’ve got some real work to do tomorrow so it will be on the back burner for a week or so. I mentioned elsewhere I’d rather go to the dentist’s office than deal with AWS…

2 Likes

Why not just centralize all machines in a single location and have the remote artists Teradici in? Would make things a lot easier.

@hBomb42 LucidLink is used for unmanaged workflows only. See it as a NAS; to get RT playback, you would cache on your local storage.
See these for more details:

1 Like

This is our current workflow, which works great for our Flame team, but our editors, sound designers, and 2D/3D people hate it for a number of reasons (mostly because some of them are in places like Montana and can’t get reliable playback) and really want to work local. Enter LucidLink. The amount of data they deal with compared to us is much smaller though… DNxHD 36 dailies pale in comparison to ARRIRAW!

With the amount of switching artists around we do, on finishing jobs in particular, for scheduling reasons, I am skeptical that LucidLink would be anything other than a massive time/data-transfer sink. But I know that I don’t know what I don’t know, so I figured I’d ask the smarties here what they were doing with it.

1 Like

Yeah it would, but it just isn’t possible for us currently for more than a few machines. That may change before too long, or it may not… we’re still figuring things out. But being able to add talent with their own kit who can access our pipeline in a way that doesn’t fuck up any of the pathing, makes the pushing and pulling automatic, and allows them to work unmanaged at the speed of their cache volume is a pretty viable interim solution.

This is 100 percent on. I think the core of your job, timelines and whatnot, should be managed on those local machines. What I’ve seen though (and this is only unmanaged workflow): since you’re only ever pulling down what you need if you’re not pinning the project in Lucid, you can still push editorial around quite easily, if you don’t mind pulling whatever source you don’t currently have.

That’s to say it can work, but a better use is for those folks in the outer rim.

What’s the reason we shouldn’t use it for managed workflow, out of curiosity? It seems to work as you might expect…

Every freelancer tells us “I have my own Flame at home”, and it is always something like an iMac from 2017. And it’s like, dude, we work on 6K plates and even our $20K Threadripper beasts still choke. Also, in our situation, our pipeline is much more involved than just matching paths. If you ever want to chat about Teradici optimizations, I’m available. I’ve been blasting them for the past two years to get things in order.

4 Likes

@cnoellert Flame S+W managed media requires filesystem functionality that is not supported by LucidLink, so storing your volume on that storage would not work. Also, since we are talking about media files syncing over the internet, Flame and its services would be very unhappy if managed content were not available.

So, use LucidLink for your source media and published / Open Clips content and you will have a nice workflow. Use one of the qualified storage solutions we have on our System Requirements page so you have an optimal on-premise / high throughput workflow.

2 Likes

One of the things I thought about when using “hybrid” workflows with Lucid and on-prem workstations is to use a cache server in the office and have remote clients connect via Lucid. This would enable local Flames to work as always, with no Lucid client on the machines.

Now, this doesn’t solve remote workers’ playback issues with Lucid, however.

This also works for NFS. I haven’t tried it, to be honest, as my Lucid tests were all but “amazing”. The best thing about this would be that it can mirror your local server paths.

I really think there is a missing piece of software for this kind of stuff. I just want a normal file sync like Dropbox, but with my own set paths.

On Windows there is a workaround: you can mount any random folder as a drive letter, so you can map whatever Dropbox sync folder to a drive letter using the SUBST command.
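
Something like this, with hypothetical paths:

REM map a Dropbox sync folder onto a drive letter (paths are examples)
subst J: "C:\Users\me\Dropbox\job_folders"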

On Linux this is also extremely easy; you can just symlink whatever wherever, so no problems.

On macOS it’s all restricted and annoying, but it is still possible to symlink stuff.
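
On both it’s just a symlink from the sync folder to the path your pipeline expects; a minimal sketch with hypothetical paths:

# Linux/macOS: make the synced folder appear at the path the pipeline expects
ln -s /home/me/Dropbox/job_folders /jobs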

What I want is an app that does that FOR me on every OS…

1 Like

Curious how this is panning out for you in 2024. We’re all Mac, generally all remote, and while Dropbox has worked fine, we’re looking at Lucid as a replacement.

It basically works like Dropbox, right? Have a Finder folder on an external RAID, cache project media as needed for full Flame speed, then uncache and leave it in the cloud when you’re done?

Slightly different, but generally yes.

  1. LucidLink manages the cache. You can provide the folder it lives in, but apps don’t see the cache folder; they see a separately mounted virtual file system under /Volumes. Your cache doesn’t have to be as big as the files you work on. Lucid will automatically grab content from the cloud and make space in the cache, but there can be some latency if the file isn’t local.
  2. To pre-stage files in the local cache you ‘pin’ a folder, which then primes the cache, assuming you have enough space (see the sketch after this list).
  3. Generally speaking, LucidLink works best with the cache on an NVMe drive, not a spinning RAID. It works either way, but performance is much better with an NVMe cache.
  4. You don’t have to uncache, though you should ‘unpin’ once you’re done.
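
The pin/unpin flow from items 2 and 4 looks roughly like this; a minimal sketch where the lucid pin / lucid unpin subcommand names are assumptions based on this thread’s terminology (check lucid --help on your client) and the paths are examples:

# pre-stage a job folder into the local cache (subcommand name assumed)
lucid pin /Volumes/filespace/jobs/spot_1234
# work at cache speed, then release it, leaving the data in the cloud
lucid unpin /Volumes/filespace/jobs/spot_1234
# confirm cache usage afterwards
lucid cache
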
1 Like