I am about to start a big project where I will manage two teams in different time zones. I don’t want to host the entire project on my NVMe drive and share it through LucidLink; I don’t think I have enough space for that. I would like to host it on a cloud option like AWS, Wasabi, etc., and cache just what I need. I would like your thoughts, guys. Have you ever done that? Is AWS S3 the best solution for this?
It’s definitely doable. It depends on a few parameters, and on how your project is organized.
There are a multitude of file-sync utilities that understand the cloud storage protocols, so it’s very easy to get a drive and have it mirrored into the cloud automatically, with options for one-way copy, one-way sync (deletes are propagated), and two-way sync, including preserving changed/deleted file history on the cloud side to protect against accidental deletes.
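To make the one-way copy idea concrete, here is a minimal sketch using boto3 against S3. The bucket, prefix, and paths are placeholders; in practice you’d more likely use an off-the-shelf tool (rclone, GoodSync, etc.) or at least add retries, delete propagation for sync mode, and proper checksum comparison.

```python
# Minimal one-way "copy up" mirror: local folder -> S3 prefix.
# Bucket/prefix/paths are placeholders; a real tool adds retries,
# delete propagation (for sync mode), and checksum comparison.
import os
import boto3

BUCKET = "my-project-bucket"        # placeholder
PREFIX = "projects/big-job/"        # placeholder
LOCAL_ROOT = "/Volumes/NVME/big-job"

s3 = boto3.client("s3")

def already_uploaded(key, size):
    """Skip files whose key already exists with the same size (cheap change check)."""
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=key, MaxKeys=1)
    for obj in resp.get("Contents", []):
        if obj["Key"] == key and obj["Size"] == size:
            return True
    return False

def mirror_up(local_root):
    for dirpath, _dirs, files in os.walk(local_root):
        for name in files:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, local_root)
            key = PREFIX + rel.replace(os.sep, "/")
            size = os.path.getsize(path)
            if already_uploaded(key, size):
                continue
            print(f"uploading {rel}")
            s3.upload_file(path, BUCKET, key)

if __name__ == "__main__":
    mirror_up(LOCAL_ROOT)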
But here are the tradeoffs vs. using LucidLink. You might decide they’re worth the flexibility and the possible savings.
None of the file-sync approaches provide file locking, so you are not protected against accidental overwrites if two people change the same file at the same time. You have to have an offline protocol for that. Depending on how your work is split up and how disciplined your teams are, that may be OK.
File-sync utilities that copy individual files perform well on large files but are slow on file sequences. So if you deal with a lot of EXR renders, you’ll see a significant performance disadvantage compared to LucidLink. There’s no fix for that: LucidLink packages files up into larger blocks for transfer, and no other tool does that currently.
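To put rough numbers on why per-file transfer hurts on sequences, here is an illustrative calculation; all the figures below are assumptions, not benchmarks.

```python
# Rough, illustrative math: per-file request overhead vs. payload time.
# All numbers are assumptions, not measurements.
frames = 2000                 # e.g. a 2000-frame EXR sequence
size_mb = 10                  # assumed per-frame size
bandwidth_mbps = 50           # assumed sustained throughput in MB/s
per_file_overhead_s = 0.25    # assumed handshake/latency cost per individual file

payload_s = frames * size_mb / bandwidth_mbps            # raw data time
per_file_s = payload_s + frames * per_file_overhead_s    # one request per frame
blocked_s = payload_s * 1.02                             # ~2% overhead if sent as large blocks

print(f"per-file transfer: ~{per_file_s / 60:.0f} min")
print(f"block transfer:    ~{blocked_s / 60:.0f} min")
```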
Wasabi is not really an option. While they have favorable pricing, that pricing is predicated on being a backup, not a file-sharing solution; they have limits on egress that make it a poor fit. You can go with AWS S3, and that will work fine. But I would do the math to see if you’re really saving money. LucidLink uses AWS S3 storage and pays a bulk rate, while you pay the small-business rate, so you may not come out ahead. And Lucid’s pricing is more complicated these days, charging per member and then for extra storage, so you have to model it yourself.
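Something like this back-of-the-envelope model is what I mean by doing the math. The rates below are ballpark figures you should check against current price lists, and the usage numbers are made up.

```python
# Back-of-the-envelope monthly cost model. Rates are ballpark and must be
# checked against current pricing; usage numbers are made up.
stored_tb = 20          # active project size in the cloud
egress_tb = 30          # what the team pulls down per month

s3_storage_per_tb = 23.0    # roughly $0.023/GB/month S3 Standard (verify)
s3_egress_per_tb = 90.0     # roughly $0.09/GB internet egress (verify)

s3_monthly = stored_tb * s3_storage_per_tb + egress_tb * s3_egress_per_tb
print(f"S3 DIY:    ~${s3_monthly:,.0f}/month")

# LucidLink: per-seat licence plus storage; plug in your own plan here.
ll_seats, ll_per_seat = 10, 30.0    # made-up seat count and price
ll_storage_per_tb = 80.0            # figure quoted later in this thread (verify)
ll_monthly = ll_seats * ll_per_seat + stored_tb * ll_storage_per_tb
print(f"LucidLink: ~${ll_monthly:,.0f}/month")
```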
One part I’m not fully understanding: It sounds like you already have LucidLink, but you are concerned about your NVMe size.
For LucidLink your local cache drive should be NVMe, but your entire project doesn’t have to fit on it. As you upload the project into the filespace, you can copy more data than fits on your NVMe; LucidLink will manage that automatically. For LucidLink your NVMe is a cache, not storage.
Your storage limit is your filespace, not your local storage. And the size of your filespace only depends on your credit card, not on a physical limit. After you’ve finished uploading, everyone can work, and your local NVMe only holds the files that you are working on yourself (or reviewing); everything else just sits in the cloud.
LucidLink doesn’t serve your local storage to the internet; the cloud filespace is your storage, which you cache locally on SSD/NVMe.
Unless I’m missing something, LucidLink has end-of-lifed the Wasabi bit. Or, at least, they’ve been asking me to upgrade mine for the past year. They say it’s archival. It works for me and my use case, which is 10-30 artists banging on it daily, from edit to design to CG to Flame.
Ow! Man! I misunderstood how Lucid works. I had in mind that my storage limit was my local storage. So I can exclude AWS S3 from this equation and keep only LucidLink, as long as I increase the LucidLink storage amount. Once I upload everything to the cloud, I can unpin the media from my local hard drive.
That’s the theory, although I seem to have trouble clearing the cache once things are unpinned. Then when I go to pin new media, I get an error message saying the cache is full. The only solution I’ve found is to manually delete the cache folder and let Lucid rebuild the cache and DB. I’d love to know whether anyone is running the new LucidLink and/or has figured out a way of flushing the cache.
Well, that’s the downside with LucidLink. Their early versions were really good, but version 3 had a lot of problems with a rushed roll-out. Overall I’m not a big fan of the product, but there are few alternatives if you really just need that use case.
That being said, I don’t tend to pin anything. For a whole host of reasons, on the projects I work on I use LucidLink as a file transfer and copy the files I need into my own local folder hierarchy. Seems like treason, but it just works better for me. And without pinning I had no problems sharing files in an efficient manner with the rest of the team.
That is correct. Wasabi is not available for LL3. It was available under LL2, but several months ago, under false pretenses, they decided no new Wasabi filespaces would be available: you can keep working with what you have, but that’s it.
My best guess is that they originally made a sweetheart deal with Wasabi to offer storage without egress charges, and Wasabi was OK with that for the extra business. When LL3 launched and didn’t include Wasabi, they told LL to take a hike, and then suddenly a story about a data-center incident from months earlier was used to justify shutting Wasabi off without a replacement. The kind of stuff that startups under investor pressure do.
Wasabi offers storage with no egress fees, but your total egress cannot exceed the total amount you upload. Meaning each file you upload should only be downloaded once at most, which is great for archive but doesn’t facilitate file sharing, which is one upload and many downloads. Totally fair for Wasabi to make this rule, which is why I only pay $6/TB there, instead of $80/TB on LucidLink.
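A quick fit-check against that rule, with made-up numbers:

```python
# Wasabi-style fit check: monthly egress should not exceed what you store/upload.
# Numbers are made up for illustration.
stored_tb = 10
team_members = 15
tb_each_pulls_per_month = 2

egress_tb = team_members * tb_each_pulls_per_month   # 30 TB pulled down
print("fits Wasabi's model" if egress_tb <= stored_tb else
      "file-sharing pattern: egress exceeds storage, not a fit")
```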
I was an early adopter of LucidLink and liked what they did. I’m not a fan of what they have done since, and I no longer have an account there. Though I still use LucidLink on jobs where the production company has it, so I’m only a secondary user at this point.
I do use Wasabi for my own archive needs and am happy with that. For file sharing with teams I haven’t found a great replacement if the whole pipeline, or VFX in particular, is involved. For edit projects we have successfully migrated to Blackmagic Cloud. It works reasonably well, though it is far from perfect (at least as of version 19).
I experienced the same issues Angus mentioned with the latest version. However, I recently freelanced for a UK company that was using the Classic version, and it worked smoothly for me. No cache problems. I find copy and paste the best option, Jan, but to make copy and paste faster, you need to pin, don’t you? If I don’t pin, the transfer process takes more time.
Anyway, do you copy and paste manually, or do you use any magic sync software?
Well, pinning only downloads stuff in the background. At some point you do incur the transfer-time penalty; it’s just a matter of when, and whether you’re waiting.
If you pin something, it won’t be immediately available either, but you can go do something else rather than waiting for a copy/paste.
I do a mix of both. Sometimes I copy/paste (or, more likely, use rsync or cp in the terminal). Mac copy/paste is the worst, because it tries to create placeholders, check space, etc., so that penalty is not actually LL but macOS trying to be cute. But I do also use GoodSync at times for larger or scheduled syncs.
Thanks Jan. You’d think that some sort of cache-clearing tool would be top of the list.
I’m still using the Classic version on IBM servers, which seems to work really well. I do cache media when importing to Flame, but I render straight to Lucid (and reimport), then archive without cache.
In theory I could leave LucidLink pinning off, or at least only pin the minimum. But I find it gets messy, or I get sudden slowdowns in Flame, if an element isn’t pinned.
I’m also trying to make it easy for my work colleagues, so they don’t have to manually manage too much media.
It hasn’t helped that the last couple of jobs have had a series of 10-15 second very high-res shots either!
Re: cache cleaning - I think you have to keep in mind how LL caches work. LL is not a file sync utility (ala Resilio, etc.), but a virtual filesystem with a cloud backend. Very different architecture. It was originally described as a Cloud NAS, and that is what it essentially is. Cloud synced block storage with a local virtual file system that makes individual files available to you.
If you actually look in the filesystem folder where the cache lives, you don’t actually see all the files you pinned, but just a single very large binary blob.
The way LL works is that it keeps a single large binary file on your cache drive (kind of a virtual disk drive), which on the inside consists of encrypted chunks of data (not sure of the size, might be 256KB or larger). Then it runs a virtual filesystem that understands the internal structure and presents the files on your mount point where you access them.
When you go to the folder where you see the files, they don’t actually physically exist there; they’re just virtually mapped into this large, singular, encrypted binary file. In the background LL then just copies these blocks back and forth to the cloud, which is why it works better for image sequences: it’s not transferring individual files but large data blocks, for which it can saturate the link to the cloud endpoint.
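A toy sketch of that idea, purely conceptual and nothing to do with LucidLink’s actual on-disk format or chunk size: files map onto fixed-size chunks inside one big local cache file, and the sync layer only ever moves chunks, never individual files.

```python
# Toy illustration of a block-backed cache: files map to fixed-size chunks
# inside one large local file; the sync layer moves chunks, not files.
# Purely conceptual; not LucidLink's actual format or chunk size.
CHUNK = 256 * 1024  # assumed chunk size for illustration

class BlockCache:
    def __init__(self, cache_path):
        self.blob = open(cache_path, "w+b")   # the single large cache file
        self.index = {}                       # (file_id, chunk_no) -> offset in blob
        self.next_offset = 0

    def write_chunk(self, file_id, chunk_no, data):
        """Store one chunk (len(data) <= CHUNK) inside the blob and remember where."""
        offset = self.index.setdefault((file_id, chunk_no), self.next_offset)
        if offset == self.next_offset:
            self.next_offset += CHUNK
        self.blob.seek(offset)
        self.blob.write(data.ljust(CHUNK, b"\0"))

    def read(self, file_id, pos, size):
        """Virtual-filesystem style read: gather bytes from the chunks covering [pos, pos+size)."""
        out = bytearray()
        while size > 0:
            chunk_no, within = divmod(pos, CHUNK)
            offset = self.index.get((file_id, chunk_no))
            if offset is None:
                raise KeyError("chunk not cached; this is where a fetch from the cloud would happen")
            self.blob.seek(offset + within)
            take = min(size, CHUNK - within)
            out += self.blob.read(take)
            pos, size = pos + take, size - take
        return bytes(out)
```

Note how "flushing the cache" in a scheme like this means invalidating regions of the one big file and reusing them, not deleting individual files, which is the point made below.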
I think with cache clearing you may assume that you just flush the physical local copies of the file, but you’re actually just invalidating regions of this very large binary file. So it’s all a bit more complex, for good reasons, but not a literal file cache. If they write the code well (as they did in LL2) it’s a great system. If they have buggy code because they rushed it out the door under pressure, then you have the kind of stuff you’re experiencing.
The fix isn’t a better cache cleaning tool. The fix is for them to get their code to the same quality level LL2 was at.
Lucid does act like a NAS (Network Attached Storage).
However, this means that it appears as file storage NOT block storage and is (most likely) backed by object storage.
Lucid Link storage is presented to the user as a filesystem, like NFS or SMB/CIFS.
It’s subtly different from block storage, which appears to the user as a blank hard-disk device that you can format with your preferred filesystem.
Object storage is the dark art that underpins the world.
Anecdotally, Lucid pinning activities should be abandoned and replaced by the (I thought I’d never say it) more reliable and predictable Autodesk Media Directory (stonefs) caching.
Yes, the user sees a filesystem like any other for the mount point of LucidLink. Most flavors of Unix/Linux support ‘virtual filesystems’, which are 3rd party solutions that appear to the user as a regular filesystem, regardless of what they do under the hood.
Early versions of LucidLink used the ‘Benjamin Fleischer’ kernel extension on the Mac, which is also used by LTFS and others. I think they now have their own implementation to improve installation.
The one nuance of LucidLink is that no filesystem activity on the mount point directly interacts with the cloud. It only interacts with the cache, which is organized in block-storage format but lives inside one large binary file on the local filesystem, wherever you placed the cache folder. A separate process then transports this block data back and forth to the cloud.
So it’s a bit of a hybrid of what you describe. And at its core a good design, but also complicated, which means it needs solid dev effort.
Would you mind elaborating on this? Does doing Flame things to unmanaged LL media often run afoul of the LL client’s ability to keep things on the fast, local LL cache storage?