Hmm. That matches my settings – running version 8.5.7. Curious whether you loaded the connection profiles for Wasabi under Profiles. That’s how I connect, through a bookmark that gets created. Inside it, I have Transfers set to Default. I tried the Multiple Connections alternative as well, but saw no change. I opened a support ticket with Wasabi, so we’ll see what happens. Thanks!
You’re in the giant web of headaches with all these upload settings, which is what made me just go with unlimited Dropbox – their app does it all for you, so that’s the way I went. Basically I get unlimited storage for $700 a year, the files upload through their app, and I haven’t had any issues. I store somewhere around 40 TB. I got tired of the errors uploading on Wasabi, where I would get halfway through a 250 GB upload and it would fail; so far I never get that on Dropbox.
Yeah, I am looking into it as well. It all sounds great were it not for the few times Dropbox locked me out due to overuse. Couldn’t get back into my files for 24 hours despite upgrading immediately. Just rubs me the wrong way as a business model. But I totally agree, whoever can make this as painless as possible wins.
Saying adios to LTO, part MMXXIII: Looked into the Dropbox Business Advanced plan. Unlimited storage at $288/yr per user – that’s insanely cheap. Plus, it’s indeed very easy to drag and drop via the desktop app on a Mac. (The DB Linux app is .) Just connect to the local server and move 'em over. Set up selective sync and online-only so it doesn’t eat up your local space. Forget all the archaic FTP nonsense.
Limitations: individual files must be less than 2TB. OK, I guess. In that case none of my older, bloaty cached archives would upload, but with an uncached, published workflow this works very well. Plus, unlimited space means compressing isn’t always necessary.
I tested a 200GB project folder, and it took about an hour and 15 minutes to move over. So far, this seems to be the way to go. My only fear is the draconian cutoff measures Dropbox will take if you violate the limitations on the account. In the past, that was not on my Business account, so maybe I’m overthinking it. The Advanced plan allows for 4TB of traffic per day, and that’s a lot. If I were to violate that limit, then any high-traffic shared links would go inactive. But the account would still be active and available online. So this is all sounding good to me.
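Back-of-envelope, that’s 200 GB in 75 minutes, about 2.7 GB/min, or roughly 44 MB/s – call it a ~360 Mbit/s sustained upload, assuming the transfer ran flat out the whole time.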
This thread has a bit of whiplash, but now I really don’t know what the benefits of using Wasabi are.
Wasabi as a business came out wanting to be a single-purpose cloud storage provider, as compared to AWS, Azure, and Google, which are all full-featured clouds. That allowed them to offer better pricing, but it also exposes them to economic storms, as there are no other aspects of the business to buffer unexpected trends.
The benefit of these barebones cloud providers is that there is no one making draconian rules you have no influence over. Anyone who sells you unlimited storage for $288/yr has to average the actual cost over the pool, like an insurance company. That’s why they make the draconian rules when someone becomes an extreme outlier: it breaks their assumptions and math. Go with Wasabi or AWS and you pay metered prices for your actual usage, and they really don’t care how much you use, because you pay for what you do. If the price is too good to be true, well, it probably is. $288/yr for unlimited storage falls into that bucket.
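To put rough numbers on the insurance-pool math: at Wasabi’s metered rate of roughly $6/TB per month, $288/yr is $24/mo, which covers only about 4 TB at cost. Anyone parking 40 TB on an unlimited plan is being subsidized by the many users storing far less.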
The other issue with any of these cloud providers is that if one of them goes out of business, there usually is no time to pull your archive down. More than a decade ago a lot of photographers lost their valuable online archives when Digital Railroad went bankrupt. There was a run on their FTP servers (not unlike the SVB cash withdrawals) to get as many files as possible in the two remaining days. Of course they crashed under the onslaught and it was all over. So cloud storage is great for interim archives, or archives you don’t mind actually losing. The risk is higher on Wasabi than DB or AWS because those are diversified businesses. That is not to say they won’t decide to make a sudden change and leave you out of luck. For an archive you truly care about, you want to own the storage, not rent it.
But I get the convenience of the cloud. As in all well-designed archive/backup strategies, you should have data tiers that are handled differently according to their value and retention requirements as well as cost. There is no single storage solution that is the golden key.
A new video about LucidLink Filespaces was just published on the Flame Learning YouTube channel…
That’s a lot of reading into it; I tend to not think about cloud storage this much. I think it’s pretty simple: Dropbox’s cost is very good if you’re using large amounts of data, upload and the interface are stupid simple, and the ability to send people links etc. gives it greater functionality than Wasabi. I have been using it solidly for about 5 years now with 0 complaints. Sometimes I wish it uploaded faster, but they are all fairly slow. As for Lucid, that is taking an existing NAS and putting it in the cloud; I am trying to offload files from my NAS, so that really doesn’t work for me in this scenario.
Big fan of LucidLink if you need cloud storage. I think I mentioned it higher up in the thread. Super simple to use, no contracts, no funny business. Use and pay for what you use, at a reasonable cost.
One additional interesting feature of LucidLink is that it offers snapshots (similar to Time Machine on Mac), so not only can you keep your folders synced, you also get some built-in idiot-proofing against accidentally deleting a file. A synced folder will happily delete your online copy too unless you catch it quickly.
LucidLink brands itself as the CloudNAS, and in some ways it’s a good analogy. The reality is that it shows up as a virtual drive / volume (depending on your OS) that has unlimited size. Think of /Volumes/CloudNAS on your Mac being a 5,000TB drive. Just keep shuffling bytes onto it; it will never ever be full. Yes, you use a local cache drive, but that’s just that – a cache of the files you’re currently accessing. Once it’s full, it swaps out what it isn’t using. Your local drive is not a limiting factor in the storage size of LucidLink. Technically you can use a 100GB cache drive attached to a 5,000TB filespace; in reality a 1TB or 2TB NVMe will be better.
That said, if DB works for you and has for the last 5 years, nothing wrong with that. As the saying goes - if it ain’t broke, don’t fix it.
Yes, LucidLink looks amazeballs. It’s great to see a future where we can effortlessly collaborate with other artists. DB is perfect for what I need now – a repository (graveyard) for old commercial projects. Rarely do I have super complex batches. And even then, rarely does anyone ever return to a comp years after it’s been built. If that becomes a thing, then I will embrace storage on Wasabi. While LTO seems much more failsafe, the headache of maintaining a library on crappy software is wholly unnecessary.
Actually, I could see a use case for Wasabi when syncing local drives at night, for safety purposes. But if your local NVMe drives run in RAID 10, what’s the point? Maybe if a fire or flood engulfs the machine, I guess.
In fact that is exactly what I use LucidLink for. Every evening when I wrap up, I run a folder-sync utility which copies the latest version of all the project files (not sources) to my LucidLink filespace. The camera originals I can always get again, but the project files are where the pain lies, and the client expects results. In a proper backup there should be one off-premise copy if possible. My Lucid filespace is relatively small (1–2TB of project files). Because it also does snapshots, it provides some failsafe in case I overwrite or accidentally delete a project file – I have versions from a few days back. Or if I were the victim of ransomware: they can delete the current files, but not the snapshots. So worst case, I can spin up a new system, get a copy of the sources, retrieve the latest snapshot, and deliver to the client with at most 24 hours lost.
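For what it’s worth, the bare-bones version of that nightly sync can be a single rsync call – a sketch only, assuming a LucidLink filespace mounted at /Volumes/CloudNAS and hypothetical project paths; a dedicated folder-sync utility does the same job with a nicer interface:

```
# Mirror project files (not camera originals) to the LucidLink filespace.
# Paths and exclude patterns are hypothetical; adjust to your own layout.
rsync -a --exclude='sources/' \
      /mnt/work/projects/ /Volumes/CloudNAS/projects/
```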
Part of the risk factor calculation is that I don’t have an office building, but work out of a suite at my home. So fire, flood, and other risk factors are for residential, not large commercial buildings. Not that either one is 100%.
Backups are not everyone’s favorite topic. Early in my career I worked for 8 years in Enterprise Data Storage, so I’m biased.
To keep it super simple, remember the 3-2-1 rule:
- 3 different copies (original + 2 backups)
- 2 different media/systems
- 1 copy offsite
That protects you against the primary risk factors. The easiest implementation is this:
Original / Client files:
- Keep the original drive you received. After copying it, take it offline, but keep it at the office (copy 1)
- Make 1 copy to your working drive (RAID 1, 5, or 10) (copy 2 / media 2)
- Make 1 copy into the cloud or onto a drive that you take to an offsite location (offsite copy 3) – if the client is easy to work with and you know they kept a copy, you can count their copy instead of making one of your own.
Project Files / Renders:
- Originals are on your working drive (copy 1)
- Mirror project files to a second drive (does not have to be RAID, though it doesn’t hurt); should be a separate system (NAS, other computer) (copy 2 / media 2 – independent system)
- Mirror project files to the cloud (offsite copy 3)
Make sure mirrors run once a day.
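One way to schedule those daily mirrors is plain cron; a hypothetical crontab, with placeholder paths and hostnames (rsync here stands in for whatever sync tool you prefer):

```
# Copy 2: mirror to a separate system (NAS) nightly at 2:30
30 2 * * * rsync -a /mnt/work/projects/ nas:/backups/projects/
# Copy 3: mirror to the offsite/cloud volume nightly at 2:45
45 2 * * * rsync -a /mnt/work/projects/ /Volumes/CloudNAS/projects/
```

Note there is deliberately no --delete flag: a mirror that also deletes will cheerfully propagate an accidental deletion to every copy, which is exactly the failure mode snapshots are meant to catch.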
For what it’s worth, I think I’ve settled on a system that works on my budget. Wasabi for nightly backups, managed by the Linux CloudBerry app (now MSP360). It will natively compress and encrypt your data, which is amazing. My Wasabi archive won’t get over 15TB, so I’m maybe spending $100/mo on mirrored backups.
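That checks out: at the same ~$6/TB per month metered rate mentioned above, 15 TB runs about $90/mo, so ~$100/mo is a safe budget.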
When the project goes cold, I compress it using 7zip on Linux, which I’ve found to be the fastest; its CLI is multithreaded and way faster than Keka. For a 1TB project on my Threadripper it only took about 3 hours. Nuts.
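In case it’s useful, the CLI invocation is a one-liner; archive and folder names here are placeholders:

```
# 7z "a" adds to an archive; -t7z selects the 7z format, -mmt=on enables multithreading
7z a -t7z -mmt=on ColdProject.7z /mnt/work/projects/ColdProject/
```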
When that’s finished, I move that 7z archive over to my Dropbox unlimited plan for cold storage. Because there is no storage limit, and the links will go cold anyway, I don’t have to worry about draconian shutoffs.
LTO is hiding on the sidelines for jobs that require it.
Fabulous! Thanks for the update.
That’s a great question! Cloud storage is definitely becoming a preferred option for long-term archiving. Platforms like AWS offer flexible plans, but for artists looking for an affordable and efficient alternative, services like TeraBox are worth exploring. They provide generous free storage and options for long-term backups with minimal costs.
Plus, the convenience of accessing your archives from anywhere and not worrying about physical storage failure is a big bonus. Are you considering any specific AWS storage class, or just exploring options?
Not sure the words ‘free’ and ‘archive’ make good bedfellows. If you look at the recent drama around LucidLink, it’s very clear that cloud storage pricing is a tricky business, and you can end up on the wrong end of it quickly.
This is only a thought experiment, but justify:
client storage vs artist storage…
and justify each…
I think this is the important bit. If a client wants an archive, then you price out what it is going to cost them on AWS or whatever cloud service you want to use, add your margin, then charge them for it.
If you are doing it for your own peace of mind, then what is it worth to you?! If you are doing it in case a client wants revisions, then see above.
Still would love an LTO solution if/when available.
Can anyone tell me the best cloud storage app?