hahah…my first thought was “What’s the difference?!” I know there’s a difference, but… anyways. It could just be a poorly written three-word description some marketing guru came up with.
Definitely worth looking into, though. My Wasabi usage is much smaller than yours, so the stakes for me are, well… a lot lower. My (Trademark Pending) “Heavily Compressed Flame Archives” for a year or so of commercials is a pedestrian 7TB. Just pruned a bunch of stuff, and it was as high as 12TB.
Yea, I have a lot of stuff to archive, that’s for sure, but the 7z really helps. Just an FYI: you can do it from the command line on Linux too, which is amazingly fast. If anyone is interested, the line is tar -cf - "file" | 7za a -si -mx=9 "file.7z" where the 9 is the compression level; lower it to 7 for a little less compression.
How does this work out legally? If a client decides to pay you to continue storing the files, and somewhere during that time part or all of the files go corrupt, and the client asks for them back and you can’t provide them because the files are garbage, what then? Are you on the hook, since you’re offering it as a service?
Or are you telling them specifically that it’s only a backup and you aren’t responsible for keeping the only copy? Then wouldn’t that reduce the value of the service? Are you also storing the files on another cloud service or medium to safeguard against this?
I don’t do it for them, I do it for me. If they come back in a year or two and reference the old spot or show, then I can quickly pull it up, and it’s a better experience for us both. If I lose it, then it’s on me and I pay the price. As far as legally, I don’t do it as a line item; the cost is just rolled into my invoices as either hours or rental.
How does it work out legally? Every job is a contract. And depending on what the client and/or project is, I build it into the bid/contract. My world is a little wacky though. I do some direct to brand, some direct to agency, some direct to vfx studio. Different things require different approaches. But for me, the years of holding all the projects for all of eternity just don’t make sense. For those that do, I do. For those that don’t, I don’t.
I’ve caught up with the cloud club here, and am finally ditching LTO as my archival format. Too many issues with ArGest/BRU to keep going at this point. Wasabi seems great. Nice and simple. One thing I am wondering – what’s the preferred upload method? Browser seems iffy. Cyberduck is suggested as a free option, but it’s not great. What are some options that take advantage of bandwidth speed, and can handle large files (2-3TB)? Happy to pay for it. Mac or Linux. Any suggestions appreciated, but keep in mind I’m kind of dumb and command line tools hurt my brain. GUI all da way.
I’ve had good luck with Cyberduck overall, and it’s a tool that saturates links, with the one exception being Wasabi. I had a big client where we needed to transfer 3TB back and forth between LA and NYC via Wasabi, and we had to get senior engineers from Wasabi on the line to debug.
In the end we used the AWS CLI with some custom parameters to saturate the link, and it was fine.
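For reference, the knobs that matter here are the S3 transfer settings in the AWS CLI config. The values and endpoint below are illustrative examples, not the exact parameters Wasabi's engineers supplied:

```shell
# Raise the S3 transfer concurrency in the AWS CLI (values are examples;
# tune for your link speed and latency).
aws configure set default.s3.max_concurrent_requests 32  # parallel chunk transfers
aws configure set default.s3.multipart_chunksize 64MB    # size of each chunk
aws configure set default.s3.multipart_threshold 64MB    # when to go multipart

# Then upload, pointing at your Wasabi region's endpoint (example URL):
aws s3 cp big_archive.tar.7z s3://my-bucket/ \
    --endpoint-url https://s3.us-west-1.wasabisys.com
```

The parallel-chunk setting is what actually overcomes long-haul latency on a coast-to-coast transfer.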
If you’re into Wasabi, the other option is to use LucidLink which offers Wasabi as one of their backends at a good price. And since it’s a CloudNAS it simply gets mounted as a drive on your system and you can drag/drop. The LucidLink client will saturate your link easily. LucidLink eliminated egress fees, so if you ever have to download (not usually part of archiving) you don’t get killed.
That said, cloud is still somewhat expensive for long-term archive because you pay by capacity × time, whereas physical media are a one-time cost per unit of capacity. LTO still comes out ahead on cost at a certain volume, though the line is shifting all the time. AWS has long-term archive tiers like Glacier, but they can be tricky to use (and, incidentally, I believe they are LTO-based, because there’s a delay in retrieval).
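As a rough back-of-envelope for where that line sits; all the prices here are made-up round numbers for illustration, not real quotes:

```shell
# Toy break-even sketch: cloud cost scales with capacity * time,
# tape is (mostly) a one-time cost. All figures are hypothetical.
TB=12                 # archive size in TB
MONTHS=36             # retention period
CLOUD_TB_MONTH=6      # $/TB/month, hypothetical cloud rate
TAPE_FIXED=3000       # $ for drive + software, hypothetical
TAPE_PER_TB=5         # $/TB of tape media, hypothetical

cloud=$((TB * MONTHS * CLOUD_TB_MONTH))
tape=$((TAPE_FIXED + TB * TAPE_PER_TB))
echo "cloud: \$${cloud}  tape: \$${tape}"
```

With these made-up numbers, 12TB for three years still favors cloud; scale TB up and tape wins, because the fixed drive cost amortizes while the monthly cloud bill keeps growing.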
I agree that LTO is at times a pain, because most of the software for it sucks. But that matters more for incremental backups and more sophisticated setups. If you’re just archiving, a simple LTFS setup that mounts the tape as a volume on your system can be quite easy to use.
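For the mount-it-as-a-volume approach, the open-source LTFS tools work roughly like this. This is a sketch only; the device path and mount point are examples, and exact flags vary between LTFS implementations and drive models:

```shell
# Sketch: format a cartridge for LTFS, then mount it like a disk.
# /dev/nst0 and /mnt/ltfs are example values for your setup.
mkltfs --device=/dev/nst0             # one-time format of the cartridge
mkdir -p /mnt/ltfs
ltfs -o devname=/dev/nst0 /mnt/ltfs   # mount; now cp/rsync to it like a drive
```

Once mounted, any file manager or drag-and-drop works, which is the "quite easy to use" part.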
I use a combination of LTO and Cloud. I use LucidLink for the daily project cloud backup and LTO for the archive once the job is done.
One other thing to consider is ransomware-hardened archives. The latest generation of ransomware has been known to figure out your cloud storage and wipe it too, which is why some cloud providers now offer immutable storage. Or rely on LTO or other local storage that is truly offline and thus cannot be hijacked.
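On the immutable-storage point: with S3-compatible providers that support Object Lock (Wasabi is one), the setup looks roughly like this. Bucket name, endpoint, and retention period are example values:

```shell
# Sketch: create a bucket with S3 Object Lock enabled, then set a
# default retention so objects can't be deleted or overwritten during
# the window, even with valid credentials. Values are examples.
aws s3api create-bucket --bucket my-immutable-archive \
    --object-lock-enabled-for-bucket \
    --endpoint-url https://s3.wasabisys.com

aws s3api put-object-lock-configuration --bucket my-immutable-archive \
    --object-lock-configuration \
      'ObjectLockEnabled=Enabled,Rule={DefaultRetention={Mode=COMPLIANCE,Days=365}}' \
    --endpoint-url https://s3.wasabisys.com
```

COMPLIANCE mode is the strict one: nobody, including the account owner, can shorten the retention, which is exactly the property that defeats ransomware with stolen keys.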
FileZilla Pro connects to cloud storage quite easily if that’s your style.
Or, the even better slightly upgraded version is to enter the Synology ecosystem. Its Hyper Backup is pretty solid at deduplication and compression, although obvs not as good as 7zing the whole lot. But Hyper Backup to Wasabi every night at 6pm, plus snapshots? Dope.
I will look into these. I did try FileZilla, but had issues getting it to install on Rocky. Again, not super Linux savvy here. Maybe Synology or LucidLink is the way to go. Cyberduck seems to crawl on larger uploads. I guess that’s the saturation issue.
update: I did try FileZillaPro, but it didn’t allow me to configure an S3 connection, so got stuck there. Maybe all for naught, if Cyberduck is essentially the same thing.
Would love to keep LTO running, but the need to go back to projects from over 3 years ago is just not there. I keep Masters and Generics on Dropbox – and 99% of the time, it’s all that’s needed.
Randy, do you compress 7z within Linux? Or do you use software? I’ve tried within Rocky, and it takes way too long.
In my freelancing days I used Chronosync to get my stuff onto Wasabi. It was pretty simple, create your bucket(s) on the wasabi website, then link a folder to it/them that updates with every file system change. I wasn’t compressing anything, because I couldn’t be bothered, but I suppose you could zip things before dropping them into the sync’d folder, or set up some sort of cascading line of chronosync jobs that compresses things, places them in the watch folder, and syncs that to Wasabi.
Super handy, and once I set it up, I never really thought about it again. Pretty sure it’s still running.
Did it saturate my bandwidth? No idea, but it was pretty quick on a 1Gb Fios connection.
I seem to be having an issue with the FileZillaPro installation. The version that installs doesn’t have S3 as an option, despite paying / downloading / installing the Pro version. Reaching out to the FZP forum for answers on that.
I did try Chronosync. Pros: very easy to use, lots of syncing features, quick setup. The only issue is that I’m seeing a 30Mbps upload when our fiber u/l connection is around 300. Seems to be about the same as Cyberduck’s speeds. I don’t see a setting that is capping bandwidth, and I am connected to the closest storage region, so why is it uploading so slowly? Is that normal for Wasabi?
Looked all over the app. Couldn’t figure out why Cyberduck was so friggin slow. Chronosync might be more than what I need, and its uploads crawled as well. I think the best solution is to simply compress 7z in Keka (love it, it’s quite fast), then move projects via ye ole web browser. Seems to work just fine and the speeds are near perfect. I was finally able to install FileZillaPro, but it would crash during setup with a segmentation fault. I’m not CLI savvy, so browser wins. If anyone has a better system, would love to hear.
Was that slow speed with Cyberduck connected to Wasabi?
If so, that could be because Cyberduck doesn’t ship with a good Wasabi profile, or at least didn’t a year ago. That’s where I needed the Wasabi engineers to tweak the S3 API parameters to get to proper speed. The defaults work with AWS, G-Drive, and Dropbox.
Also, in Cyberduck the key configuration for speed is whether it uses fragments (multipart chunks) to upload, meaning it makes many parallel requests to upload/download chunks. That is the primary way to overcome latency in the connection and saturate the link.
On the upload path that is usually the default, and S3 will reassemble the chunks into the full object at the end. On the download path, Cyberduck (and presumably other tools too) creates a folder where each chunk is a separate file. Once everything is downloaded, it reassembles this into one file, which is why these tools often hover at 99% complete for a while. That’s the reassembly step. For some reason Cyberduck is very slow doing this on a NAS, so I always download to a fast local drive and then copy the final files over to the NAS when that’s the final destination.
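That chunked-download behavior can be mimicked by hand with ranged S3 GETs, which is a decent way to see what the GUI tools are doing under the hood. Bucket, key, and the 1 MiB split point below are all placeholder values:

```shell
# Sketch of a parallel ranged download: fetch two byte ranges at once,
# then concatenate. Same idea as Cyberduck's chunk folder.
# Bucket, key, and the split offset are placeholders.
aws s3api get-object --bucket my-bucket --key big.tar.7z \
    --range bytes=0-1048575 part0 &
aws s3api get-object --bucket my-bucket --key big.tar.7z \
    --range bytes=1048576- part1 &
wait
cat part0 part1 > big.tar.7z   # reassemble on a fast local disk, not the NAS
```

The final cat is the "hovering at 99%" step; doing it on local SSD and copying to the NAS afterward avoids the slowdown described above.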