I have a 4TB NVMe drive, used as Flame’s ManagedStorage for one particular project. It’s connected to my iMac Pro via Thunderbolt/USB. It reports 3.84 TB capacity, with 3.04 TB currently used.
I was unable to copy the contents to another 4TB drive; the copy failed, saying the drive was full. Upon investigation, the Mac Finder reports ManagedFolder1 as 5.13 TB.
I copied that off to my RAID, and sure enough, it appears to be 5.13 TB.
Anyone have an idea why the NVMe reports it as 3.04 TB? (My goal here is to repurpose the NVMe and put its contents on a different 4TB SSD.)
Is there anything else besides Flame’s ManagedStorage on that drive?
A couple of things I can think of:
Because I have Google Drive installed on my machine, Finder says my user’s Home folder is 234 TB. That’s because 233 TB of that is actually in the cloud. Do you have any cloud services like Dropbox or Lucid on the disk?
It’s possible that soft or hard links to files on the disk could cause the OS to overestimate how much space is taken up. I don’t know how you’d end up with 2 TB worth of those, though, unless you also had backups like Time Machine on the disk.
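To illustrate the hard-link effect, here’s a small sketch in Python (with a throwaway temp folder, not your actual disk) showing how a naive size sum counts hard-linked data once per name, while deduplicating by inode, the way du does, gives the real on-disk figure:

```python
import os
import tempfile

def naive_size(root):
    """Sum st_size of every path: hard links get counted once per name."""
    total = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            total += os.lstat(os.path.join(dirpath, name)).st_size
    return total

def dedup_size(root):
    """Sum st_size once per (device, inode): each file's data counted once."""
    seen = set()
    total = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            st = os.lstat(os.path.join(dirpath, name))
            if (st.st_dev, st.st_ino) not in seen:
                seen.add((st.st_dev, st.st_ino))
                total += st.st_size
    return total

# Demo: one 1 MB file plus a hard link to the same data.
root = tempfile.mkdtemp()
data = os.path.join(root, "media.dat")
with open(data, "wb") as f:
    f.write(b"\0" * 1_000_000)
os.link(data, os.path.join(root, "media_link.dat"))

print(naive_size(root))  # 2_000_000 -- the linked data is counted twice
print(dedup_size(root))  # 1_000_000 -- what is actually on disk
```

If Finder tallies per-name while the drive reports per-inode, that alone could explain a folder “bigger” than its disk.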
No, it’s literally just ManagedFolder1. I did see some invisible files in a dot-mirror, but that only reports as 200 MB.
The only thing I can think of is that I was using it with 2025, and when I upgraded to 2026 I set that drive as the storage path. When I upgraded the project, it may have duplicated files inside the new project media folder, but somehow the duplicates are links/aliases and not really doubling the data, and macOS Finder doesn’t know that?
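If that theory is right, it’s easy to check: any regular file whose link count (st_nlink) is greater than 1 has another name pointing at the same data. A quick Python sketch (demoed on a throwaway folder; the managed-folder path shown is hypothetical):

```python
import os
import tempfile

def find_hardlinked(root):
    """Return (path, nlink, size) for regular files with more than one hard link."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.lstat(path)
            # st_nlink > 1 means another directory entry shares this inode,
            # so per-name size sums will count this data more than once.
            if os.path.isfile(path) and st.st_nlink > 1:
                hits.append((path, st.st_nlink, st.st_size))
    return hits

# Demo with a throwaway folder; in practice you'd point it at the media
# folder, e.g. find_hardlinked("/Volumes/NVMe/ManagedFolder1")  # hypothetical
demo = tempfile.mkdtemp()
orig = os.path.join(demo, "clip.dat")
with open(orig, "wb") as f:
    f.write(b"x" * 1024)
os.link(orig, os.path.join(demo, "clip_copy.dat"))

hits = find_hardlinked(demo)
for path, nlink, size in hits:
    print(path, nlink, size)  # both names show up, each with nlink == 2
```

If the 2026 media folder turns up lots of multi-link files, that would confirm the duplicates are hard links rather than real copies.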
Just a side note and some observations: I see you use rsync on macOS. The built-in version is usually quite old; you may benefit from updating to a newer one through Homebrew or another method. Also, I’ve never seen an rsync run that didn’t benefit from a dry run first (the -n / --dry-run flag). It lets you catch mistakes in your logic or paths before something important gets deleted or overwritten, particularly if you run with a delete option.
Not sure if this applies to Flame files, but one possible explanation could be ‘sparse files’: files that have a specific length but contain gaps that don’t take up storage. They do exist, and they can confuse tools that look at sizes, depending on whether a tool reports the ‘length of the file’ or the ‘actual storage used by the file’.
This is a technical explanation of why these numbers can deviate; not sure it applies in this use case, though.
Sparse files are sometimes used for databases or caches, where you want data to be at specific locations in the file, but not all data is actually used.
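As a concrete illustration, here’s a Python sketch (results depend on the filesystem; APFS does support sparse files): writing a single byte far past the start of a file creates a hole, so the reported length far exceeds the blocks actually allocated:

```python
import os
import tempfile

# Create a ~100 MB "sparse" file: write one byte at offset 100 MB - 1.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.seek(100_000_000 - 1)
    f.write(b"\0")

st = os.stat(path)
apparent = st.st_size          # what 'length of file' tools report
on_disk = st.st_blocks * 512   # what 'storage used' tools report (POSIX 512-byte units)

print(apparent)  # 100_000_000
print(on_disk)   # far smaller, on filesystems that support holes
os.remove(path)
```

A copy tool that reads such a file naively materializes the holes as real zero bytes, which is one way a “3 TB” drive’s contents can land as 5 TB somewhere else.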