Is anyone using Projectserver with 2026+ on Mac Clients?

honest question:

what is the Database for if flame wants to access all these files all the time via NFS?

the current use of the SQL database in flame 2026+ is a very basic two-column list of Frame UUID and Path to Frame. That is all. Nothing more. No Library structure. No Batch Group contents. No Timeline. Nothing more than Frame UUID and Path to Frame.
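To make the claim concrete, here is a toy sketch of what a bare UUID-to-path map looks like, using sqlite3 as a stand-in for PostgreSQL. The table and column names are made up for illustration; this is not Flame's actual schema.

```shell
# Toy illustration only: sqlite3 standing in for PostgreSQL, and the
# table/column names (frame_map, frame_uuid, frame_path) are invented.
# The point: a two-column UUID -> path map holds no project structure.
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE frame_map (frame_uuid TEXT PRIMARY KEY, frame_path TEXT);"
sqlite3 "$db" "INSERT INTO frame_map VALUES
  ('7f3a-uuid-0001', '/mnt/nas/project/frames/000001.raw'),
  ('7f3a-uuid-0002', '/mnt/nas/project/frames/000002.raw');"
# Resolving a frame is a single indexed lookup, nothing more:
sqlite3 "$db" "SELECT frame_path FROM frame_map WHERE frame_uuid = '7f3a-uuid-0001';"
```

Everything else (libraries, batch groups, timelines) would still live in files on the NFS side.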

wow, great, so amazing, very useful. not

if you make a new project, put a blur node inside batch, hit save project (flame fish → save)
how long does this take?
for me … 5 seconds. aka way way way way too long

Using a version of software newer than 2026.2.1 :wink:, I have an empty project, empty desktop, empty batch. add a blur node, press iterate. it finishes almost instantly, like a second.

Now, I am totally critical of the way in which Flame saves data, Library operations and whatnot. It’s terrible, and just on the threshold of infuriating in 2025. This is a big reason we are not yet on 2026+. ADSK is aware of it, and even though 2026.2.1 has measures to address this, they are still trying to optimize in future versions. The fact that thousands to tens of thousands of tiny files are written to keep track of “the working state of Flame” is a very legacy paradigm, and certainly is not taking advantage of the realtime, tiny-data-optimized nature of PostgreSQL. I’ve never seen Resolve hang for minutes while loading or saving a Timeline, or any module. Yet that is a frequent, consistent occurrence with Flame.

If you have a spare machine with some storage, set up a test NAS using bare metal and RockyLinux with ZFS. Certainly the Qnap could be adding some bullshit here. Also, again, I just don’t believe in using macOS Flame in a studio environment like this. Too many quirks around networking, and all the other weird shit that running Flame on macOS brings. Suck it up, switch to DCV/Teradici and move forward. Parsec is great, but it isn’t enough of a reason to anchor yourself to a platform that is not conducive to your desired business model. Also, you really need to run central authentication to help eliminate bullshit too; your current scenario is like putting honey on a pile of shit and expecting flies not to show up.

compression on zfs, albeit negligible in terms of cpu overhead per file, will accrue latency when saving thousands of small files.

this will be compounded by network, cpu, ram, storage, write speed, etc.
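A quick way to feel the small-file penalty described above: time the creation of a few thousand tiny files on local disk versus an NFS mount. Each file pays per-file overhead (open/close, metadata update, and on NFS at least one network round trip), so the cost scales with file count, not bytes. This is a generic sketch, not tied to any Flame path.

```shell
# Rough small-file latency demo. Run once with dir on local disk, once
# with dir on the NFS mount, and compare the reported times. The per-file
# overhead (syscalls, metadata, NFS round trips) dominates, not the bytes.
dir=$(mktemp -d)
start=$(date +%s%N)
for i in $(seq 1 2000); do
  printf 'x' > "$dir/setup_$i.db"   # 2000 one-byte files, like a spray of tiny setup files
done
end=$(date +%s%N)
echo "2000 small files in $(( (end - start) / 1000000 )) ms"
```

On local NVMe this typically finishes in well under a second; over NFS, every sync round trip multiplies out.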

also, macos is not restricted to nfsv3, although there may be a dependency on resvport, but that’s not hardware.
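For reference, a macOS NFSv4 mount with resvport might look like the following. The server name, export path, and mount point are placeholders, and the rsize/wsize values are just a common starting point, not a recommendation:

```shell
# Sketch of an NFSv4 mount on macOS. resvport is often needed because many
# NFS servers reject requests from non-reserved (>1024) source ports.
# nas.example.lan, /export/flame and /Volumes/flame are placeholders.
sudo mkdir -p /Volumes/flame
sudo mount -t nfs -o vers=4,resvport,rsize=65536,wsize=65536 \
  nas.example.lan:/export/flame /Volumes/flame
```

This is a config fragment that needs a live server; options and server-side export settings both have to agree for v4 to negotiate.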

there is a reason why the instinctual pipeline works.
unsurprisingly, it’s not some random fucking accident and just plugging in cables…

1 Like

I just tried making a random old intel nuc …

so qnap is adding bullshit i guess

intel nuc with a 2.5Gbit nic → stuff saves INSTANTLY! (just like the qnap, but the qnap makes flame sad)

so i guess i need to drill down WHY my proxmox is having issues

[finn@vxfhost testnfs]$ iperf3 -c 192.168.10.229 -u -b 4G -l 1400 -t 10
Connecting to host 192.168.10.229, port 5201


[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-10.00 sec 4.66 GBytes 4.00 Gbits/sec 0.000 ms 0/3571302 (0%) sender
[ 5] 0.00-10.00 sec 2.68 GBytes 2.30 Gbits/sec 0.003 ms 1516514/3571261 (42%) receiver

42% lost? what in the actual world. i think we might have found the culprit… this is a direct iperf between a random machine and the proxmox HOST system…
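A few follow-up commands can narrow down where that 42% UDP loss happens. These are standard diagnostics against the same host from the thread (192.168.10.229); they need the iperf3 server running on the other end, and the interface name will vary:

```shell
# Isolate the UDP loss seen above. Each test answers one question.
iperf3 -c 192.168.10.229 -t 10             # TCP: does throughput also collapse?
iperf3 -c 192.168.10.229 -u -b 1G -t 10    # UDP at a lower rate: does loss disappear?
iperf3 -c 192.168.10.229 -u -b 4G -R -t 10 # reverse direction: which side is dropping?
ip -s link show                            # RX/TX drop counters on this machine
ethtool -S eno1 | grep -i drop             # NIC-level drop counters (name varies)
```

If loss vanishes at 1G and TCP is fine, the receiver simply can't drain 4G of UDP (often virtio/bridge buffers on a VM host); if counters climb on the host, look at the NIC or bridge config.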

1 Like

Great detective work brother - get into it, get involved…


I’ve definitely seen inconsistent performance with ZFS compression enabled. It’s really counterintuitive. In theory, with compression, you are writing/reading less data to disk, so it should be faster. But often it is actually slower, even with fast modern CPUs and lightweight algorithms like LZ4 and Zstd. We still run with zstd-fast since we see a 20-30% space savings, which at our scale can equal 100TB+.
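You can check what compression actually buys on a given dataset before deciding it's worth the latency. The pool/dataset name below is a placeholder:

```shell
# Inspect the compression setting and the measured ratio on a dataset.
# tank/flame is a placeholder for your pool/dataset.
zfs get compression,compressratio tank/flame
# A compressratio of 1.25x corresponds to roughly 20% space saved,
# which is the ballpark of the 20-30% savings mentioned above.
```

Comparing compressratio against observed small-file write latency is the fairest way to judge the trade-off per workload.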

hahah i am trying man…

so I hunted this down. it was the one linux client that shows this behaviour; the macs are actually all ok. at least after a reboot of the proxmox host and a linux kernel upgrade, i now get good data from iperf, straight up 10Gbit and all of that → looking great

however batch saves are still ~5s for a mostly empty batch. i am sad, i don’t know what else to even look into now :frowning:

1 Like

linux…
:rofl:

set up a bare metal file server, even if it’s small. Even a single NVMe should outperform a 10gig network. Try NFSv4 mounts. First try XFS, to eliminate any ZFS issue.

also… MTU on your network is consistent amongst all devices?
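Checking MTU consistency is quick. A mismatch (one device at 9000 while a switch or NIC in the path is at 1500) silently fragments or drops large packets. The target IP is the host from earlier in the thread; `-M do` (Linux ping) sets the don't-fragment bit so oversized packets fail loudly instead of fragmenting:

```shell
# Verify MTU end to end on Linux.
ip link show | grep mtu                     # local interface MTUs
ping -c 3 -M do -s 8972 192.168.10.229      # jumbo probe: 8972 + 28 byte header = 9000
ping -c 3 -M do -s 1472 192.168.10.229      # standard probe: 1472 + 28 = 1500
```

If the 8972-byte probe fails but 1472 succeeds, something in the path is not passing jumbo frames, and every device should be dropped back to 1500 until they all agree.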

There are lots of things to look at. I can’t imagine that a Qnap device is optimized for large realtime throughput.

Of course, the retort is probably, none of the other applications are having issues with performance on the same setup. And to that, I agree. We’ve had many situations where Flame is absolutely fucked, yet Resolve, Nuke, Maya are just chugging along perfectly fine.

1 Like

This is exactly what I’m talking about - scientific method…

Empirical and anecdotal evidence are less valuable

yea, already there. the intel nuc did fine, it was fast.

something with the proxmox vm itself.. made a new one, same thing, something is fucked.

i am going nowhere, no matter what I try or do - saving to the projectserver is super slow.

Pulling my hair out, really… awful

resolve is fast, always fast, running on a toaster, flame is bad.

are you running MTU 9000?

We don’t necessarily save “to a project server”. We have a VM that only serves the DB; all other data is saved to the NAS via NFS.

I don’t recommend actually using a VM as a location for saving bulk data.

1 Like

no, 1500 throughout.

just tested something else

→ Made a NFS share on the Promox Host directly

→ Flame saves are fast

so that boils it down to the VM itself
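For anyone retracing this, the host-side export that worked can be sketched as below. The export path and subnet are placeholders; the thread's subnet appears to be 192.168.10.0/24, but verify your own:

```shell
# Sketch of a plain NFS export directly on the Proxmox host (Debian-based).
# /srv/flame and the subnet are placeholders -- adjust to your environment.
mkdir -p /srv/flame
echo '/srv/flame 192.168.10.0/24(rw,sync,no_subtree_check,no_root_squash)' >> /etc/exports
exportfs -ra                               # re-read /etc/exports
systemctl enable --now nfs-kernel-server   # package/unit name varies by distro
```

That the same share is fast from the host but slow from a VM on the same box points at the virtualized network or storage path (virtio settings, bridge, or disk cache mode), not at NFS itself.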

1 Like

pipeline is nuanced and complex.
part of the complexity is designed and deliberate.

Yea, that’s what i tried with the qnap as project storage → much broken, as we can see

so yea, by “saving project data” I mean my “project home” in 2026. keeping it on the same VM as the project server == shit

honestly, same garbage in 2025. for me it also takes ~5s, on a different VM with completely different settings.

then it seems Qnap is the problem. Replace that garbage anyway.

Go bare metal. TrueNAS, or RockyLinux minimal + ZFS.