Extremely low IOPS to Project server via NFS from macOS

I've got odd, extremely slow-saving issues on my setup here. It's been annoying, and while trying to investigate what's going on … I stumbled across super bad IOPS.

Happy to talk ideas if anyone has any (looking over at Alan here hahah)

Flame 2025.2 project server running on a Proxmox server

Proxmox's underlying VM storage is a SAS SSD array on ZFS.
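(Relevant for later: with a sync NFS export, every write has to be committed via the ZFS intent log, so whether the pool has a dedicated SLOG matters. A quick sanity check if you want to rule out the pool itself; rpool/vmdata is just a placeholder for your dataset name:)

zpool status                                  # a dedicated "logs" vdev = SLOG
zfs get sync,logbias,recordsize rpool/vmdata  # placeholder dataset name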

Bare-metal test, directly from the Proxmox Debian host to the storage:

ProxmoxHost → ZFS

JOB           BS   PATTERN    IOPS    BW_MiB_s  AVG_LAT_us  P99_CLAT_us
seqwrite1M    1M   write      4797    4797      0           0.002
randwrite4k   4k   randwrite  117219  458       0           0.001
randwrite16k  16k  randwrite  63177   987       0           0.001
randwrite64k  64k  randwrite  31207   1950      0           0.001

So, ~5 GB/s sequential writes. Everything looks fast with high IOPS here, certainly enough to save a batch setup quickly…

OK, so let's run the same test from inside the project server VM:

Flameingo002 → ProxmoxHost

JOB           BS   PATTERN    IOPS   BW_MiB_s  AVG_LAT_us  P99_CLAT_us
seqwrite1M    1M   write      3012   3012      0           0.668
randwrite4k   4k   randwrite  20666  81        0           0.09
randwrite16k  16k  randwrite  17313  271       0           0.198
randwrite64k  64k  randwrite  13500  844       0           0.212

We lose a big chunk of IOPS (4k random writes drop from ~117K to ~20K), but it's still ~3 GB/s sequential and still plenty fast.
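(If anyone wants to chase the host-to-VM drop, the virtual disk's cache/aio settings are the usual suspects. A quick look, with 102 as a placeholder VM ID:)

qm config 102 | grep -E '^(scsi|virtio|sata)'   # check for cache=, aio=, iothread=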

Now the same test from a Mac Studio via 10 Gbit (MTU 1500).

(iperf3 tested and confirmed at the full 10 Gbit, of course.)
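(For anyone playing along, the usual pair; 192.168.10.170 is the project server address from my fio script further down:)

iperf3 -s                  # on the project server
iperf3 -c 192.168.10.170   # on the Mac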

macOS client → Project server (Blackmagic Disk Speed Test W: 160 MB/s, R: 420 MB/s)

JOB           BS   PATTERN    IOPS   BW_MiB_s  AVG_LAT_us  P99_CLAT_us  ENGINE
seqwrite1M    1M   write      73     73        13622       132645       psync
randwrite4k   4k   randwrite  22902  89        43          449          psync
randwrite16k  16k  randwrite  4537   71        220         469          psync
randwrite64k  64k  randwrite  2122   133       471         4358         psync

Ooooof, 73 IOPS for sequential writes? OUCH!! What is happening here? That's extremely slow. Also notice the latency: CRAZY, it's like writing to a floppy disk all of a sudden. (The numbers are consistent: 73 IOPS × 1 MiB gives the 73 MiB/s you see, and the ~13.6 ms average latency is exactly 1/73 s per op, i.e. every 1 MiB write waits for a synchronous round trip.)

Let's do another test from the same Mac to an NVMe QNAP that's in the same rack, connected to the same switch infrastructure, etc.

macOS → QNAP NVMe NAS (Blackmagic Disk Speed Test W: 980 MB/s, R: 950 MB/s)

JOB           BS   PATTERN    IOPS   BW_MiB_s  AVG_LAT_us  P99_CLAT_us  ENGINE
seqwrite1M    1M   write      144    145       6908        50594        psync
randwrite4k   4k   randwrite  23480  92        42          408          psync
randwrite16k  16k  randwrite  15633  244       64          313          psync
randwrite64k  64k  randwrite  1646   103       607         4948         psync

Still nothing to write home about, but still twice as fast as the project server?

macOS mount options from nfsstat -m on the Mac client (these are set by Flame, I have no control here):

NFS parameters: vers=3,tcp,port=2049,nomntudp,hard,intr,resvport,negnamecache,callumnt,locks,quota,rsize=32768,wsize=32768,readahead=16,dsize=32768,rdirplus,nodumbtimer,timeo=10,maxgroups=16,acregmin=5,acregmax=60,acdirmin=5,acdirmax=60,nomutejukebox,nonfc,sec=sys
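That wsize=32768 looks small for 10 Gbit, by the way. If you want to experiment outside of Flame's control, macOS supports client-wide defaults in /etc/nfs.conf (see nfs.conf(5)); whether Flame's own mounts actually pick these up is something I'd verify with nfsstat -m afterwards. A sketch, values mine:

# /etc/nfs.conf - test override only; key per macOS nfs.conf(5)
nfs.client.mount.options = rsize=65536,wsize=65536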

On the Project server (also using ADSK defaults) I can see that SYNC is used instead of ASYNC, as they want to protect against corruption, which is fine, but … ~5s to save an empty batch is sort of not OK?

/opt/Autodesk     *(rw,sync)
/mnt/StorageMedia *(rw,no_subtree_check,async)
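For reference, flipping that export to async for a test is just an edit to /etc/exports plus a re-export (standard Linux nfs-utils; async trades crash safety for latency, so test only):

# /etc/exports - temporary test change
/opt/Autodesk *(rw,async)

exportfs -ra   # apply without restarting the NFS server
exportfs -v    # verify the active export options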

And just for fun, I enabled async for a speed test:

macOS → Project server with ASYNC (Blackmagic Disk Speed Test W: 250 MB/s, R: 400 MB/s)

JOB           BS   PATTERN    IOPS   BW_MiB_s  AVG_LAT_us  P99_CLAT_us  ENGINE
seqwrite1M    1M   write      122    123       8143        162529       psync
randwrite4k   4k   randwrite  46187  180       21          281          psync
randwrite16k  16k  randwrite  21795  341       46          239          psync
randwrite64k  64k  randwrite  2760   173       362         1827         psync

2x improvement, much more reasonable.

However, that makes NO difference to Flame project/batch saving speeds at ALL.

I am still looking at ~5s to save this very complex batch:

If anyone wants to play along… here is my script that generates the above reports; change the TESTFILE path to wherever you want to test:

#!/usr/bin/env bash
# fio IOPS/latency report: 1M sequential write plus 4k/16k/64k random writes.
# Needs fio, jq, and column. Point TESTFILE at the storage you want to test.
TESTFILE=${TESTFILE:-/System/Volumes/Data/hosts/192.168.10.170/opt/Autodesk/project/testfinnnn/lol}

# libaio is Linux-only; macOS falls back to psync
ENGINE=libaio
[[ "$(uname)" == "Darwin" ]] && ENGINE=psync

# one fio job: name, pattern, block size, file size -> JSON on stdout
run() {
  fio --name="$1" --rw="$2" --bs="$3" --size="$4" \
    --ioengine="$ENGINE" --direct=1 --filename="$TESTFILE" \
    --runtime=10 --time_based --iodepth=1 --output-format=json
}

TMP=$(mktemp /tmp/fiojson.XXXXXX)
{ run seqwrite1M   write     1M  2G
  run randwrite4k  randwrite 4k  512M
  run randwrite16k randwrite 16k 1G
  run randwrite64k randwrite 64k 2G; } > "$TMP" 2>/dev/null || true

# bail out if fio produced nothing jq can parse
if ! jq -e . >/dev/null 2>&1 < "$TMP"; then
  echo "fio runs failed or produced no JSON. Check ENGINE=$ENGINE and permissions on $TESTFILE"
  cat "$TMP"
  rm -f "$TMP" "$TESTFILE"
  exit 1
fi

# pull IOPS, bandwidth (MiB/s) and latencies (µs) per job and tabulate
{ echo -e "JOB\tBS\tPATTERN\tIOPS\tBW_MiB_s\tAVG_LAT_us\tP99_CLAT_us\tENGINE"
  jq -s -r --arg eng "$ENGINE" '
    .[] | .jobs[] |
    [
      .jobname,
      .["job options"].bs,
      .["job options"].rw,
      (.write.iops|floor),
      ((.write.bw_bytes/1048576)|round),
      ((.write.lat_ns.mean/1000)|round),
      ((.write.clat_ns.percentile["99.000000"]/1000)|round),
      $eng
    ] | @tsv
  ' < "$TMP"; } | column -t -s $'\t'

rm -f "$TMP" "$TESTFILE"

The Project Server really should just be used for the PostgreSQL DB. Actual data should be saved on a NAS.

This is for Flame 2025.2.

But yes, in 2026 saving project data straight to a NAS is faster.

Even so, our legacy Project Servers still only house the old S+W-based database. No actual image data is stored on them.

What causes the slow saves is the saving of the project data that always lives on the project server. Or can I change that for pre-2026?

The stuff that goes into /opt/Autodesk/projects…

My image data and framestore are of course on a separate NAS.

Setups can be set to save anywhere you want at the project creation window.

on 2025? oO

correct

FFS, so that's what the “setup directory” button does.

I must have been blind oO

Thanks, will give it a go and see about any special NFS stuff needed for it…

Low-key laughing my ass off at “that’s what that button does?”

like the one option not buried in config files…

Seriously laughing at myself, I just completely ignored that (probably because anything useful requires config changes, as you said lol).

No idea what in the world constitutes a “setup” and why you would want to share them across projects. I assumed it's something totally different (and it sort of is).

Anyhow, sadly that button isn't the fix either, as it doesn't write batches to that directory when you hit save on the whole project, so that's still slow :frowning:

These are the files that change when you modify something in a single batch on your desktop and then hit save for the whole project; this takes ~5s and that's the root of my problem.

The setups directory defaults to /opt/Autodesk/project, soo let me get this straight:

/opt/Autodesk…

/project/ = setups (like an explicitly saved batch)

/clip/stonefs/.prj = the actual project

I guess that makes TOTAL sense, doesn't it? The project folder is a setups folder, the clip folder is the project folder.

the hangover math GIF

BTW, further investigations yielded nothing; my projects take forever to save, borderline unusable.

No matter what I do on my Proxmox host, performance is completely garbage.

I have a project with 100+ sequences that takes 10 min to autosave. I am going crazy, this can't be normal. WTF.

  1. Can you compare the same project when created on a Flame rather than on the project server?
  2. Are your timelines full of BFX?

It's faster local.

Project has 0 BFX, but a bunch of MKs in action on the timeline.

And I've been working with ADSK for almost 3 months on a crashing problem which prevents use of 2026.x, with no progress. Something is very wrong.

Yeah, can't use 2026 either, and on 2025 I can't change the location where these autosaves are going.

Can I ask one thing? How many seats are doing editorial versus how many are just doing shots?

With 2025, you can point the /opt/Autodesk/clip data to live in a different location if you want:

--dataroot /mnt/swdata

All my shots people are on local projects, not on the server. :slight_smile: They only write open clips and batch setups. It's usually just 1 to a max of 3 concurrent project/timeline users.