I've got odd, extremely slow-saving issues on my setup here. It's been annoying me, so I'm trying to investigate what's going on … and stumbled across some super bad IOPS.
Happy to talk ideas if anyone has any (looking over at Alan here hahah)
Flame 2025.2 project server running on a Proxmox server
Proxmox underlying VM storage is a SAS SSD array on ZFS
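(For anyone poking at their own setup: these are the host-side ZFS bits I'd sanity-check first. Purely a sketch, the dataset name rpool/vmdata is just a placeholder for whatever actually backs your VM disks:)

# on the Proxmox host: pool health, plus the properties that matter for sync-write latency
zpool status -v
zfs get sync,recordsize,compression,logbias,atime rpool/vmdata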
Bare-metal test, direct from the Proxmox/Debian host to the storage:
ProxmoxHost → ZFS
JOB BS PATTERN IOPS BW_MiB_s AVG_LAT_us P99_CLAT_us
seqwrite1M 1M write 4797 4797 0 0.002
randwrite4k 4k randwrite 117219 458 0 0.001
randwrite16k 16k randwrite 63177 987 0 0.001
randwrite64k 64k randwrite 31207 1950 0 0.001
so ~5GB/s sequential writes. Everything looks fast with high IOPS here, certainly enough to save a batch setup fast…
OK, so let's do the same test from inside the project server VM:
Flameingo002 → ProxmoxHost
JOB BS PATTERN IOPS BW_MiB_s AVG_LAT_us P99_CLAT_us
seqwrite1M 1M write 3012 3012 0 0.668
randwrite4k 4k randwrite 20666 81 0 0.09
randwrite16k 16k randwrite 17313 271 0 0.198
randwrite64k 64k randwrite 13500 844 0 0.212
we're losing a lot of IOPS through the virtualization layer (4k randwrite drops from ~117K to ~21K), but it's still ~3GB/s sequential and plenty fast.
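(Side note for anyone reproducing this: the VM-side numbers depend a lot on how the virtual disk is attached. On the Proxmox host you can dump the relevant bits of the VM config. VMID 102 here is just an example:)

# shows cache=, aio=, iothread= and the scsihw controller for the VM's disks
qm config 102 | grep -Ei 'scsihw|scsi|virtio|sata|ide'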
Same test now from a Mac Studio via 10Gbit (MTU 1500)
(iperf3 tested and confirmed at 10Gbit, of course)
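(In case anyone wants the exact check, this is all I mean by that. The IP is the project server from the script further down:)

# from the Mac: confirm the raw network path actually does ~10Gbit
iperf3 -c 192.168.10.170 -t 10 -P 4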
MacOS client → Project server (Blackmagic Disk Speed Test W:160MB/s R:420MB/s)
JOB BS PATTERN IOPS BW_MiB_s AVG_LAT_us P99_CLAT_us ENGINE
seqwrite1M 1M write 73 73 13622 132645 psync
randwrite4k 4k randwrite 22902 89 43 449 psync
randwrite16k 16k randwrite 4537 71 220 469 psync
randwrite64k 64k randwrite 2122 133 471 4358 psync
ooooof, 73 IOPS for sequential writes? OUCH!! What is happening here? That's extremely slow. And notice the latency, CRAZY: at iodepth=1, ~13.6ms average per 1MiB write works out to 1/0.0136 ≈ 73 IOPS, so every single write sits and waits ~13ms. It's like writing to a floppy disk all of a sudden.
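That smells like every write being pinned to a synchronous round trip over NFS. One way to confirm, nothing Flame-specific, just standard Linux tooling: diff the server-side NFSv3 op counters around a save and see how the commit count tracks the write count.

# on the project server: snapshot NFSv3 server op counts around a save
nfsstat -s -3    # before
# ... save the batch from the Mac ...
nfsstat -s -3    # after: compare the write vs commit counters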
let's do another test from the same Mac to an NVMe QNAP that's in the same rack, connected to the same switch infrastructure, etc.
MacOS → QNAP NVMe NAS (BM Disk Speed Test W:980MB/s R:950MB/s)
JOB BS PATTERN IOPS BW_MiB_s AVG_LAT_us P99_CLAT_us ENGINE
seqwrite1M 1M write 144 145 6908 50594 psync
randwrite4k 4k randwrite 23480 92 42 408 psync
randwrite16k 16k randwrite 15633 244 64 313 psync
randwrite64k 64k randwrite 1646 103 607 4948 psync
still nothing to write home about, but still twice as fast as the project server?
MacOS mount options from nfsstat -m on the Mac client (these are set by Flame, I have no control here):
NFS parameters: vers=3,tcp,port=2049,nomntudp,hard,intr,resvport,negnamecache,callumnt,locks,quota,rsize=32768,wsize=32768,readahead=16,dsize=32768,rdirplus,nodumbtimer,timeo=10,maxgroups=16,acregmin=5,acregmax=60,acdirmin=5,acdirmax=60,nomutejukebox,nonfc,sec=sys
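The rsize/wsize=32768 jumps out at me, that's tiny for a 10GbE link. Since Flame owns the mount, the only handles I know of are a manual test mount, or client-wide defaults in /etc/nfs.conf that new mounts might pick up. Both only a sketch: the IP/path are from my setup above, and whether 64k actually gets negotiated end to end is its own question.

# manual test mount from the Mac with bigger transfer sizes
sudo mkdir -p /private/tmp/adsk_test
sudo mount -t nfs -o vers=3,tcp,rsize=65536,wsize=65536 192.168.10.170:/opt/Autodesk /private/tmp/adsk_test

# or set client-wide defaults in /etc/nfs.conf, e.g.:
# nfs.client.mount.options = vers=3,tcp,rsize=65536,wsize=65536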
On the project server (also using ADSK defaults) I can see that sync is used instead of async on the Autodesk export, as they want to protect against corruption, which is fine, but … ~5s to save an empty batch is sort of not ok?
/opt/Autodesk *(rw,sync)
/mnt/StorageMedia *(rw,no_subtree_check,async)
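(For the async test below I just edited /etc/exports and re-exported, standard nfs-kernel-server stuff:)

exportfs -ra    # re-read /etc/exports and apply
exportfs -v     # verify which flags are actually live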
and just for fun, I enabled async just for a speed test:
MacOS → Project server with ASYNC (BM speed test W:250MB/s R:400MB/s)
JOB BS PATTERN IOPS BW_MiB_s AVG_LAT_us P99_CLAT_us ENGINE
seqwrite1M 1M write 122 123 8143 162529 psync
randwrite4k 4k randwrite 46187 180 21 281 psync
randwrite16k 16k randwrite 21795 341 46 239 psync
randwrite64k 64k randwrite 2760 173 362 1827 psync
2x improvement, much more reasonable.
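If sync has to stay on for safety, the textbook ZFS move would be a fast, power-loss-protected SLOG so synchronous commits land on a low-latency log device instead of the main vdevs. Sketch only: the device path is a placeholder, and sync=disabled is strictly a throwaway diagnostic, since it deliberately gives up the crash safety that sync is there for.

# on the Proxmox host: add a fast SSD/NVMe as a separate ZFS intent log
zpool add rpool log /dev/nvme1n1    # placeholder device

# or, ONLY for a disposable test, kill sync on the dataset and retest
zfs set sync=disabled rpool/vmdata
zfs set sync=standard rpool/vmdata    # put it back afterwards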
However, async makes NO difference in Flame project/batch saving speeds at ALL.
I am still looking at ~5s to save this very complex batch.
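Next step for me is to watch what Flame actually does on disk during a save, because if it's fsync-ing after every tiny write, no export option will help. On the Mac that's roughly:

# trace Flame's filesystem syscalls while saving a batch (standard macOS tool)
sudo fs_usage -w -f filesys flame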
If anyone wants to play along… here is the script that generates the above reports. Change the testfile path to wherever you want to test:
# where the test file gets written: point this at whatever mount you want to test
TESTFILE=${TESTFILE:-/System/Volumes/Data/hosts/192.168.10.170/opt/Autodesk/project/testfinnnn/lol}
# libaio on Linux, psync on macOS (libaio doesn't exist there)
ENGINE=libaio; [[ "$(uname)" == "Darwin" ]] && ENGINE=psync
# one fio job: 10s time-based, direct I/O, queue depth 1, JSON output
run() { fio --name="$1" --rw="$2" --bs="$3" --size="$4" \
  --ioengine="$ENGINE" --direct=1 --filename="$TESTFILE" \
  --runtime=10 --time_based --iodepth=1 --output-format=json; }
TMP=$(mktemp /tmp/fiojson.XXXXXX)
# the four tests behind the tables above
{ run seqwrite1M write 1M 2G; run randwrite4k randwrite 4k 512M; \
  run randwrite16k randwrite 16k 1G; run randwrite64k randwrite 64k 2G; } > "$TMP" 2>/dev/null || true
# bail out if fio produced no valid JSON (wrong engine, permissions, ...)
if ! jq -e . >/dev/null 2>&1 < "$TMP"; then
  echo "fio runs failed or produced no JSON. Check ENGINE=$ENGINE and permissions on $TESTFILE"
  cat "$TMP"; rm -f "$TMP" "$TESTFILE"; exit 1
fi
# pull IOPS / bandwidth / latency out of the JSON into a tab-separated table
{ echo -e "JOB\tBS\tPATTERN\tIOPS\tBW_MiB_s\tAVG_LAT_us\tP99_CLAT_us\tENGINE"
  jq -s -r --arg eng "$ENGINE" '
    .[] | .jobs[] |
    [
      .jobname,
      .["job options"].bs,
      .["job options"].rw,
      (.write.iops|floor),
      ((.write.bw_bytes/1048576)|round),
      ((.write.lat_ns.mean/1000)|round),
      ((.write.clat_ns.percentile["99.000000"]/1000)|round),
      $eng
    ] | @tsv
  ' < "$TMP"
} | column -t -s $'\t'
# clean up the scratch JSON and the test file itself
rm -f "$TMP" "$TESTFILE"
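(If you save it as e.g. fio_report.sh, the name is arbitrary, you can aim it at any mount like this:)

TESTFILE=/Volumes/SomeMount/fio_testfile bash fio_report.sh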


