SMB vs NFS

This has been popping up for some time, and Autodesk advises using NFS.

As I have a new all-NVMe storage server for a shared stone, I wonder which is actually better for my Mac clients?

I cache ProRes 4444 or DWAB EXRs.

Blackmagic Disk Speed Test showed that SMB was way, way ahead in terms of speed: I got 10 Gbit line speed with SMB and more like 300/700 MB/s on NFS. Flame's internal benchmark showed line speed for both SMB and NFS.

I couldn't tell in a playback test which one was faster or had less latency; it really was… the same?

It's probably fine to use either, but does anyone know why ADSK says to use NFS?
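For anyone who wants to reproduce the comparison, here is a minimal sketch of the two mounts on macOS; the server name, share/export paths, and mount points are placeholders:

# create mount points
sudo mkdir -p /Volumes/stone_nfs /Volumes/stone_smb

# NFS: vers, rsize and wsize are the usual first tuning knobs
sudo mount -t nfs -o vers=4,tcp,rsize=65536,wsize=65536 server:/export/stone /Volumes/stone_nfs

# SMB mount of the same data, for an apples-to-apples test
mount_smbfs //user@server/stone /Volumes/stone_smb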

I don't know a ton in this area, but for NFS have you enabled “jumbo frames”? It may bring up your speed in a generic speed test.
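For what it's worth, macOS sets the MTU per interface; a sketch, assuming en0 is the 10GbE port:

# check the current MTU
networksetup -getMTU en0

# set a fixed 9000-byte MTU instead of "Automatic"
sudo networksetup -setMTU en0 9000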


I would expect that it's related to the fact that Network File System is mature and, despite some edge cases, almost ubiquitously functional.

Server Message Block can be an unpredictable rat’s nest of ACL complication and drama on Linux, despite being relatively trivial on macOS and Windows.

SMB Direct with the right cards will permit line speed throughput with very low CPU overhead.

I'm curious to know if you've experimented with Thunderbolt networking and SMB on your Flames, Finn?

Take a look in this file to see how you might configure multiple interfaces to carry metadata and data separately:

/opt/Autodesk/cfg/network.cfg

You may find this section interesting:


[Interfaces]

# Comma separated list of the local network interfaces to be used for
# metadata operations. Metadata operations are usually small IO operations that
# may degrade performance when done on a high speed network adapter when a
# lower speed adapter can be used instead. If a metadata operation is too big
# for this lower speed adapter, the IO operation falls back to the adapters
# defined in the "Data" token described below.
#
# The order of the interfaces in the list is the order in which they
# will be tried. '*' can be used to denote any interfaces. The loopbacks are
# implicit and always preferred.
#
# If left empty, all active interfaces will be used.
#
# Example: Metadata=eth2,eth1,*   (on Linux)
#          Metadata=en1,en0,*     (on Mac)
#          Metadata=eth2          (only eth2 will be used)
#
# Note: Stone+Wire uses a different interface mapping that is defined in
#       /opt/Autodesk/sw/cfg/sw_framestore_map
#
#Metadata=

# Comma separated list of the local network interfaces to be used for
# large IO operations (data or metadata). See also the "Metadata" token for
# additional information.
#
# The order of the interfaces in the list is the order in which they
# will be tried. '*' can be used to denote any interfaces. The loopbacks are
# implicit and always preferred.
#
# If left empty, all active interfaces will be used.
#
# Example: Data=ib1,eth2,*    (on Linux)
#          Data=en1,en2,*     (on Mac)
#          Data=ib1           (only ib1 will be used)
#
# Note: Stone+Wire uses a different interface mapping that is defined in
#       /opt/Autodesk/sw/cfg/sw_framestore_map
#
#Data=
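Filled in, those two tokens might look like this on a Mac with a 1GbE management port and a 10GbE data port; en0 and en5 here are assumptions for illustration:

[Interfaces]
Metadata=en0,*
Data=en5,*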


I have yet to run Thunderbolt; that's next, but I don't really need to deal with it at the moment.

1 GB/s is plenty for our use case with ProRes and DWAB caches. It's perfectly snappy; it remains to be seen whether we experience any slowdowns in production.

Might just go with NFS and, if it doesn't work out, try SMB.

On my test rig, Thunderbolt networking was about 1.6 GB/s.

@finn - I admire your approach to experimentation brother, and applaud it.

There are many factors that could be affecting the situation:

Blocksize
RAID Type
RAM Allocation (if possible)
RDMA (I’m not sure if macOS permits this but I’m sure some clever wag has hacked it)
Bus Type
Cable Type

And then you have to configure both ends.
What is the read speed of the source volume?
What is the write speed of the target volume? (A quick way to test both is sketched below.)
What is the data type being sent?
Is it constant or variable?
Is there pre-read happening?
Is there caching?
Are the machines doing IO solely, or performing multiple tasks?
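If it helps, a quick way to put numbers on those read/write questions is a synthetic fio run on each end; a sketch, with test directories and sizes as placeholders:

# sequential read from the source volume, 1 MB blocks
fio --name=read_test --rw=read --bs=1M --size=10G --directory=/mnt/source --direct=1

# sequential write to the target volume
fio --name=write_test --rw=write --bs=1M --size=10G --directory=/mnt/target --direct=1

# note: drop --direct=1 on macOS, where O_DIRECT isn't supported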

If you’re using ProRes or DWAB, and your source framestore is set to such and shared=true, do you actually need to transfer any media at all?

Can you do shared workspaces on the project server and no transfer?
(You may need to rope @ALan in!!!)

How well does that work with a Mac project server and multiple Mac ‘clients’?

No doubt, you will do an exhaustive investigation, and present your results to the community as you usually do.

Godspeed @finnjaeger - may the force be with you brother.

Start Here:

  1. What OS on your NVMe Server?
  2. How Are you Achieving RAID? Hardware/Software/Yolo Stripe?
  3. What File System?
  4. What Network Card?
  5. What Network Switch?
  6. Jumbo Frames on your server NIC and switch?
  7. What Network medium? Copper/Fiber?
  8. Are your macOS machines mounting NFS as v3 or v4?
  9. Have you set jumbo frames specifically on macOS? You need to; don't rely on Automatic. (A quick way to verify the jumbo path is sketched below.)
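Worth adding: once jumbo frames are enabled end to end, a don't-fragment ping with a payload just under the 9000-byte MTU (8972 bytes, i.e. 9000 minus 28 bytes of IP/ICMP headers) quickly shows whether the whole path actually carries them; the hostname is a placeholder:

# macOS: -D sets the don't-fragment bit
ping -D -s 8972 storage-server

# Linux equivalent
ping -M do -s 8972 storage-server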
  1. What OS on your NVMe Server?

Linux

  2. How Are you Achieving RAID? Hardware/Software/Yolo Stripe?

ZFS software RAID 0, because it's just cache; nightly backups to external HDD

  3. What File System?

ZFS

  4. What Network Card?

Intel 10GbE, and 2 Thunderbolt buses

  5. What Network Switch?

Ubiquiti 10Gbit whatever enterprise

  6. Jumbo Frames on your server NIC and switch?

Nope

  7. What Network medium? Copper/Fiber?

Copper

  8. Are your macOS machines mounting NFS as v3 or v4?

v4

  9. Have you set jumbo frames specifically on macOS? You need to; don't rely on Automatic.

No

I need to look into whether I really do want jumbo frames; I probably do.

There is no RDMA/SMB Direct support on macOS, AFAIK.

The server is all NVMe, full of 990 Pros, so it's flipping fast as hell.
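For anyone curious what that looks like, here is a hypothetical sketch of a striped (RAID 0) pool over four NVMe devices, with properties often suggested for large media files; the pool name and device paths are made up:

# raid 0: losing any device loses the pool, acceptable for a cache with nightly backups
zpool create -o ashift=12 stonecache /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# large records suit big sequential frames; atime off avoids needless metadata writes
zfs set recordsize=1M stonecache
zfs set compression=lz4 stonecache
zfs set atime=off stonecache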

Regarding jumbo frames:

“We recommend using Jumbo Frames on all devices in the network, including the clients. If a device uses the standard MTU size of 1500 bytes, connectivity issues may occur.”

That's from UniFi, and it's one of the reasons my previous workplace didn't want this, so I refrained from doing it, since the switch does more than just the storage-network stuff.

That is misleading.

If the switch is set to accept jumbo frames, it still passes all packets. But if machine A is using jumbo frames and machine B is not, then you have problems. I think the fact that switches need to have jumbo enabled (instead of it just being on by default) is some legacy crap. There's no harm in enabling it, unless for some reason you want to ban jumbo traffic.

The right way to do this, is to have 2 networks whether physical or logical (VLAN).

  1. Normal traffic, internet, administrative, local LAN type stuff. Standard MTU 1500. This way you know you’ll always be able to hit that host without any special config.

  2. Your high-speed data network. MTU 9000. Every device on this network needs to have Jumbo enabled.

And machines/devices can be members of both networks, as long as the interfaces have the appropriate MTU set.

An example for us, we have a copper 1gig network used for the basic stuff, SSH, machine configuration, normal traffic.
192.168.1.xxx “Automatic MTU” (1500).

We have a 25gig fiber network, just for high speed data.
192.168.10.xxx “manually defined MTU” (9000). Setting this on Windows is in some obscure hardware settings tab, and the value is 8972 or something like that; look it up.
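On the Linux server side, the equivalent is one command per interface; a sketch (the interface name is an assumption, and it won't survive a reboot without your distro's network config):

# set and verify a 9000-byte MTU on the data NIC
sudo ip link set dev enp1s0f0 mtu 9000
ip link show enp1s0f0 | grep mtu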

If you were to use a single physical network interface, you would do all this via VLANs, which are quite easy on UniFi.

Regarding the ADSK network.cfg, just set it all to the high-speed interface. Most of the descriptions in their cfg files and documentation are from like 20 years ago, when hardware was lame.

I know nothing about Thunderbolt networking.


Shout out to the InfiniBand interface in the example config!


Sadly, VLAN routing on UniFi is utter garbage performance-wise; it doesn't do inter-VLAN routing on the switch, but on the router… might need a second switch.

https://community.ui.com/questions/Inter-Vlan-routing-with-UniFi-Switches/baffa5cf-38bc-4d50-a370-3ab8b9e053b8

no dude, you don’t need to route between VLANs. They are separate data paths.


If I want a producer on the MTU 1500 VLAN to be able to access stuff on the MTU 9000 VLAN, I do, no?

That is why you have 2 interfaces: normal and high-speed. Physical (2 NICs) or logical (VLANs).
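For the logical option on a Mac workstation, a tagged VLAN interface can sit on the existing NIC; a sketch, assuming en0 is the physical port and 10 is the high-speed VLAN tag:

# create a tagged VLAN interface on top of en0
sudo networksetup -createVLAN DataVLAN en0 10

# confirm it exists
networksetup -listVLANs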


Well, yes, but it all sounds like added complexity for a very small benefit? I need to test 9000 MTU and see if it actually does anything for me.

It’s not a small benefit. Jumbo frames massively reduce overhead on the switch.

Really not much more complexity if you already have a 10gig NIC and cable run. Making the VLAN in UniFi takes 30 seconds; then just set up the new interface on the workstation and server.

The highspeed VLAN doesn’t even need internet/dns/dhcp.

At the same time, there is a forever debate over whether jumbo frames are still relevant with modern hardware.


You can also route traffic between two subnets without using VLANs.
If your switch is layer 3, you can use it to connect both subnets.

Here are the steps:

Configure IP Addresses:

Assign IP addresses to devices in each subnet.

Subnet 1 (192.168.1.0/24):
Hosts: 192.168.1.1, 192.168.1.2, etc
Default gateway: 192.168.1.254 (your layer 3 switch)

Subnet 2 (192.168.10.0/24):
Hosts: 192.168.10.1, 192.168.10.2, etc
Default gateway: 192.168.10.254 (same switch)

Enable IP Routing:

On your router or layer 3 switch, enable IP routing (sometimes called “IP forwarding”).
This forwards packets between subnets.

On the 192.168.1.0/24 router:

ip route 192.168.10.0 255.255.255.0 192.168.1.254

On the 192.168.10.0/24 router:

ip route 192.168.1.0 255.255.255.0 192.168.10.254

Test Connectivity:

Ping devices across subnets to verify connectivity.
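If the box doing the routing is a Linux machine rather than a managed switch, enabling forwarding is a single sysctl; a sketch (the static routes above still apply):

# enable IPv4 forwarding for the running system
sudo sysctl -w net.ipv4.ip_forward=1

# persist across reboots
echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf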

That's the problem: my switch is layer 2 :smiley:
