NVIDIA vWS

Just wondering if anyone has had a play around with NVIDIA vWS as a remote platform? Not sure if Flame would run on it or not (Vulkan is supported) but I have seen test results with both Resolve and Nuke on it which were impressive.

Thought one of the tech savvy amongst us may have experience with it as a platform. Doubt it comes cheap, even though it talks about value.

Why?? It’s a cloud workstation like any other. Nothing special here. Maybe price, but whatever.

You can run it locally. It could be an easier solution to administer than a VM stack, or it may be no different, hence my interest in whether anyone has played around with it. The bigger issue would be the Remote Desktop solution, but I was also interested in whether there is a native client for vWS.

Is it really a proprietary Remote Desktop app/Teradici replacement? The literature doesn’t seem to be clear on that.

If all you want to do is virtualize your workstations, Proxmox can do that.
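
For reference, a minimal sketch of what the Proxmox route looks like for straight passthrough of one GPU to one VM. The VM ID 100 and PCI address 0000:01:00 are placeholders, and IOMMU needs to be enabled on the host first:

```
# Assumes IOMMU/VT-d is enabled and the GPU is bound to vfio-pci.
# VM ID (100) and PCI address (0000:01:00) are placeholders.
qm set 100 --machine q35 --bios ovmf
qm set 100 --hostpci0 0000:01:00,pcie=1,x-vga=1
```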

My understanding is that it is more a VM replacement than a Remote Desktop solution, but that it gets better GPU performance than a standard VM stack does due to the way it runs.

The majority of what I have seen is CAD/Revit-based performance improvements on the same hardware, but I was wondering if that would flow through to what we do. Ray tracing performance is better. I’m guessing there is a layer of communication removed by running the virtual workstation directly on the GPU?

I haven’t found enough info on it though to be sure.

But why deal with all that crap? Just bare metal and be done with it.

Depends on the hardware, right? It could be more cost effective for remote workstations to set up servers with Tesla GPUs using something like vWS. I know DNEG have got a setup utilising vWS.

Scaling. If you need 200,000 workstations, bare metal is a dangerous proposal vs having clusters of virtualized ones.

Very likely not relevant for any company’s Flame deployment :joy:

Also cheaper than renting them on AWS, as those resources are all taken up by AI research.

I’ve never been in a situation where having <100% of a machine’s performance available was an acceptable scenario.

Sliced virtualization doesn’t make sense for a Flame workload.

I think that depends on the work you do and what the other slice would be used for. I can see someone running Windows in the other slice to switch to Adobe apps as needed for some assets rather than dual booting. Not in a big pipeline, but in smaller setups. In this case the slices aren’t necessarily competing for resources in a way that would be unacceptable.

Not a fan of it, which is why I maintain separate systems. But it’s a valid scenario.

Not everyone runs Flame in a maxed out workload all the time.

I am with Alan on this one. The overhead of running virtual workstations for smaller shops is way more than just buying an additional machine; it’s peanuts to get a Mac mini for Adobe or whatever.

You would need crazy clusters with multiple (as in more than 2) Epyc CPUs and NVIDIA A100s or whatever giant GPUs to really make any use of virtualisation and scalability.

There is a reason cloud workstations cost so much money if you don’t use 2,000 at the same time…

Not disagreeing. There are many tradeoffs. My comment above was just a reaction to ‘never … an acceptable scenario’. Not everything you do on Flame slurps up every available ounce of processing power to the point where anything less loses you productivity and money. Some pipelines for sure, others not necessarily.

This is also different if you’re a facility where everything is task optimized, or if you’re an individual or small shop where a lot more multi-tasking goes on. Not everyone has 20 Flames, or 200,000 render nodes.

I’m generally more in the camp of having separate workstations for different tasks. It’s useful to have a mix of Win/Mac/Linux systems, and different hardware configs for different tasks.

For smaller shops an extra system is still an expense. Not just for the hardware, but also all the stuff that goes with it, from desk space, to I/O, to OS upgrades.

Everyone should do what works best for them, for whatever their reason is. It may be beyond your imagination why that makes sense to them. If that means someone runs a slice for After Effects, because they have set up VMs a million times and can do it in their sleep, but they don’t have any more desk space in their home office - well, more power to them.

Here’s the thing, and everyone looks at it from their own viewpoint: I am very much not running a small shop at times, and the size of the scale-up could be many multiples of our regular team number. On the reverse, sometimes we scale down to the bare minimum, and we can also be anything in between. VMs give flexibility and maximise infrastructure usefulness over a 5 year lifecycle. @finnjaeger & @ALan are running something very different from what I am, so their perspectives will be different from mine and what I am trying to do. I am also considering bare metal options as well as a mix of the two.

Going down the path of a virtualisation server, you could potentially run one very powerful high spec Flame/Houdini/Nuke system for a couple of months. That may then turn into a couple of mid spec Flame/Houdini/Nuke/CG VMs for a couple of months. Then it could even turn into 3-4 Paint/Roto VMs running Silhouette or Mocha Pro for a month or two. When you aren’t running the server as a workstation, it converts into multiple render nodes. What software and personnel we run is very different depending on the project, so self-hosted VMs make a whole lot of sense for our needs.

In terms of power draw, a server running Epycs and either L40 or L4 GPUs draws a hell of a lot less power than a bunch of Threadripper workstations with RTX 6000s or 5000s. Cost wise, for a maxed out workstation vs a server there isn’t too much of a difference on hardware, but the space and power draw required are considerable. Performance wise, sure, a dedicated bare metal workstation is definitely a better way to go, but if I were to purchase 40+ maxed out workstations and then have a whole heap of them sitting idle or severely overpowered/under-utilised for several months of the year, that is a lot of extra unnecessary cost.

So getting back to my original question, has anyone played around with vWS? I am assuming not.

What is unique about vWS? That I still don’t understand. What are the benefits of it over other hypervisors?

There are lots of videos on this topic, but you can already split GPUs with Proxmox. I’m just not understanding vWS.
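
For what it’s worth, the splitting on Proxmox goes through mediated devices (NVIDIA’s vGPU host driver). A rough sketch, assuming the host driver is installed and the card supports it; the VM ID, PCI address and mdev profile name below are placeholders that vary per card:

```
# List the vGPU profiles the card exposes (needs the NVIDIA vGPU host driver).
ls /sys/bus/pci/devices/0000:01:00.0/mdev_supported_types

# Attach one mediated slice of the GPU to a VM instead of the whole device.
# "nvidia-259" is a placeholder profile name; it differs per card/driver.
qm set 101 --hostpci0 0000:01:00.0,mdev=nvidia-259
```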

Here is an output from ChatGPT. A lot of it sounds like marketing hyperbole, but that is why I am asking if anyone has had real-world experience using it.

NVIDIA Virtual Workstation (vWS) and hypervisors serve different purposes in virtualized environments, but both can provide distinct advantages depending on the use case. Here’s a breakdown of the benefits of using NVIDIA vWS over a traditional hypervisor:

  1. Graphics Acceleration and Performance
    • NVIDIA vWS offers GPU-powered virtual desktops, allowing for high-performance rendering and graphics-intensive workloads. It provides access to NVIDIA’s professional-grade GPUs, such as the Quadro or RTX series, which are essential for tasks like 3D modeling, rendering, and AI/ML training.
    • Hypervisors typically do not provide the same level of GPU acceleration unless paired with specialized hardware (such as NVIDIA vGPU or other GPU pass-through technologies). While virtual machines (VMs) managed by hypervisors can run general workloads, they often struggle with demanding graphical tasks without dedicated hardware resources.

  2. Seamless Graphics Experience
    • With NVIDIA vWS, users experience a near-native performance level for GPU-accelerated applications, especially important for professionals in fields like CAD, media production, and scientific visualization. This includes features like hardware-accelerated video decoding/encoding, ray tracing, and the ability to run multiple graphical applications in parallel.
    • Hypervisors without GPU support do not deliver the same seamless performance for graphical workloads. Even with GPU pass-through capabilities, it might require more complex configurations and come with potential limitations in performance and user experience.

  3. Support for Virtual Desktop Infrastructure (VDI)
    • NVIDIA vWS is tailored for virtual desktop infrastructure (VDI) use cases. It enables users to access high-performance virtual desktops remotely, providing consistent and high-quality user experiences, especially for remote professionals or teams working in distributed environments.
    • A hypervisor, on the other hand, is typically designed for general-purpose virtualization and may require additional configuration or third-party tools to enable similar virtual desktop support. VDI platforms such as VMware Horizon or Citrix might rely on the underlying hypervisor, but they often need specific configurations to handle graphical workloads.

  4. High-Quality User Experience for Professionals
    • For users working with graphics-intensive applications (e.g., video editing, 3D design), NVIDIA vWS provides features such as better GPU resource allocation, multi-monitor support, and support for industry-standard applications (like AutoCAD, SolidWorks, and Adobe Creative Cloud). It enables a better virtualized workstation experience compared to a standard VM hosted on a hypervisor.
    • Hypervisor-based virtual machines without GPU acceleration may provide a subpar experience for professionals who rely on high-performance graphics, limiting their ability to work efficiently in a virtualized environment.

  5. Simplified Management of GPU Resources
    • NVIDIA vWS allows organizations to allocate GPUs dynamically to virtual machines, enabling fine-grained control over resource allocation and optimization. IT teams can monitor and manage GPU usage effectively across multiple virtual desktops, ensuring that resources are efficiently utilized.
    • While hypervisors can support GPU pass-through or virtual GPUs, managing this setup can be more complex. It often requires specialized drivers and manual configuration, adding layers of complexity for system administrators.

  6. Cloud Workloads and Scalability
    • NVIDIA vWS is designed with cloud-scale workloads in mind, making it easier to deploy and scale GPU-accelerated desktops in the cloud (such as on AWS, Azure, or Google Cloud). This allows organizations to provide remote users with high-performance workstations without needing on-premise hardware.
    • Hypervisors, particularly those managing traditional VMs, might not have the same level of cloud-native integration, requiring additional configuration to deploy GPU-powered VDI solutions. While it’s possible to use hypervisors for cloud workloads, NVIDIA vWS is optimized for remote GPU access.

  7. Cost Efficiency for Graphics Workloads
    • NVIDIA vWS can provide cost efficiencies for organizations that require GPU resources for specific tasks. It allows multiple users to share powerful GPUs (using NVIDIA vGPU technology), optimizing costs by consolidating GPU usage across virtualized instances.
    • On the other hand, traditional hypervisors using GPU pass-through often require dedicated GPUs for each VM, which could be less cost-effective, especially for smaller teams or workloads that don’t need full GPU power all the time.

Summary

NVIDIA vWS provides specialized support for high-performance, GPU-accelerated virtual desktops, delivering benefits like seamless graphics performance, simplified management, and scalability for remote workforces. While hypervisors like VMware or Hyper-V are great for general-purpose virtualization, they typically lack the same level of support for GPU-intensive workloads unless combined with additional technologies like NVIDIA vGPU, and even then, the management complexity is higher.

In short, NVIDIA vWS is ideal when you need high-performance graphics, particularly for remote users and GPU-heavy workloads, while a hypervisor is more suitable for general-purpose virtual machine management without the need for dedicated graphics resources.

So in essence, the claim is that for GPU-heavy workloads vWS may have some advantages over a hypervisor, even with GPU passthrough (vGPU on Proxmox, if I am not mistaken?).

Maybe at AWS scale, but otherwise sounds like lots of babble.

Maybe it’s just more convenient tooling for the same function of splitting GPUs.

One thing you might want to look at, though: I believe with LXC containers you can truly share a GPU without any splitting. It’s just one pool of GPU instead of many smaller split GPUs.
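
Roughly, that sharing is wired up on a Proxmox host something like the sketch below. The container ID 102 is a placeholder, and the device major numbers vary by host, so check `ls -l /dev/nvidia*` before copying any of it:

```
# /etc/pve/lxc/102.conf  (102 is a placeholder container ID)
# Device major numbers vary by host; confirm with `ls -l /dev/nvidia*`.
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
```

The container then just needs the matching NVIDIA user-space driver installed (without the kernel module), and every container sees the whole card rather than a fixed slice.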

Might not be appropriate for Flame, but the other apps you mentioned might be perfect.

I’m totally open to any suggestions. vWS was suggested to me and I knew nothing about it, hence my asking the question. VDI is supposed to be a strong point of vWS, so I would be interested to see how it stacks up as a Remote Desktop solution vs Parsec (no Linux yet, but apparently it is coming) or a PCoIP solution like HP Anyware or NICE DCV.

I am already aware of Proxmox and that it utilises vGPU (also NVIDIA software), and it is definitely one solution we are already considering. One of our support providers also has their own custom open-source stack that could potentially be a solution as well. Just exploring options at this stage.