Virtualization Brings Engineering Computing Beyond the Desk

The workstation takes on a new life via virtual desktop infrastructure.

Ben Fathi, CTO at VMware (left), and Jen-Hsun Huang, NVIDIA CEO (right), at the NVIDIA GPU Technology Conference 2014, announcing that NVIDIA GRID would run on the VMware Horizon DaaS Platform. Image courtesy of NVIDIA.



Among professional engineers and designers, the network computer is not an attractive term. It’s associated with a loss of control over the machine. It’s a computer that sits far away from the user, under the tight scrutiny of the IT department. Even if the server room that houses the machine is only a few feet away from the user’s desk, bureaucratic “red tape” keeps the user at bay. Need a RAM upgrade to make your assemblies load faster? Need to install a plug-in to make your CAD conversion easier? Submit a requisition slip and wait. By contrast, the desktop workstation, one that physically sits on the user’s desk, is viewed as offering unfettered access to the hardware, and perhaps even serves as a status symbol for the user.

So why is the virtual machine, which is essentially a network computer, enjoying a resurgence among some engineering and design firms, and even among some universities? (Read “PSA Peugeot Citroën Drives Virtual Desktop Infrastructure Forward.”) The answer rests with a new generation of virtual desktop infrastructure (VDI) solutions and better middleware that seamlessly connect the user to the remote hardware. The stigma of the network computer fades away when users realize the virtual machine delivers performance and power that rivals a desktop workstation. The firms deploying virtual machines now include not only design and engineering firms but also design software makers. Autodesk, a household name in CAD software, not only offers software certified for use in VDI environments; it also uses VDI internally to test and run simulations of its own software.

Test-Driving Design Software on VDI

Autodesk’s software lineup includes some titles known for heavy system and graphics demands. VDI opponents and skeptics often cite Autodesk Inventor and Revit as the reasons designers and engineers would insist on physical workstations. With the option to display detailed mechanical assemblies and full building projects in photorealistic mode, these software titles exemplify why engineers rely on professional workstations with GPUs (graphics processing units) — sometimes multiple GPUs in a single machine — to do their design work.

In late 2013, Autodesk began offering potential customers the ability to test drive select titles remotely. The company has long offered people the option to download and test its software in a time-limited trial mode, but the remote test drive offer is different. There’s no need to download gigabytes of installation files, and no need to install the software, for that matter. The user runs the trial software from a browser or a thin client, while Autodesk manages instances of the software in a VDI environment. Autodesk Inventor and Revit are among the titles made available in that fashion.

The AutoCAD giant’s VDI is set up on NVIDIA GRID, a GPU-powered virtualization solution from NVIDIA. Previously, the graphics demands of 3D CAD proved a roadblock to running such software titles on a remote machine or a virtual machine, because only GPU-equipped workstations could deliver an acceptable degree of interactive 3D use. But that changed when NVIDIA made it possible to virtualize the GPU through its Kepler architecture.

In 2012, when unveiling the details of Kepler at the NVIDIA GPU Technology Conference, NVIDIA CEO Jen-Hsun Huang said: “I want to announce a GPU that we can all simultaneously share. Today, we’re going to take the GPU into the cloud. For the first time, we’ve virtualized the GPU.” Now, on NVIDIA GRID hardware, virtual machines come with their own virtual GPUs.

“We can have a high degree of confidence in the recommendations we make to customers about virtualization in part because we are doing so much work in virtualized environments ourselves. We have quite a bit of in-house experience,” says Anthony Hauck, Autodesk’s director of product strategy, AEC generative design (Read “At Autodesk, VDI Inside and Out”).

The Revit team “drives the application through approximately 16,000 nightly tests simulating user interaction with the product. Often these tests use building models contributed by customers to enhance the realism and accuracy of the interactions. The tests ensure enhancements to the application do not cause any regressions in the associated functionality,” says Hauck.

Autodesk’s Revit VDI is powered by 12 blade servers, each with dual CPUs and 128GB RAM. Autodesk generally runs 10 to 12 virtual machines per server; therefore, 12 servers have the capacity to support approximately 120 to 144 virtual machines in operation. “We have full utilization of all machines a few times a day, depending on the quantity of additional jobs submitted by development as they check in new work,” Hauck says.
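The capacity figures above follow directly from the server count and the VM density per server. As a quick sanity check (a sketch using only the numbers cited in the article):

```python
# Rough capacity check for the Revit VDI cluster described in the article:
# 12 blade servers, each hosting 10 to 12 virtual machines.
servers = 12
vms_low, vms_high = 10, 12

capacity_low = servers * vms_low    # lower bound on concurrent VMs
capacity_high = servers * vms_high  # upper bound on concurrent VMs
print(f"Cluster capacity: {capacity_low}-{capacity_high} VMs")  # 120-144
```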

Zuken, a printed circuit board (PCB) design software maker, is also relying on VDI powered by Amazon Web Services (AWS) to make its CR-8000 software available for test-driving for interested parties. “CR-8000 demands a fair amount of computing power and 3D graphics. We went with AWS because we found out that some smaller providers couldn’t support 3D graphics. We selected the higher-end option when we picked our setup in AWS to facilitate 3D requirements,” says Craig Armenti, application engineer at Zuken.

Some of Zuken’s competitors offer software that works in 2D, which doesn’t require intense graphics. But CR-8000 runs in 3D, allowing PCB designers to easily communicate with their mechanical counterparts. Therefore, the VMs supporting Zuken’s test-drive setup have to offer the equivalent of a dedicated GPU.

“For now, we’re limiting the test sessions to five at a time — five active licenses of CR-8000 at the most running on AWS,” says Armenti. “We don’t know when these VMs might come online. Some users test-drive the software after business hours when they’re at home. So if we are hosting these VMs ourselves, we have to worry about keeping the server up 99% of the time, ensuring Internet connection. On the other hand, Amazon has servers all over the U.S., and if we ever want to expand beyond the country, AWS gives us more flexibility than our own hardware.”

Splitting the GPU

Many software makers have refined their code to take advantage of the GPU’s presence in most workstations; therefore office productivity software, scientific calculation software, CAD modeling programs, rendering programs and simulation software can derive benefits from what’s known as GPU acceleration.

But the GPU is not always necessary for routine computing work, such as Web browsing, word processing, data entry and simple CAD modeling. When the GPU is sitting idle inside a workstation, it represents wasted computing potential. Workload imbalance and changing peak demands often leave some engineers craving more GPU horsepower while others are barely touching theirs. The emergence of the virtual GPU addresses this scenario. Whereas the physical GPU attached to a workstation cannot easily be shared or reassigned, the virtual GPU can be shared and reassigned with a few clicks from the VDI console.

For engineers and designers working with heavy CAD models and advanced simulation programs, a dedicated GPU is recommended. It ensures each user gets the performance equivalent of a physical GPU even when working on a virtual machine. Workers who do occasional CAD work and do not need graphics acceleration all the time may be the ideal candidates for GPU sharing.

In the white paper titled “NVIDIA GRID: Graphics Accelerated VDI with the Visual Performance of a Workstation,” Alex Herrera, a senior analyst from Jon Peddie Research, wrote: “GPU Sharing is a reasonable solution for many, but not an ideal solution for all. It can perform effectively with simple applications and visuals and support concurrent users (CCUs), but the extensive compute cycles spent abstracting complex 3D rendering will add latency and reduce performance. Furthermore, the reliance on API (application programming interface) translation means 100% application compatibility is impossible to guarantee.”

At VMworld 2015 in August, NVIDIA’s rival AMD announced that it would also begin offering a GPU-sharing solution. “An AMD graphics card equipped with our Multiuser GPU technology offers consistent, predictable performance. IT managers can easily configure these solutions to allow for up to 15 users on a single GPU … Each user now has an equal share of the GPU to allow them to design, create, and execute their workflows,” the company states.

In its press announcement, AMD called its Multiuser GPU technology the “first hardware-based virtualized GPU solution.” This could be somewhat confusing as “virtualization” is usually interpreted as a software-driven mode. “Our solution is hardware-based. The GPU sharing component is in the GPU itself, so it’s better than the software-driven approach,” says Antoine Reymond, senior strategic alliance manager, AMD.

AMD further explains in its data sheet: “Software virtualization has traditionally been a limiting factor for those who want to fully utilize GPU hardware acceleration for compute tasks under OpenCL. With AMD’s implementation of the new Multiuser GPU, users are no longer as limited to what they can or can’t do in a virtualized environment. Users will have access to native AMD display drivers for OpenGL, DirectX and OpenCL acceleration, enabling them to work with few if any restrictions.”

AMD’s solution is much closer to the metal (literally) than its competitor’s version. “The AMD host driver adds very little burden to the hypervisor,” says Tonny Wong, product manager at AMD Professional Graphics. “It is used only to set up the parameters for Multiuser GPU functionality in hardware … AMD solves virtualization in silicon using a PCI-SIG specification called SR-IOV. All that the AMD host driver is doing is telling the hypervisor that we want to split one GPU to many and set up the parameters for that split. All the GPU sharing happens under the hood and is transparent to the hypervisor. The simplicity of the solution is that, when Multiuser GPU is not configured, the product is supported as a pass-through device. Once Multiuser GPU is enabled, the GPU will appear as multiple pass-through devices.”
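AMD’s own host-driver tooling isn’t detailed in the article, but SR-IOV itself is a standard PCI-SIG mechanism that the Linux kernel exposes through sysfs. As a generic, hypothetical sketch of how one physical device is split into multiple pass-through devices (the PCI address and VF count here are assumptions, and a real Multiuser GPU deployment would go through AMD’s host driver and the hypervisor’s management tools):

```shell
# Hypothetical illustration of generic SR-IOV on a Linux host. The device
# address 0000:3b:00.0 and the VF count of 4 are made-up examples; an
# actual Multiuser GPU setup is configured via AMD's host driver.
DEV=0000:3b:00.0

# How many virtual functions (VFs) the silicon can expose:
cat /sys/bus/pci/devices/$DEV/sriov_totalvfs

# Split the physical function into 4 virtual functions:
echo 4 > /sys/bus/pci/devices/$DEV/sriov_numvfs

# Each VF now appears as its own PCI display device, assignable to a VM
# as a pass-through device (0300 is the PCI display-controller class):
lspci -d ::0300
```

This matches the behavior Wong describes: once the split is configured, the hypervisor simply sees several independent pass-through devices.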

For knowledge workers who juggle Microsoft Office-type applications, a single AMD GPU may be shared by up to 15 users. For typical CAD users, AMD recommends sharing the GPU among six to 10 users. To retain desktop workstation-level performance, AMD recommends sharing among two to six users.
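Because Multiuser GPU gives every user an equal share, the per-user fraction of the card follows directly from the user count. A purely illustrative calculation (the user-count tiers are AMD’s guidance as cited above; the percentages are simply 1/N):

```python
# Per-user share of one GPU under equal partitioning (1/N per user),
# for the sharing tiers AMD recommends in the article.
tiers = {
    "knowledge worker (Office apps)": 15,
    "typical CAD user": 10,
    "workstation-class performance": 6,
}
for workload, users in tiers.items():
    print(f"{workload}: up to {users} users, >= {1 / users:.1%} of GPU each")
```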

At the same event, NVIDIA also began promoting GRID 2.0, the second generation of its GPU-accelerated VDI solution. The company says GRID 2.0 “doubles user density over the previous version,” indicating the same number of GPUs can now support a larger pool of users without performance loss.

The virtual machine is distinctly different from its predecessor, the network computer, in one aspect: professional-grade graphics with little or no noticeable latency. This development removes one more barrier that once prevented CAD and CAM users from venturing beyond their cubicles and the office.



About the Author

Kenneth Wong

Kenneth Wong is Digital Engineering’s resident blogger and senior editor. Email him at [email protected] or share your thoughts on this article at digitaleng.news/facebook.
