Solving the Mystery of the Invisible Desktops

Mobility, intellectual property security and increased bandwidth nudge users toward virtual machines.

Intel envisions supporting server-hosted workstations with its processor-integrated graphics technology. Image courtesy of Intel.


Editor’s Note: This is the fourth part of Desktop Engineering’s series on high-performance computing options. Read the other articles in the series here.


At Florida Atlantic University (FAU), if you trace the cord attached to the mouse used by a faculty member or a researcher, you won’t find a computer at its end. Instead, you’ll find it plugged into a box marked “Teradici,” slightly larger than a paperback novel. The device has no CPU, no installed applications, no operating system. It has a PC-over-Internet-Protocol (PCoIP) card that facilitates communication with a virtual workstation housed elsewhere.

The absence of real computers on premises, however, doesn’t prevent FAU students, instructors, and researchers from running AutoCAD, SolidWorks, ANSYS and other professional design software.

At Roger Williams University in Rhode Island, students enjoy a bring your own device (BYOD) policy, even though the kind of software they’re using for their coursework typically requires the horsepower of a workstation. So what if one student brings a mobile tablet, and another a MacBook Pro, to do exercises in AutoCAD and Autodesk Revit? It really doesn’t matter. iOS, Windows, Android—all are welcome. By downloading and installing the client software from Citrix, students can remotely connect to their designated virtual machines, hosted on Dell PowerEdge servers equipped with NVIDIA Grid K1 and K2 boards.

At the engineering, procurement and construction management firm SSOE Group, globalization ran up against data security. The huge data sets comprising detailed 3D models of commercial construction projects were nearly impossible to transmit over the web to teams in China, Malaysia, Brazil, India and Singapore. Short of physically shipping a workstation preloaded with the proprietary building data, SSOE seemed to have no easy way to collaborate with overseas employees and contractors. Then the firm discovered VMware’s Horizon View software, a desktop virtualization solution.

Today, SSOE employees in Toledo, Ohio; Mumbai, India; Shanghai and other disparate locations work on the same model, using the devices they prefer. They meet virtually, in an IT environment built on Cisco Systems servers equipped with NVIDIA Grid GPUs. These setups are part of the virtualization trend challenging the long-held notion that you need to be sitting at your desk with a high-caliber workstation to do proper design and engineering work.

The Downscaling of Expert Workload

Jon Peddie of the tech consultancy Jon Peddie Research (JPR) recently conducted a survey on remote graphics and virtualization, underwritten by a number of clients interested in the market.

“The hardcore superstar engineers segment is expanding very slowly, so the market opportunities aren’t growing rapidly there,” he says. “A lot of work is being pushed away from the superstars because they’re an expensive group of talent. It’s going to entry-level and mid-level engineers. Companies can justify spending, say, $10,000 on a superstar’s workstation, but perhaps less on an entry-level engineer’s. That downscaling of work—along with globalization, offshoring and the increased computing power that gives us more than enough to share—is the engine driving virtualization.”

John Janevic, MSC Software’s VP of Strategic Operations, agrees.

“We all once had this grand vision of putting simulation in the hands of designers,” he says. “In my opinion, that really hasn’t taken off the way the industry has hoped.”

The expectation has since been revised to be more in line with what’s possible, Janevic points out. Today, the push is to enable product engineers to take on the simulation experts’ workload by “templatizing” what the latter would do in their specialized applications.

This new approach can be seen in the way MSC Software’s client GKN Driveline, an automotive supplier, captured its simulation experts’ operations in a browser-based interface so the same tasks can be performed by non-experts from wherever they happen to be. The workflow template was built using MSC Software’s SimManager software for simulation process and data management. The so-called non-experts, Janevic clarifies, “may not have a Ph.D. in mechanical engineering, but they’re still engineers with degrees. To be accurate, they’re non-simulation experts.”
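Janevic’s notion of “templatizing” can be pictured with a small sketch. The following is a hypothetical illustration of the concept only; the class, field and parameter names are invented for this article and are not MSC SimManager’s actual interface. An expert locks in solver settings once, and non-experts supply only the few inputs that vary per job.

```python
# Hypothetical sketch of "templatizing" an expert's simulation setup.
# Names are invented for illustration; this is not the SimManager API.
from dataclasses import dataclass

@dataclass(frozen=True)
class DrivelineTemplate:
    # Settings a simulation expert locks in once.
    solver: str = "implicit"
    mesh_size_mm: float = 2.0
    convergence_tol: float = 1e-6

    def run(self, torque_nm: float, shaft_length_mm: float) -> dict:
        """Non-experts supply only the parameters that vary per job."""
        # A real template would submit this to a solver; here we just
        # return the fully resolved job definition.
        return {
            "solver": self.solver,
            "mesh_size_mm": self.mesh_size_mm,
            "convergence_tol": self.convergence_tol,
            "torque_nm": torque_nm,
            "shaft_length_mm": shaft_length_mm,
        }

job = DrivelineTemplate().run(torque_nm=350.0, shaft_length_mm=600.0)
print(job)
```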

The NVIDIA Grid K2 board, designed to enable multiple users to share a single GPU, is a critical component of the GPU maker’s strategy to capture the virtualization market. Image courtesy of NVIDIA.

The GKN project alerted MSC Software that businesses are looking for ways to free expert-driven operations from the desktop machines where they typically take place. Mobility—in this case, access to the template from a variety of lightweight devices—is essential to this setup.

In addition to ensuring its traditional desktop software products are compatible with virtualization solutions, MSC Software works with partners such as NICE Software, which specializes in cloud-based desktop virtualization, and Rescale, which specializes in cloud-based simulation execution, to meet market demands.

Would this shifting preference for virtual machines jeopardize the traditional workstation business? On the contrary, Intel workstation segment manager Wes Shimanek sees it as an expansion of opportunity, not a threat. “There’s always going to be that class of dedicated workstation users. They aren’t going away,” he reasons. “But virtualization allows you to create anywhere, and collaborate everywhere. It also allows workstations to go where they haven’t gone before, like the manufacturing floor.”

The Performance Question

Al Makley, Lenovo’s director of development for the ThinkStation product line, says, “The return on investment (ROI) for virtualization is currently murky where the ratio is 1:1. But it’s a little easier to justify with something like NVIDIA Grid, where you can have 2:1 or 3:1,” referring to multiple users supported by one piece of physical hardware.

With this approach, administrators have the option to adjust the compute capacity in each virtual machine’s configuration (CPU power, GPU acceleration and memory). In some cases, the configuration is identical to the physical workstation supporting it in the back end; IT managers refer to this as a 1:1 remote setup. But for enterprise bean counters, the real attraction of virtualization is its ability to proportion the collective horsepower of the hardware into a number of virtual machines, depending on the workload and the role of the user. The same approach also works as an on-demand delivery mechanism for second-tier engineers, whose intermittent involvement in a project doesn’t justify giving each of them a dedicated workstation.
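To make the proportioning arithmetic concrete, here is a minimal sizing sketch. All figures and profile names below are hypothetical, invented for illustration rather than taken from any vendor’s sizing guide; the point is that whichever resource runs out first caps the number of virtual machines.

```python
# Hypothetical sizing sketch: how many virtual machines can one host
# support under a given per-role profile? All numbers are illustrative.

HOST = {"cores": 16, "ram_gb": 128, "gpu_slices": 4}

PROFILES = {
    "power_user": {"cores": 8, "ram_gb": 64, "gpu_slices": 2},  # near 1:1
    "mid_level":  {"cores": 4, "ram_gb": 32, "gpu_slices": 1},
    "entry":      {"cores": 2, "ram_gb": 16, "gpu_slices": 1},
}

def max_vms(host: dict, profile: dict) -> int:
    # The binding constraint (CPU, RAM or GPU) caps the VM count.
    return min(host[k] // profile[k] for k in profile)

for role, profile in PROFILES.items():
    print(f"{role}: up to {max_vms(HOST, profile)} VMs on this host")
```

On these made-up numbers, the entry profile is GPU-bound: the host has the cores and memory for eight such machines, but only four GPU slices.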

Professional engineers’ reliance on workstations is driven largely by the demands of the sophisticated software they operate. Siemens PLM Software’s NX, Dassault Systemes’ CATIA, PTC Creo Parametric, Autodesk Inventor, SolidWorks and other industry-recognized design software titles are known to deliver peak performance on workstation-class machines. Therefore, to make virtual machines a viable option for engineers, virtualization vendors must be able to offer assurance that the virtual experience is comparable (if not superior) to using actual workstations.

Teradici’s schematic (from a company presentation) shows the variety of PCoIP solutions powered by its technology. Image courtesy of Teradici.

Mahesh Neelakanta, director of FAU’s Technical Services Group, shares his experience. “If you have lots of program windows open and you’re dragging them across your screen really fast, you’ll notice some lag,” he cautions. But that’s a very deliberate move to test the limits of virtualization, not typical software user behavior. “Traditionally, engineers have their assembly open and they’re rotating it slowly. For those scenarios, there’s no problem,” he adds.

Centralized Intellectual Property

David Hoff, Intel’s GPU and visual computing strategist, singles out the need to secure sensitive IP as one of the driving forces for virtualization. “Customers are telling us IP protection is particularly important to them,” he says. “Data sets are getting so large. People collaborate 24/7 around the world on projects. Keeping the IP safe, and avoiding the need to synchronize it around the globe, is becoming an issue.”

Security and mobility—the ability to move around—seem to be the two driving forces for interest in workstation virtualization, agrees Hector Guevarez, Lenovo’s worldwide product marketing manager: “They want to maintain the data in one location, but give access to it to project teams around the world.”

When IP is passed from desktop to desktop during a project, a backend system is required to keep all the different versions of the data (for example, a CAD file) synchronized. In virtualization, the data can reside on the same server or appliance supporting the virtual machines; therefore, project managers can better prevent accidental and intentional data leaks—at least, in theory.

FAU students, faculty and researchers use Teradici client devices to interact with virtual machines. The remote communication is made possible by a Teradici PCoIP client card embedded in the client device that sits at the user’s desk, and the host card that resides in the actual hardware housed elsewhere.

“Users can see ‘My Computer’ and a C drive, but there’s nothing they can do to it,” says FAU’s Neelakanta. “They can’t save data to it.” Instead, the data is saved by default to a shared common drive secured in the data center.

As this article goes to press, Dell and Teradici are getting ready to announce a new remote access software product at SIGGRAPH (Aug. 10-14, Vancouver). The advanced draft of the press release obtained by DE explains that the new “Teradici PCoIP Workstation Access Software for Dell Precision workstations gives mobile workers instant access to a rich remote computing experience—whether from a conference room, home office, or on-the-go.” By installing the Teradici client software on a lightweight device (such as a Windows tablet) and the host software on a Dell Precision workstation, you can remotely commandeer your workstation from the tablet.

Whether you’re using hardware- or software-based remoting from Teradici, PCoIP minimizes IP compromise, according to Teradici: “All user data and computing applications are transmitted as pixel-only images—not data.” This approach is fairly typical in virtualization. It also explains why server-hosted workstations may offer better IP security.
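The “pixels, not data” model is easy to sketch. Below is a toy illustration, not the PCoIP protocol itself; the functions are stand-ins defined for this example. The model and applications stay on the host, which applies the user’s input events and returns only rendered frames.

```python
# Toy sketch of the "pixels, not data" remoting model -- not PCoIP
# itself. The design data stays on the host; only images cross the wire.

def render_frame(model_state: dict) -> bytes:
    # Stand-in for the host-side renderer: turns model state into pixels.
    return f"frame-at-{model_state['angle']}-degrees".encode()

def host_step(model_state: dict, input_event: dict) -> bytes:
    # Apply the user's input on the host, then return pixels only.
    model_state["angle"] += input_event.get("rotate_deg", 0)
    return render_frame(model_state)  # the CAD data itself never leaves

model = {"angle": 0}                           # proprietary, host-resident
pixels = host_step(model, {"rotate_deg": 15})  # client sends input events
print(pixels)                                  # all the client receives
```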

GPU Acceleration in Virtual Machines

Both CPU maker Intel and GPU maker NVIDIA recognize they have much to gain (or lose) from how they respond to the budding virtualization movement. Accordingly, both are working with hardware partners to deliver appliances (think of them as mini-servers) that can support workgroup- and enterprise-level virtualization.

Andrew Cresci, general manager of NVIDIA Grid products, points out that engineering design projects generate massive data sets that, until now, “were impossible to move from the data center to the end user. Utilizing NVIDIA Grid technology for remote visualization allows companies to locate compute resources with the data in the server room, so load times become nearly instantaneous. This means that end users, ranging from automotive and aerospace designers to oil and gas exploration analysts, have real-time access to their designs, both on- and off-site, without compromise in performance.”

In March, at the annual GPU Technology Conference (GTC), NVIDIA began pitching its Grid appliance as GPU-accelerated virtualization hardware. The advantage of the NVIDIA Grid, according to the company, is the ability to deliver virtual machines with the characteristics of GPU-accelerated professional workstations.

GPU acceleration in virtual machines has been a stumbling block in virtualization for years. In 2012, with the introduction of its Kepler architecture GPUs, NVIDIA declared it could begin facilitating GPU-accelerated virtualization. At GTC 2014, NVIDIA CEO Jen-Hsun Huang described virtualizing the GPU as “one of the greatest endeavors of [NVIDIA].” This May, NVIDIA launched a program to showcase virtual machines hosted on Grid, available to beta testers via an Internet connection.

“Professional design applications from companies such as Dassault, Siemens and Adobe run on workstations, which almost exclusively rely on NVIDIA graphics to provide the performance and interactivity users demand,” Cresci explains. “With our vGPU innovation, many more users can now access high-performance graphics with full NVIDIA applications compatibility.”

Intel’s Hoff believes processor-integrated graphics (as distinct from a discrete GPU installed as a co-processor to the CPU), like the Intel Iris Pro Graphics embedded in Intel Haswell processors, is ideally suited for virtualization. “[The architecture] has a shared memory for the CPU and other dedicated devices that don’t have their own memory. In the remote usage scenarios, there’s less data movement,” he says.

Virtualization is usually made possible by a software component, like those offered by Citrix and VMware. “Intel’s VT-x virtualization technology is pretty much industry standard,” says Hoff. “That’s true of our Intel Xeon E5 and E3 processors with integrated graphics.”

Intel’s answer to the NVIDIA Grid may be seen in HP’s upcoming Moonshot system, which HP describes as the “New Style of IT.” “This is the GPU pass-through model,” explains Hoff. “The Intel VT-d (virtualization technology for directed I/O) makes the processor-integrated graphics available inside the virtual machine.”

In early virtualization technologies, the GPU could not be shared. In other words, to support one virtual machine with GPU acceleration, you’d need to invest in one physical GPU in the backend hardware.

But in next-generation virtualization solutions, splitting the GPU is an option. For example, it’s possible for two users to share the horsepower of a single physical GPU, virtualized and distributed across the network. Administrators may then redistribute the power of the GPU depending on who’s tackling graphics-intense workloads in a given period. For this purpose, Intel is developing Intel GVT-g, its technology for sharing processor-integrated graphics among virtual machines.
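The difference between the two generations can be shown with a small allocation sketch. This is hypothetical scheduler logic, not Intel’s or NVIDIA’s implementation:

```python
# Illustrative contrast between pass-through and shared GPU allocation.
# Hypothetical logic, not Intel GVT or NVIDIA vGPU internals.

def passthrough(gpus: int, vms: int) -> dict:
    # Early model: each accelerated VM needs a whole physical GPU,
    # so any VM beyond the GPU count goes without acceleration.
    return {f"vm{i}": 1.0 for i in range(min(vms, gpus))}

def shared(gpus: int, weights: dict) -> dict:
    # Next-generation model: split GPU capacity across VMs by workload
    # weight, which administrators can rebalance as demand shifts.
    total = sum(weights.values())
    return {vm: round(gpus * w / total, 2) for vm, w in weights.items()}

print(passthrough(gpus=2, vms=3))                    # only 2 VMs accelerated
print(shared(gpus=2, weights={"vm0": 3, "vm1": 1}))  # 1.5 vs. 0.5 of a GPU
```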

Motivation and Challenges

Lenovo’s Guevarez acknowledges he’s seen an increase in virtualization interest among his customers, but expresses reservations about how much of that interest converts to actual implementation. Nevertheless, he says Lenovo is striking up partnerships with NVIDIA, Teradici, VMware, Citrix and other major players in virtualization, in case the market rapidly begins to adopt it.

Guevarez’s colleague Makley cautions that the emotional bond between engineers and their own hardware might become a hindrance to the switch to virtual machines. “That one-to-one bond between the users and the hardware that sits right by them is very strong,” he warns.

JPR’s Peddie outlines the challenges for virtualization vendors. “You have legacy systems, multiple OS developers, different hardware suppliers with their own drivers, and roughly half a dozen brands of CAD software,” he says. “The people developing virtualization solutions are confronted with a difficult choice. If they have to prioritize one approach for maximum economic opportunity, which OS-CAD-processor-and-graphics configuration should they tackle first? While they’re contemplating that, the technologies are still moving ahead.”

From the end user’s perspective, Intel’s Shimanek says, “the experience is most important. They want to know, ‘Can I get my workstation experience with a virtual machine?’ Our early tests prove, yes, they can. The next determining factor would be the efficiency of the solution.”

Some of the considerations in the formula for efficiency may include the number of virtual machines a single hardware solution (like the NVIDIA Grid or the HP Moonshot) can support, the power consumption of the unit, and the ease with which administrators can manage the hardware.
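As a worked example of that formula (all figures below are invented for illustration, not measurements of any shipping appliance), a per-seat comparison might look like this:

```python
# Hypothetical per-seat efficiency comparison; the appliance names and
# figures are made up, not measured NVIDIA Grid or HP Moonshot specs.

appliances = {
    "appliance_a": {"vm_capacity": 16, "power_watts": 1200},
    "appliance_b": {"vm_capacity": 8,  "power_watts": 500},
}

for name, spec in appliances.items():
    watts_per_seat = spec["power_watts"] / spec["vm_capacity"]
    print(f"{name}: {spec['vm_capacity']} seats, {watts_per_seat:.1f} W per seat")
```

On these numbers, the larger appliance hosts more seats, but the smaller one is more power-efficient per seat (62.5 W vs. 75 W), which is exactly the kind of trade-off administrators would weigh.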


About the Author

Kenneth Wong

Kenneth Wong is Digital Engineering’s resident blogger and senior editor. Email him at [email protected] or share your thoughts on this article at digitaleng.news/facebook.
