Speed Up Engineering IT

Virtualization promises a solution to the budget constraints and challenges associated with deploying new engineering computing hardware.

Virtualization has been the buzzword in IT circles for several years now. After all, what’s not to like? Simply put, virtualization allows IT managers to more efficiently use their existing resources, maximizing CPU utilization and abstracting compute, storage and infrastructure from the physical realm while easing management at the same time.
The NVIDIA GRID Visual Computing Appliance (VCA) allows users to create virtual machines called workspaces, which NVIDIA says are effectively dedicated high-performance, GPU-based systems. Image courtesy of NVIDIA.

However, some are looking to extend the promise of virtualization and push the envelope, at least when it comes to high performance computing (HPC), remote computing and the maximizing of simulation resources.

Virtual, High-Powered Machines

Vendors and developers have realized that virtual representations of HPC systems can be deployed rapidly and accessed remotely, with no HPC resources located at the customer’s site. That realization has created a boom in high performance virtualization, where expensive HPC systems can be remotely accessed and custom configured for a short period of time, making the technology much more affordable while still offering needed compute resources on demand.

Ultimately, virtualization promises to lower the entry point into HPC, bringing the technology to a vast array of businesses that previously did not have the financial resources to invest in the needed hardware. Nowhere in engineering will that benefit be realized more than in advanced simulation chores, where grids of systems were often required to perform tasks in a timely manner.

However, the benefits offered by virtualization do not end with remote HPC. Several other use cases prove the technology offers value for engineering and design firms, where stretching the performance of systems and doing more with less are the orders of the day. For example, virtualization can maximize CPU utilization by running several virtual machines on a single high performance system, in effect doing more with less while putting normally discarded CPU cycles to work, as the sketch below illustrates.
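
As a concrete illustration, the following minimal sketch uses the libvirt Python bindings to boot several guests on a single KVM host. The domain definition, guest names and sizing here are illustrative assumptions of our own, not a configuration from any vendor mentioned in this article.

```python
# A minimal sketch of carving one physical workstation into several VMs.
# Assumes a Linux host with KVM and the libvirt Python bindings installed;
# the domain XML below is deliberately bare-bones and illustrative only.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>sim-node-{n}</name>
  <memory unit='GiB'>8</memory>
  <vcpu>4</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open('qemu:///system')            # connect to the local hypervisor
for n in range(4):                               # four guests share one host
    conn.createXML(DOMAIN_XML.format(n=n), 0)    # boot a transient VM
print([dom.name() for dom in conn.listAllDomains()])
conn.close()
```

A production definition would add disks, networking and CPU pinning, but the point stands: cycles that would otherwise idle on one workstation are parceled out to several users.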

An entire market segment has become devoted to virtualization, with dozens of vendors offering hundreds of products and services designed to bring the technology to most any business. However, that cornucopia of virtualization variety creates another problem: What technology to use?

Each vendor will tout its own virtualization platform and its surrounding ecosystem: some leverage the service/hosted model, others associate virtualization with the cloud, and still others push an on-site approach. Yet different solutions are designed to handle different problems, so the first step on the path to virtualization is to ask what problem needs to be solved. That question fragments the market into three distinct segments, each with its own capabilities, needs and solutions.

Access HPC on Demand

Take, for example, companies that need part-time access to HPC offerings. Those companies will be looking for a pay-as-you-go offering that minimizes costs yet scales up to meet the demands of individual projects. Such businesses are best served by cloud/hosted offerings that provide remote access into virtualized infrastructures, compute farms, storage subsystems and so on.
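
To make the pay-as-you-go model concrete, here is a minimal sketch of self-provisioning compute nodes through Apache Libcloud, a vendor-neutral cloud API. Libcloud is not part of the offerings discussed here, and the provider, credentials, image ID and node count below are hypothetical placeholders.

```python
# Sketch: programmatically provisioning pay-as-you-go compute nodes.
# Uses Apache Libcloud's vendor-neutral driver model; the provider,
# credentials and image ID below are hypothetical placeholders.
from libcloud.compute.providers import get_driver
from libcloud.compute.types import Provider

cls = get_driver(Provider.EC2)                   # any supported provider works
driver = cls('ACCESS_KEY', 'SECRET_KEY', region='us-east-1')

size = driver.list_sizes()[0]                    # pick an instance size
image = driver.list_images(ex_image_ids=['ami-12345678'])[0]  # hypothetical image

# Scale out for a project, then tear the nodes down when the job is done.
nodes = [driver.create_node(name=f'hpc-{i}', size=size, image=image)
         for i in range(8)]
for node in nodes:
    driver.destroy_node(node)
```

The appeal is exactly this lifecycle: nodes exist, and are billed, only for the duration of the project.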

The needs of those businesses may be met by service vendors such as Rackspace, Amazon, Peer1 and others. For example, Rackspace offers an HPC Cluster in a Cloud Environment, which incorporates Open MPI, an open source implementation of the Message Passing Interface (MPI) that supports distributing HPC applications across a cluster. Rackspace allows its customers to self-provision HPC clusters for remote access using multiple virtualized Rackspace Cloud Servers.
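
To show what an MPI workload on such a cluster looks like, here is a brief sketch using the mpi4py bindings; the program, rank count and hostfile are assumptions for illustration, not part of Rackspace’s offering.

```python
# Illustrative MPI job: each rank integrates a slice of f(x) = 4/(1+x^2)
# to estimate pi. Assumes Open MPI and mpi4py on every node; launch with
# something like: mpirun -np 16 --hostfile hosts python pi_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 1_000_000
h = 1.0 / n
local = sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2) for i in range(rank, n, size)) * h

pi = comm.reduce(local, op=MPI.SUM, root=0)      # combine partial sums on rank 0
if rank == 0:
    print(f"pi ~= {pi:.8f}")
```

The same program runs unchanged on one node or fifty; the cluster’s MPI layer handles the distribution.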

Amazon and Peer1 take a similar approach, where the virtualization layer becomes basically a transport layer connecting physical servers to virtual machines for HPC processing. However, some other vendors take a different tack, aiming to ease the burden on customers by handling the provisioning and maintenance of virtualized HPC offerings. Cases in point include NIMBIX and IBM’s SoftLayer, which offer private cloud types of virtualized HPC clusters. Those vendors’ solutions are largely preconfigured and fully managed, allowing customers to focus on compute jobs and not ancillary issues.

Maximize Computing ROI

For many businesses, virtualization is about maximizing the value of what they already have—in other words, increasing the return on investment on systems already purchased.

One way to do this is by extending the reach of internal HPC resources out to remote workers and satellite offices. By combining web access with virtual machines, those businesses can deliver a full HPC environment to a distant engineer without having to invest in additional hardware. That scenario extends to mobile workers as well: a user can access an HPC virtual machine on a tablet, notebook computer or even a smartphone, eliminating the need to bring expensive (and cumbersome) hardware out into the field.

Those scenarios are typically powered by leading technology vendors such as VMware, Citrix, Microsoft and Parallels, which offer virtualization platforms that abstract the hardware from the virtual machine. However, most adopters are finding that those basic platforms are not enough and have to assemble ecosystems that support virtual desktop infrastructure (VDI). Vendors in that space include LiquidWareLabs, a company that provides pre-assessment, deployment and management tools for VDI implementations. Other vendors include LUCIDLOGiX, Syncron, Quest, Ericom and Moka5.

Remote VDI capability doesn’t end with software vendors, however; others offer a hybrid solution that pairs a client device with VDI software. PanoLogic offers its “zero client,” a small Ethernet-attached device that presents a virtual machine to users with no local CPU needed. The device simply attaches the desktop peripherals to the remote virtual machine via an Ethernet connection.

Dell’s recent acquisition of Wyse Technologies brings another solution into play: a terminal device that works much the same way as Pano’s offering. nComputing is yet another vendor offering a combination of virtualization and remote desktop access. The company offers a multiuser card that is installed in a PC and then uses Ethernet to deliver the experience to a small client device that powers a terminal setup.

Other technologies are also available that do a good job of enabling remote access to virtual systems. Take, for example, Teradici, a company that offers a card that plugs into a host system and then delivers the host’s capabilities to a remote, dumb terminal. While not exactly a virtualization product, it does abstract the user’s desktop from a high performance system, allowing those HPC systems to be accessed remotely.

Boost Performance

The third virtualization scenario often found in the engineering space focuses on performance rather than remote access or cloud enablement. Here, engineering firms are using virtualization solutions (both hardware and software) to maximize performance and minimize wasted CPU cycles.

One major player is NVIDIA, with its GRID series of add-on boards designed to allow virtualized systems to offload graphics processing to a dedicated GPU, improving the performance, responsiveness and usability of virtualized systems running graphics-intensive workloads. Ideally, a GRID board lets a single high performance workstation act as a host for virtual machines, allowing several users to share the processing power of that system without experiencing drops in graphics performance.
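
As a quick way to verify that a guest actually sees the GPU such a board is supposed to expose, the sketch below queries the NVIDIA Management Library from inside the VM. It assumes the NVIDIA driver and the nvidia-ml-py (pynvml) bindings are installed, and it is a sanity check of our own devising rather than an NVIDIA-documented procedure.

```python
# Sanity check from inside a guest: enumerate the GPUs the VM can see.
# Assumes NVIDIA drivers plus the nvidia-ml-py ('pynvml') package.
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):                  # older bindings return bytes
        name = name.decode()
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU {i}: {name}, {mem.total // 2**20} MiB")
pynvml.nvmlShutdown()
```

If no devices are reported, the virtual machine is falling back to software rendering and the GRID board is not doing its job.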

The key driver of virtualization seems to be the need to centralize HPC, making it easier to manage, scale and provision based upon project needs and not assumptions. That has led to the adoption of high-end virtualization systems, which operate by accumulating and assigning multiple cores to a single machine. The single virtual system aggregated from multiple physical machines proves to be a powerful management tool that helps the IT department increase its efficiency, as well as the efficiency of engineering computing.

Frank Ohlhorst is chief analyst and freelance writer at Ohlhorst.net. Send e-mail about this article to [email protected].

More Info

Amazon

Citrix

Dell

Ericom

IBM’s SoftLayer

LiquidWareLabs

LUCIDLOGiX

Microsoft

Moka5

nComputing

NIMBIX

NVIDIA

PanoLogic

Parallels

Peer1

RackSpace

Teradici

VMware
