Engineering Computing News
Engineering Computing Resources
November 1, 2014
As engineers seek ever-more-efficient ways of using high-performance computing (HPC) resources in running analyses and simulations, the number of available alternatives is greater than ever. In particular, the cloud is emerging as an intriguing option, especially for those groups not needing dedicated HPC support.
But even when running jobs in the cloud, there are choices to be made. When most people think of the cloud, they envision an unknown but offsite location. In reality, there is another option: a so-called “local cloud.” Also known as a private cloud, this is typically a hardware appliance that serves up engineering applications locally to all engineers in the organization.
Why have a private cloud appliance? In many cases, the answer is obvious: the engineering data remains within the organization’s firewall, where it can usually be secured more easily and reliably than with a commercial public cloud. Although many public clouds offer strong security, including encrypted data transfers, many engineering groups are simply reluctant to let proprietary design information offsite, outside the local network.
Local appliances can also provide HPC resources with a low systems management overhead. Appliances that offer full turnkey solutions, complete with analysis and simulation software applications, can effectively be almost free of system administration requirements.
While these appliances are powerful systems, they aren’t clusters—which can have far more horsepower, but can sometimes demand larger and more complex IT management resources. For larger engineering organizations, they won’t replace traditional clusters from a sheer performance standpoint. Where they can make a difference is in smaller groups that are using individual workstations, or for groups that cannot fully utilize a cluster. Private clouds can also be useful in larger groups that have occasional needs for more computing horsepower than their cluster can provide.
Plain Appliance or Turnkey Solution?
Most of the large traditional system vendors, such as IBM, Lenovo, HP and Dell, offer appliances that integrate compute, storage, networking and system management into a single box. With the appropriate software licenses, engineering groups or a systems integrator can install their own analysis and simulation software on these appliances for use by the group. For those engineers with non-standard or unique application needs, starting with a commercial appliance and building out a solution is another way to share applications among an entire team.
An emerging solution is Microsoft Azure, which the company expects to be delivering in an appliance in the near future. Microsoft has already announced an Azure appliance for storage, archive and disaster recovery, called StorSimple, and is expected to follow that up with compute appliances. However, it’s important to realize that StorSimple is actually a hybrid solution, with an appliance and the ability to offload to the Azure cloud. Should Microsoft also take that route with a compute appliance, it could offer some interesting advantages over a strictly public or private cloud alone. Teams could reap the advantage of sharing a common platform with exactly the applications they need, while also being able to offload archived data offsite.
For those whose software needs are satisfied by a single engineering applications vendor, it can be easier if that vendor offers an appliance, either as a systems integrator or in partnership with an HPC systems provider. One example of that type of solution is the ANSYS-IBM partnership to provide the full ANSYS suite on an IBM server appliance. One open question is whether that partnership will survive the sale of IBM’s x86 server business to Lenovo. That transaction was approved by the Treasury-led Committee on Foreign Investment in the United States in August, and Lenovo expects the purchase to complete by the end of the year. Barbara Hutchings, ANSYS’ director of strategic partnerships, still expects the partnership to continue after the acquisition.
Another fully turnkey solution is Altair’s HyperWorks Unlimited. This is a private cloud with fully configured hardware and software, offering unlimited use of all Altair software on the appliance. The company says that it can be set up and running jobs in a few hours or less.
There are three different hardware offerings, ranging from six to 20 cores and 64GB to 256GB of memory. Altair uses SGI hardware, but sells and fully supports HyperWorks Unlimited itself, including the hardware. Like the software, the hardware is leased rather than sold, so ongoing support is an integral part of the lease.
It’s also possible to turn an existing cluster into a cloud for better manageability. Penguin Computing’s Scyld HCA turns any HPC Linux cluster into a managed HPC hybrid cloud environment. Among the features it adds to a traditional cluster are virtual login host lifecycle management, object and data storage, user and group administration, metering and reporting, authentication, a Web-based portal, and a graphical user interface (GUI) for hardware and workflow management. If you’re not yet ready to move entirely to a cloud appliance, this approach could be a good initial step.
Not Just for Computation
NVIDIA offers a different twist on the private appliance. While NVIDIA has been building a reputation for high levels of computational performance, its appliance offering is explicitly designed for graphics processing and performance, focusing on rendering complex graphics. The NVIDIA GRID Visual Computing Appliance (VCA) runs engineering design applications and sends their graphics output over the network to be displayed on a client computer.
The NVIDIA GRID VCA is certified and supported by Dassault Systemes to virtualize and remotely deliver SolidWorks 2014 over the network. The NVIDIA GRID can also be used to accelerate AutoCAD designs and send the results to one or more clients. Right now, these are the only application partners available, but NVIDIA expects to be adding more in the future.
The NVIDIA GRID appliance consists of eight GPUs with a total of more than 23,000 compute unified device architecture (CUDA) cores, and 20 Xeon CPU cores with hyperthreading. Each GPU has 12GB of memory, and the system memory is 256GB. It renders either application blazingly fast for up to eight concurrent users.
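Those headline numbers are easy to sanity-check. The short sketch below back-calculates the totals; the 2,880 cores-per-GPU figure is an assumption (a Kepler GK110-class part), not taken from NVIDIA’s spec sheet:

```python
# Back-of-the-envelope check of the GRID appliance specs quoted above.
# CORES_PER_GPU is an assumption (Kepler GK110-class GPU), not an NVIDIA figure.
NUM_GPUS = 8
CORES_PER_GPU = 2880       # assumed per-GPU CUDA core count
GPU_MEMORY_GB = 12         # per GPU, from the article
SYSTEM_MEMORY_GB = 256     # from the article

total_cuda_cores = NUM_GPUS * CORES_PER_GPU
total_gpu_memory_gb = NUM_GPUS * GPU_MEMORY_GB

print(f"Total CUDA cores: {total_cuda_cores}")       # 23040 -> "more than 23,000"
print(f"Aggregate GPU memory: {total_gpu_memory_gb} GB")  # 96 GB across eight GPUs
```

Under that assumption the arithmetic lands at 23,040 cores, consistent with the “more than 23,000” figure quoted above.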
This solution provides flexibility for engineers looking for a fast way of doing design rendering while also spreading the cost out among a small engineering team. Conceivably, it can also be used strictly as a compute server, with applications that have appropriate NVIDIA GPU support, but that may not be a cost-effective use.
The Good and the Bad
In many of these alternatives, software licensing policies remain a stumbling block. Some vendors’ licensing policies are still complex, and weighted to be increasingly costly for more powerful systems. Groups that want to integrate their preferred application software with an appliance themselves have to understand, budget for and manage application licenses for the appliance.

While local appliances let a group feel more in control of its computing resources than with a public cloud, that control comes with added responsibility. For example, if a group decides it needs additional cores or memory for a particular job, those resources are easy and seamless to obtain from a public cloud.
With a private cloud appliance, however, it requires either upgrading the existing hardware or buying an additional appliance. Both are time-consuming and expensive endeavors. The appliance can be shared, so it’s still often a better solution than newer and more powerful workstations—but it still requires a significant budget. That’s why some teams are looking at them as supplements to existing resources rather than workhorses.
On the positive side, an appliance can be more flexible and cost-effective than running the same applications on individual engineering workstations. By pooling computing power in a single shared device, a small group of engineers can deliver more of that power to any individual job at a given time.
Manageability is also one of the key advantages of the private cloud appliance. For groups that have limited IT support, or that must perform their own system administration, the appliance frees up time for real engineering work. In most circumstances, you can simply hook it up, turn it on, configure it and forget about it.
And while some still worry about security, many public clouds offer advanced security features, including encrypted transfers and protected virtual machines (VMs) on partitioned servers. While there is no such thing as absolute protection, under many circumstances it can be about as safe as it is within the firewall.
The bottom line is that the cloud for HPC, and even for rendering, is gradually becoming more popular. This is especially true for organizations without dedicated IT resources, and those without ongoing analysis needs. Engineering groups looking to address growing HPC needs with flexibility should start looking to the cloud. Whether that is a public or private cloud depends on the unique requirements of the group. Each has advantages and limitations, so any group should have a good understanding of their needs before determining a direction.
About the Author
Peter Varhol
Contributing Editor Peter Varhol covers the HPC and IT beat for Digital Engineering. His expertise is software development, math systems, and systems management. You can reach him at [email protected].