Engineering Computing News
October 1, 2014
Editor's Note: This is the fifth part of Desktop Engineering's series on high-performance computing options. Read the other articles in this series here.
Until just a few years ago, high-performance computing (HPC) for engineering was a relatively straightforward proposition: You bought the fastest workstations you could afford, with multiple cores and plenty of memory. You ran parts of your work locally, or ran an analysis at low fidelity. For final work, you sent the job to a cluster for the best balance of turnaround time and fidelity of results.
Because of all the available options today, though, setting up an HPC platform can become an expensive and time-consuming process. In many cases, the best workstations are powerful enough to take on an increasing share of the CAD, analysis and rendering workload. And their flexibility is such that engineers can often gain more insight into their designs at an earlier stage in the design process than in the past.
But server clusters are still the workhorse of engineering groups; enterprises with one or more good-sized groups have likely invested in cluster technology. To fully utilize existing hardware and software licenses, engineering groups are likely to keep feeding important jobs to a cluster when one is available.
Send in the Cloud?
The cloud is increasingly becoming the wild card in the design process. There is the potential to rent time with your favorite CAD or analysis tool, and perform your work faster and less expensively than if the software and hardware were on-premises.
The economics are compelling, and often a cloud solution can be both powerful and flexible. For software vendors, it makes it possible to reach more potential users with a greater variety of solutions.
That said, the cloud has issues that make it less than ideal for every situation, at least at this point. There are a couple of different strategies for leveraging the cloud in engineering work.
One is to rent cloud time for all computations beyond the desktop, forgoing an on-premises cluster. But configuring a CAD or analysis software package for cloud execution involves challenges. The software and payload have to be configured as a virtual machine (VM), and sent off to the cloud as a single file. If the intent is to run it on multiple machines in the cloud, users have to manually set up communications among the VMs to share processing power and data. For many, the uncertainty of success and the amount of work involved can be a deterrent.
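As a rough illustration of the manual wiring involved, the sketch below generates an MPI-style hostfile that an engineer might use to tell a distributed solver which cloud VMs to spread across. The IP addresses, slot counts and the launch command in the closing comment are hypothetical placeholders, not taken from any particular cloud provider or solver.

```python
# Hypothetical sketch: listing cloud VMs in a hostfile so an MPI-based
# solver can distribute work across them. The IPs and slot counts are
# placeholders; a real setup also needs SSH keys, open ports and a
# shared view of the input data.

def build_hostfile(vm_ips, slots_per_vm):
    """Return hostfile lines in the common 'host slots=N' format."""
    return [f"{ip} slots={slots_per_vm}" for ip in vm_ips]

def write_hostfile(vm_ips, slots_per_vm, path="hosts.txt"):
    with open(path, "w") as f:
        f.write("\n".join(build_hostfile(vm_ips, slots_per_vm)) + "\n")
    return path

# A solver would then be launched across the VMs with something like:
#   mpirun --hostfile hosts.txt -np 24 ./solver model.inp
```

Even this minimal step assumes every VM is reachable and identically provisioned, which hints at why many engineering groups find the setup work a deterrent.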
Licensing may also be an issue. First, unless you own enough licenses, you may not be able to run your job on the optimum number of servers. Second, even connecting to the licensing server from the cloud could be problematic. According to Silvina Grad-Frelich, HPC marketing director at MathWorks, accessing a licensing server from within a cloud environment requires special configurations. She says the company is currently testing a beta solution.
Organizations dependent upon their intellectual property are also concerned about the security of computation and data storage in the cloud. If the cloud environment or network connection could be compromised, new designs or simulation data could be stolen or held hostage. There is still significant distrust in letting cutting-edge designs outside of the firewall. Rob Winding, an engineer at consultancy Design Solutions, notes that the best designs are kept under lock and key: “The check-out and check-in processes for designs tend to be pretty rigorous, in part for security.”
All of these potential problems are solvable, but they can be technically difficult. An increasing number of design and analysis software vendors are building partnerships with the major cloud providers or launching their own services, enabling their software to work far more seamlessly with a particular cloud. While a vendor's chosen partner may not be the organization's preferred cloud provider, these partnerships give engineers options for using only the computing resources they need.
Start with the Workstation
For either CAD or analysis and simulation, the fastest modern workstations offer the best choice for convenience. A designer or engineer can kick off a job at any time, providing a short turnaround time for new ideas. Using a virtualization product like Parallels, engineers can assign memory and cores to that job—giving them some control over how long it takes, while also letting them do other work on the same computer. Engineering groups can also use Parallels to tie together multiple workstations, and use their excess capacity to drive analyses that can make use of multiple cores and systems.
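The core idea — dedicating a fixed share of a workstation's cores to a background job while keeping the rest free for interactive work — can be sketched in plain Python, independent of any particular virtualization product. The `evaluate_case` function below is a stand-in for real analysis code, and the parameter values are invented for illustration.

```python
from multiprocessing import Pool

def evaluate_case(params):
    # Stand-in for a real per-case computation (e.g., a load case).
    load, stiffness = params
    return load / stiffness

def run_job(cases, workers=2):
    # Cap the worker-process count so the remaining cores stay
    # free for CAD or other interactive work on the same machine.
    with Pool(processes=workers) as pool:
        return pool.map(evaluate_case, cases)

if __name__ == "__main__":
    deflections = run_job([(100.0, 20.0), (250.0, 20.0)], workers=2)
```

Raising `workers` shortens the run at the cost of desktop responsiveness, which is exactly the trade-off the virtualization tools expose through a configuration dialog.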
But there are a couple of limitations. First, working in this fashion requires a very recent, high-performance (and expensive) workstation; a mid-range model from two or three years ago isn't going to be much help. Second, even a state-of-the-art workstation with top-line graphics and plenty of cores and memory won't suffice for some final work. Some very high-fidelity designs and simulations require more time or horsepower than a workstation can provide.
That doesn’t mean that the workstation isn’t useful, though. In cases involving simpler designs, workstations and small workstation clusters could reasonably be used from inception to final design. When designs are more complex, workstations can be used to model individual components and smaller subsystems of larger designs. Connecting them into small, ad hoc clusters can increase the processing power available.
Relying on the Cluster
A compute cluster is an expensive investment for many small engineering organizations. In addition to the hardware, it requires skilled IT personnel to manage and maintain. But for many mid-sized and larger organizations, clusters remain the core of the analysis and simulation operation. They are much more powerful than individual workstations, yet keep data and designs inside the firewall.
The cluster isn’t going to go away anytime soon, but it is changing. Rather than multiple expensive servers, clusters can range from small groups of workstations to a number of interconnected systems with at least 32 cores each. The flexibility of configuration and the ability to serve many different computing needs makes a cluster an important tool for mid-sized and larger engineering organizations. As long as the horsepower can remain fully utilized by multiple groups and projects, clusters of some type will carry the bulk of the rendering, analysis and simulation work for many engineers.
The Inevitability of the Cloud
Ultimately, it is likely that much more engineering design and computation will occur in the cloud. For organizations with only occasional HPC needs, the cloud is enticing. While most license prices will likely remain high for a while, available supply will likely push down prices for occasional use in the future.
Cloud execution is technically difficult today because most commercial software packages weren’t developed for remote execution and virtualization, and retrofitting them to the cloud poses real challenges. Those problems are being addressed, though, and in a few years engineers may not know whether they are dispatching jobs to local clusters or remote data centers.
However, engineering organizations remain wary of cloud security. In many cases, the intellectual property inherent in new designs can be worth a great deal of money, and companies can be reluctant to use public computing resources.
In response to these concerns, cloud providers are working to better secure both servers and data transfers between the data center and customer. This includes automatic encryption, and in some cases, dedicated VPN connections between the organization and data center.
Is There a ‘Best’ Solution?
CAD is still largely an individual engineering activity, performed on workstations. However, in some cases complex drawings, especially complete designs, are rendered on servers or supercomputers. In some cases, this handoff can slow down the design process—to the dismay of engineers waiting in a queue.
Analysis and simulation are performed less often on the desktop, primarily because of the computing horsepower involved, and because many engineers don’t have the latest and fastest workstations.
But there are more ways to offload some of that processing onto workstations. An increasing amount of simulation is occurring on HPC workstations and workstation clusters. This allows for a higher level of interactivity than other platforms: Engineers can examine results, change a few variables and kick off a new run immediately, rather than having to schedule an entirely new job. The fidelity likely won’t match a dedicated cluster’s, but the fast turnaround for different types of analyses makes the workstation a valuable resource.
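That interactive loop — inspect results, tweak a variable, rerun immediately — can be sketched as a simple local parameter sweep. The `solve` callable and the parameter names below are placeholders for a real coarse-fidelity solver run on the workstation.

```python
def local_sweep(solve, base_params, tweaks):
    """Rerun a solver locally, varying one parameter at a time.

    solve: callable taking a dict of parameters (a placeholder here
    for a real workstation-class analysis run).
    tweaks: mapping of parameter name -> list of values to try.
    """
    results = {}
    for name, values in tweaks.items():
        for value in values:
            params = {**base_params, name: value}
            # Each rerun starts immediately; no batch queue involved.
            results[(name, value)] = solve(params)
    return results
```

A toy use would be `local_sweep(my_solver, {"load": 10.0, "scale": 1.0}, {"scale": [1.0, 2.0]})`, after which the engineer inspects the results and picks the next variation — the kind of tight iteration a shared batch queue makes awkward.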
While there is a lot of interest in cloud computing today, the technology, products and design initiatives are still evolving. But for at least occasional computational jobs, the cloud represents a lot of horsepower for relatively little money.
Anyone looking for a single and definitive solution today is doomed to disappointment. However, those engineers who remain flexible will see the trends working in their favor over the next several years. The server cluster may be de-emphasized as the workhorse for serious engineering computing, but will remain an important solution for the foreseeable future. Faster workstations, workstation-based clusters, and cloud solutions are starting to fill in the gaps, resulting in more iterative design solutions.
About the Author
Peter Varhol
Contributing Editor Peter Varhol covers the HPC and IT beat for Digital Engineering. His expertise is software development, math systems, and systems management. You can reach him at [email protected].