Workstation, Cluster or Cloud?

Choosing where to run complex computational models has become a choice driven by more than just costs. Today, many factors come into play when choosing from among the traditional workstation, onsite computing grid or cloud services.

Engineers today have more choices than ever before, especially when it comes to computing. A robust ecosystem of technology solutions that transcend traditional computing barriers has become readily available and, more importantly, affordable. Simply put, engineers can pick and choose where to run their simulations, animations and design applications based upon their needs, not their pocketbooks. Let’s take a look at some of the options available today, and what a variety of computing professionals have to say about those options.

Modern desktop and mobile workstations are powerful enough to handle most of the tasks design engineers throw at them. Image courtesy of Lenovo, screen capture courtesy of Autodesk.

Workstation Workhorses

High-performance computing (HPC) workstations have long been residents of engineering and design departments across many types of businesses. These faithful adjuncts have supported CAD/CAM design applications, simulations and visualizations, while also serving as the repositories of complex calculations.

Truth be told, those very workstations have been a boon to engineering productivity, and have made it easier for engineers to delve into new frontiers. What’s more, those engineering workstations have evolved, gaining processing power while dropping in price. The result is advanced capability at only a modest premium over traditional desktop PCs, allowing workstations to take on many advanced engineering roles.

Jon Wells, senior designer at the Morgan Motor Co., a British automobile manufacturer, notes that “the power of the latest generation of workstations is allowing us to do more and more design work from the desktop, eliminating the need for expensive, dedicated CAD/CAM systems.”

What’s more, the growth in processing power and the compact nature of the latest generation of workstations bring additional advantages, such as scalability and portability. Art Thompson, vice president of Sage Cheshire Aerospace and leader of the team responsible for Felix Baumgartner’s 24-mile skydive last year, reports that “most of our day-to-day operations are performed on workstations, especially since we can pack those up and bring them out into the field when needed for last-minute simulations or to capture data.”

With the latest enhancements and the incorporation of solid-state drives (SSDs), caches and ultra-high performance video cards, Thompson adds that his team is finding that “we can standardize on a given vendor’s workstation to process our workloads and eliminate the need for specialized hardware.”

Going On-grid

Nevertheless, there are scenarios where even a well-equipped workstation won’t cut it, leaving engineers to seek alternatives that deliver far greater processing power. This has spurred a growing interest in grid computing, which enlists large numbers of machines to work on multipart computational problems such as circuit analysis or mechanical design.

“Certain simulations are beyond the scope of the typical workstation and require banks of CPUs to deliver a result, which is where the grid concept comes into play,” Thompson explains.
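
To make the idea concrete, the following is a minimal sketch in Python of the divide-and-conquer pattern a grid applies at much larger scale: independent simulation cases farmed out to a pool of workers. The simulate() function and its parameter sweep are hypothetical stand-ins for a real solver.

    # Minimal sketch of a grid-style parameter sweep: each case is
    # independent, so it can be handed to any available worker.
    from concurrent.futures import ProcessPoolExecutor

    def simulate(angle):
        # Hypothetical stand-in for a real solver run (CFD, FEA, etc.)
        return angle, angle ** 2

    if __name__ == "__main__":
        cases = range(16)  # one independent case per task
        with ProcessPoolExecutor() as pool:
            for angle, result in pool.map(simulate, cases):
                print(f"case {angle}: result {result}")

A real grid scheduler does the same thing across racks of machines instead of local CPU cores, which is why problems that decompose into independent pieces scale so well on it.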

Morgan’s Wells agrees. “Some simulations, such as aerodynamics, mechanical component stress testing and 3D animations require much more power than a workstation can deliver,” he adds.

Grid computing has garnered significant attention among scientists, engineers and business executives. It excels at solving complex mathematical problems, and it builds on the same lineage of developments that has already delivered distributed computing, collaborative computing and the Web.

However, grid computing is not all that new. In fact, it has been in use for several years, long enough for businesses to discover the high equipment and operational costs that can come with the technology. In the past, a grid-based system may have been the only way to solve certain engineering problems, but many firms are now turning to hosted, scalable resources to maximize productivity while minimizing costs.

Grant Kirkwood, CEO of Unitas Global, a cloud services and hosting firm, gives a good example of seeking alternatives: “We have a film studio as a customer, which had set up a grid for animation and FX work in their studio. The grid grew in size and expense, and created several heat and power problems. The film studio switched over to our hosted offering, and [as a result] eliminated the problems they were encountering and gained instant scalability.”

Scalability, however, may be only one of the keys to leveraging Platform-as-a-Service (PaaS) or Infrastructure-as-a-Service (IaaS); cost can matter just as much.

“German car company BMW recently moved its HPC operations to our Keflavik, Iceland, data center for a multitude of reasons,” notes Jeff Monroe, CEO of Verne Global, an international data center operator. “The most significant reason is that BMW expects to save around 80% of the power costs of running calculations, including crash test and aerodynamics simulations, as well as CAD/CAE calculations by using our data center.”

Echoing that thought is Morgan’s Wells, who notes that “offshoring HPC makes a lot of sense, as long as there are tangible savings and no latency problems.”

But savings may be only part of the story. BMW tested the network connections from Munich to Iceland, and Monroe reports that “the test results were a critical factor in their decision to place production systems in Iceland.” The move was also motivated by emissions concerns. With a large surplus and reliable long-term supplies of renewable energy, Iceland’s utilities can offer low rates and long-term contracts. Monroe says this is one of Verne’s core competitive advantages, and prices are guaranteed.

“We can offer customers a low, inflation-protected rate for up to 20 years,” he reports, noting that it’s a significant consideration, “in light of rising long-term electricity costs in Europe, the UK and US.”
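
For a sense of how a figure like 80% arises, here is a back-of-the-envelope sketch in Python. Every number below is an illustrative assumption rather than a figure from BMW or Verne Global; the point is simply that the savings track the ratio of electricity rates.

    # Rough power-cost comparison; all figures are assumptions.
    CLUSTER_KW = 200          # assumed average draw of an HPC cluster
    HOURS_PER_YEAR = 8760
    RATE_LOCAL = 0.15         # assumed $/kWh at the home facility
    RATE_REMOTE = 0.03        # assumed $/kWh at a renewable-powered site

    cost_local = CLUSTER_KW * HOURS_PER_YEAR * RATE_LOCAL
    cost_remote = CLUSTER_KW * HOURS_PER_YEAR * RATE_REMOTE
    print(f"local:   ${cost_local:,.0f} per year")
    print(f"remote:  ${cost_remote:,.0f} per year")
    print(f"savings: {1 - cost_remote / cost_local:.0%}")

With these assumed rates the model lands at 80% savings, but the real figure depends entirely on the two tariffs and the cluster’s actual draw.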

Head to the Clouds?

Cloud computing seems to be evolving into an HPC user’s dream, offering reasonable metered costs, effectively unlimited storage and instantly scalable computing resources. Nevertheless, HPC cloud offerings require extensive due diligence, simply because remote HPC services can be based upon shared HPC clusters, hybrid cloud offerings, fully virtualized cloud environments or other technological combinations.

Make sure a host can guarantee uptime through critical subsystems, such as onsite backup generators.

“Before moving to cloud-based HPC, one has to consider latency, scalability and usage requirements,” advises Unitas Global’s Kirkwood. “Luckily, several tools exist to vet those concerns. And of course, there is always an SLA (service level agreement) that spells out the expectations of the service, holding the provider accountable if it fails to meet service goals.”
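
One of those checks is easy to run yourself. The sketch below, which assumes a reachable HTTPS endpoint at the provider (the host name is a placeholder), measures the median TCP round-trip time from a workstation to a candidate data center.

    # Minimal latency check against a candidate provider; the host
    # name is a hypothetical placeholder.
    import socket
    import statistics
    import time

    def tcp_rtt_ms(host, port=443, samples=5):
        rtts = []
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=5):
                pass
            rtts.append((time.perf_counter() - start) * 1000)
        return statistics.median(rtts)

    print(f"median RTT: {tcp_rtt_ms('hpc.example.com'):.1f} ms")

Results like these, weighed against the interactivity the engineering team actually needs, go a long way toward deciding whether a remote cluster is workable.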

The tangibility of on-desk HPC can sometimes trump cloud and grid offerings, according to Philip Ra, vice president of Yazdani Studio, an architectural and design firm.

“Working with CAD/CAM designs in real time in a conference room environment proves to be a powerful capability that fuels ideas and enhances the customer experience,” Ra notes. “With that in mind, portability becomes a major concern and we leverage portable workstations to make that a reality.”

That said, there is an extensive ecosystem of hosts that are ready to customize HPC offerings to fit the needs of any given company. Take, for example, Open Data Centers, an organization that offers carrier-neutral co-location options.

“Our ability to continuously evolve our data center architecture and control processes allows us to meet the ever-changing demands of customers,” says Open Data Centers’ CEO Erik Levitt. “With 8,500 sq. ft. of scalable data center space, a 24-hour, on-site Network Operations Center, and N+2 infrastructure, we can offer the choice, flexibility and responsiveness of a more personalized data center.”

If choosing a hosting service, make sure the facility is neat, clean and secure.

While Levitt’s comments read like marketing, he does make a reasonable point. Today’s hosts are more than willing to build custom offerings on top of existing infrastructures, helping to shift the provisioning and management of HPC to an external resource.

Nonetheless, the question still remains: Should engineering firms invest in workstations, grid computing or hosted offerings? While there is no easy answer to that question, there are several rules of thumb that can make the selection process more navigable. Questions to consider include:

  • What applications need to be supported? For example, CAD/CAM applications such as AutoCAD, BricsCAD, IntelliCAD and several others are designed for workstations running Microsoft Windows. Other applications may run under Linux, Solaris and so on, which has an impact on the type of computing environment needed.
  • What type of output is expected by the application? Will the output be used to drive 3D printers, computer numerically controlled (CNC) equipment, plotters or presentation systems?
  • How much processing power is needed? Does the processing power need to scale occasionally, frequently or never?
  • Which fits the business model better, capital expenses or operational expenses? Each has its pros and cons, and the choice is often made on a project-by-project basis (see the cost sketch following this list).
  • Is there a baseline configuration used for each and every project? Or do HPC requirements change on a project-by-project basis?
  • Is there sufficient staff on hand to support the computing environment? Grids need maintenance, and workstations need management. Is the business capable of handling those needs internally?
  • What controls need to be put in place to guarantee uptime, meet business continuity needs or support disaster recovery plans? Some businesses can survive a few hours of downtime, while others must have continuity. Answers to those questions will drive the design of the infrastructure.
  • Do offsite operations need to be supported? Will engineers be working in the field? Will site offices be established? This drives the decision of whether a compute solution needs to function in isolation or requires some type of connectivity.
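
As for the capital-versus-operational expense question above, the comparison often comes down to simple arithmetic. The sketch below weighs amortized workstation purchases against metered hosted capacity; every figure is an assumption chosen only to show the shape of the comparison, not a market price.

    # Illustrative CapEx vs. OpEx comparison; all numbers are assumptions.
    WORKSTATION_PRICE = 4000      # assumed purchase price per seat
    SEATS = 10
    LIFETIME_YEARS = 3            # assumed amortization period

    CLOUD_RATE_PER_HOUR = 1.50    # assumed rate for an equivalent instance
    HOURS_PER_YEAR = 2000         # assumed utilization per seat

    capex_per_year = WORKSTATION_PRICE * SEATS / LIFETIME_YEARS
    opex_per_year = CLOUD_RATE_PER_HOUR * HOURS_PER_YEAR * SEATS
    print(f"owned workstations: ${capex_per_year:,.0f} per year (amortized)")
    print(f"hosted capacity:    ${opex_per_year:,.0f} per year (metered)")
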
While the above considerations are just a fraction of what a complete computing design plan should include, they do offer a basic guideline that should help narrow down what works best for any given HPC need.

Frank Ohlhorst is chief analyst and freelance writer at Ohlhorst.net. Send e-mail about this article to [email protected].

More Info

The Morgan Motor Co.

Open Data Centers

Sage Cheshire Aerospace

Unitas Global

Verne Global
