A Technology in Flux

Part 2: While hardware is poised to deliver tremendous computing power to engineers, software might be holding it back.

By Tom Kevan

 

The parallel processing capabilities of software applications are not all created equal — some types adapt to this form of computing better than others. For example, computational fluid dynamics packages have been able to achieve high levels of scalability, with some running across as many as 256 cores. Image courtesy of International Truck and Engine.

Today’s high-performance computing (HPC) hardware can deliver tera-FLOP-level computing power. The latest generation of supercomputers offers parallel processing that enables engineers to take large data sets and perform complex analysis overnight. But there’s a fly in the ointment: Not all software is up to the challenge.

  Not all software scales well. Multidisciplinary studies are hampered by programs’ inability to work together, and the interfaces of CAE and CAD applications differ enough to deter using the two in combination.

  It’s time to look at the problems.

Scaling Software
One of the greatest hurdles facing HPC is the lack of availability of software structured to take advantage of parallel processing. The burden of solving this problem rests with independent software vendors. Their challenge is to develop software that breaks the application processes down into smaller and smaller pieces to maximize the use of all the system’s cores.
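To make the decomposition idea concrete, here is a minimal sketch in Python. It is purely illustrative, and the function and variable names are invented for this example: a mesh is split into one partition per core, each partition is handed to a worker process, and the partial results are combined.

```python
from multiprocessing import Pool

def partition(cells, n_parts):
    """Split a list of cells into n_parts nearly equal chunks,
    mimicking the domain decomposition a parallel solver performs."""
    size, rem = divmod(len(cells), n_parts)
    parts, start = [], 0
    for i in range(n_parts):
        end = start + size + (1 if i < rem else 0)
        parts.append(cells[start:end])
        start = end
    return parts

def solve_chunk(chunk):
    # Stand-in for the per-core solver work done on one partition.
    return sum(c * c for c in chunk)

if __name__ == "__main__":
    cells = list(range(1_000))
    parts = partition(cells, 8)            # one partition per core
    with Pool(8) as pool:
        partials = pool.map(solve_chunk, parts)
    total = sum(partials)                  # combine partition results
    assert total == sum(c * c for c in cells)
```

Real solvers face the harder part this sketch omits: cells on partition boundaries need data from neighboring partitions, so workers must exchange boundary values on every iteration, and that communication is what limits how small the pieces can usefully get.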

 

This 4-million cell CFD benchmark model from ANSYS demonstrates performance on cluster sizes up to 64 cores. Each color represents a partition for parallel processing, with each partition mapped to a separate core. Image courtesy of ANSYS, Inc.

The truth of the matter is that some applications adapt to parallel computing more easily than others; some problems are inherently much harder to parallelize. Unfortunately, in design engineering, more software applications fall into that harder category.

HPC systems running CAE applications generally are able to take advantage of only a small number of cores.

“It’s not that the applications aren’t big enough to deserve that kind of computing power,” says IDC Research Vice President for HPC Steve Conway, “but because the software doesn’t scale that well.”

  The CAE realm can be divided into two software domains: computational fluid dynamics (CFD) and structural or finite element analysis (FEA).

 

Structural software is used to prototype systems that must perform in harsh environments. In this application, the program uses the finite volume method and polyhedral cells to perform stress analysis on an automobile engine cylinder head. (Image courtesy of CD-adapco.)

“Computational fluid dynamics software applications traditionally have been able to achieve the highest scalability because of the physics they are trying to solve and the numerical algorithms they are using,”  says Knute Christensen, manager of the HPC Partners, Solutions & Segment Marketing Group at Hewlett-Packard. “For example, an application such as FLUENT from ANSYS can run across 128 or 256 cores easily and will scale very well in that environment.” 

It’s no secret that CFD codes use what is known as explicit technology, which inherently lends itself to parallelization. Many conventional structural analysis codes use implicit technology; because of the algorithms and solution procedures involved, an implicit code will never scale as well as an explicit one. Implicit codes can be broken down for parallelism, but as cores are added, the speed gains rapidly diminish.
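These diminishing returns can be quantified with Amdahl's law, which bounds the speedup of a program by the fraction of its work that must run serially. The serial fractions below are assumed, illustrative numbers chosen to contrast an explicit and an implicit code; they are not measurements of any particular package.

```python
def amdahl_speedup(cores, serial_fraction):
    """Amdahl's law: maximum speedup on `cores` cores when
    `serial_fraction` of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Assumed serial fractions: ~1% for a well-parallelized explicit code,
# ~10% for a conventional implicit code.
for cores in (8, 64, 256):
    explicit = amdahl_speedup(cores, 0.01)
    implicit = amdahl_speedup(cores, 0.10)
    print(f"{cores:4d} cores: explicit {explicit:6.1f}x, implicit {implicit:5.1f}x")
```

Under these assumptions, at 256 cores the 1%-serial code still achieves a speedup of roughly 72x, while the 10%-serial code tops out below 10x, which is why adding cores to an implicit solver quickly stops paying off.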

“Today, we have processors with two or four cores, but after two and four come eight, 16, 32, 64, and so forth,” says IDC’s Conway. “So you are going to have a lot more parallelism in the hardware that the software is going to need to catch up with.”

“You are getting more cores in a socket,” says HP’s Christensen. “So the socket is getting somewhat constrained by the amount of data that can get into the socket. In the past, for example, FLUENT has been written around a distributed memory parallel construct. It didn’t matter how many cores were in a socket because the view worked…. Now the first core in the socket is going to deliver a certain percentage, but the last core in the same socket is going to deliver a fraction of the performance in that kind of parallel construct.”

  As a consequence, application providers need to think of other ways to extract performance from the cores within a socket, or they need to rethink their pricing strategy, which is set on a per-core basis. If one core is delivering 100 units of performance and the last core is delivering only 20 units of performance, the user is not going to think it fair that the licensing fees for the two cores are the same because their performance is so different.
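A toy model makes the pricing tension concrete. The per-core throughput figures below are assumptions invented for illustration, echoing the 100-unit/20-unit example above; they are not measured values for any real processor.

```python
def socket_throughput(core_perf, n_cores):
    """Total throughput when the first n_cores of a socket are in use."""
    return sum(core_perf[:n_cores])

# Assumed per-core throughput as memory bandwidth within the
# socket saturates; license fees stay flat per core.
CORE_PERF = [100, 85, 60, 40, 30, 25, 22, 20]
FEE_PER_CORE = 10

for n in (1, 4, 8):
    perf = socket_throughput(CORE_PERF, n)
    fees = n * FEE_PER_CORE
    print(f"{n} cores: {perf} units for {fees} in fees "
          f"({perf / fees:.1f} units per fee unit)")
```

In this model, eight cores cost eight licenses but deliver well under eight times the first core's throughput, so the price per unit of delivered performance climbs with every core added.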

 

Boeing used virtual prototyping in developing the 787 Dreamliner aircraft. Thanks to HPC, Boeing performed destructive testing on only 11 prototype wings. Image courtesy Boeing.

Multiphysics
Many engineering projects require analysis in more than one field of study to obtain the desired information. This is a multiphysics problem.

  For example, many engineering firms serving the automotive industry must perform both CFD and structural analysis. A common application involves the design of a vehicle’s side mirrors and whether they create noise that reaches into the vehicle’s cabin. To reduce that noise, a supplier might have to study the flow of air (a fluid) over the side-view mirror (a structure): a study of the fluid-structure interaction. The difficulty is that the software applications (i.e., CFD and FEA) are designed separately, and they don’t always talk to each other well. They may even have different data-output formats.
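The manual bridging engineers resort to often looks like a small translation script. The sketch below converts a hypothetical CFD surface-pressure CSV into an equally hypothetical FEA load-card format; both formats are invented for illustration, since no specific file formats are named here.

```python
import csv
import io

def cfd_pressures_to_fea_loads(cfd_csv_text):
    """Translate a (hypothetical) CFD surface-pressure CSV into a
    (hypothetical) FEA load-card format -- the kind of glue script
    engineers write when two packages don't share a data format."""
    out = io.StringIO()
    reader = csv.DictReader(io.StringIO(cfd_csv_text))
    for row in reader:
        # One FEA card per surface node: LOAD, node id, pressure value
        out.write(f"LOAD,{row['node']},{float(row['pressure']):.3f}\n")
    return out.getvalue()

sample = "node,pressure\n101,12.5\n102,13.75\n"
print(cfd_pressures_to_fea_loads(sample))
```

Scripts like this work, but they must be rewritten whenever either vendor changes its format, which is exactly the fragility that built-in multiphysics coupling aims to remove.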

  Another example of a multiphysics problem involved studies conducted after the destruction of the World Trade Center towers. “When the World Trade Center collapsed, there was a lot of pressure to find out what happened inside the building to cause it to collapse,” says IDC’s Conway. “When they brought people from the National Institute of Standards and Technology (NIST) into the building and told them to study it, they had a lot of physics to deal with. They had to deal with structural analysis, fire, and heat. Fire and air have fluidic properties. Well, the applications didn’t talk to each other, and the NIST engineers essentially had to do that manually. They got it done about as fast as anybody could, but it took a lot longer than it would have if the applications were designed to work together.”

  Software providers are now addressing the communication gaps between applications that multiphysics problems expose. As HPC systems are called on to solve more engineering problems, the pressure to eliminate this shortcoming will only increase.

  CAD/CAE
As the linking of CAD and CAE continues to grow tighter, the issue of differing user interfaces becomes more prominent. In a nutshell, users don’t want to leave the familiar CAD environment to take advantage of CAE. The look and feel of the two environments, along with their tools, are becoming more similar, but moving between the two is still not as simple as moving from Word to PowerPoint, where everything pretty much has a similar user interface. That’s really not the case in moving data from CAD to CAE. “This is a big software issue that the vendors are constantly improving,” says Michael Schulman, HPC product line manager at Sun Microsystems.

 

FEA software is often used in motor vehicle prototyping. Here the program is used to analyze rigid body and flexible dynamics of the suspension system of a racecar. Image courtesy of ANSYS, Inc.

Applications
HPC has proven to be key in computer-aided engineering (CAE). “CAE is one of the real traditional HPC segments,” says HP’s Christensen. “In fact, it helped define HPC because it was so compute-intensive. The size and complexity of the problem was always limited by the compute capabilities rather than the engineer’s ability to craft the problem. As a result, to varying degrees, the applications have grown in the HPC environment, and they all run in these environments.”

  The workload can be broken down into three application areas: crash simulation or impact analysis, CFD, and structural analysis.

  Crash simulation supports virtual prototyping — the virtual design and simulation in 3D of a product and all of its components. This design technique significantly reduces the need for costly physical, or destructive, testing, which can involve applying stress to an aircraft wing until it fails or crash-testing automobiles.

  Boeing used virtual prototyping in developing the 787 Dreamliner aircraft. Thanks to HPC, Boeing performed destructive testing on only 11 prototype wings. For the prior generation plane, they had to perform 77 destructive wing tests. You can imagine the time and the money the company saved by reducing the number of physical prototypes needed to build and test.

  Another interesting example involves Whirlpool. In this case, the company found that an unacceptably high percentage of its washing machines were being dented between the factory and the retailer. And in that industry, it doesn’t take much of a dent or a scratch to render the appliance unsellable in a retail store. To address the problem, Whirlpool used HPC to simulate what was going on while the products were in transit. It not only identified the problem using HPC, but redesigned its packaging materials and the clamps that were used by its mobile network of distributors.

  CFD tests the movement of one material over another. One example of this type of application involves the design of new soccer balls every four years for the World Cup. Manufacturers use CFD on HPC systems to simulate how each ball behaves as it moves through the air; they discovered that reducing the number of panels and the amount of stitching has a large effect.

  Structural analysis also plays an important role in automobile design. It is used to identify and eliminate problems with noise, vibration, and harshness, as well as stresses to critical components.

  The software limitations discussed here are temporary; this arena promises to be the focus of intense development effort and the locus of one of the next growth spurts for HPC technology.

More Info:
ANSYS
Canonsburg, PA
ansys.com

Boeing
Chicago, IL
boeing.com

CD-adapco
Melville, NY
cd-adapco.com

Hewlett-Packard
Palo Alto, CA
hp.com

IDC
Framingham, MA
idc.com

Microsoft Corp.
Redmond, WA
microsoft.com

National Institute of
Standards and Technology
Gaithersburg, MD
nist.gov

Sun Microsystems
Santa Clara, CA
sun.com

Whirlpool Corp.
Benton Harbor, MI
whirlpool.com


  Tom Kevan is a New Hampshire-based freelance writer specializing in technology. Send your comments about this article to [email protected].
