Finding the Speed for Multiple Designs

How to simulate multiple design scenarios in high-performance computing environments.

By Jason Ghidella


Simulation is a core enabling technology for successful organizations using model-based design. By simulating software models for multiple scenarios, engineers can explore innovative designs and gain deeper insight into system behavior early in the development process, before physical prototypes of the system are available. This approach enables engineers to develop complex embedded systems efficiently and cost-effectively. The ability to run multiple simulations also helps organizations meet quality and robustness goals with techniques such as design optimization, design exploration, robustness studies, Monte Carlo studies, parameter sweeps, and bit error rate (BER) calculations.

Depending on the complexity of the model, the number of simulations to be completed, and the frequency of the task, simulation time can become a critical bottleneck in the development process. An approach that leverages the computing power readily available in today's multi-core desktops and general-purpose, commercial off-the-shelf (COTS) clusters can overcome this limitation.

Case in Point
Aerodynamic drag forces can significantly reduce the fuel efficiency of a vehicle. Typically, vehicle body design is an iterative process, in which trade-offs between performance and body style considerations are made. The iterative nature of the development process affects engineers across all phases of development.

In the initial phases, for example, powertrain controls engineers will not know the vehicle's aerodynamic parameters exactly. To overcome this uncertainty, they can use the executable specification of the control system, together with a model of the vehicle, to conduct a simulation study over a range of parameter values. This enables them to assess the robustness of their design across multiple body geometry variations. The results of such studies are invaluable to management as they make critical decisions on vehicle styling while balancing performance and fuel economy needs.

Figure 1: Low-fidelity automotive vehicle model built in Simulink. (Image: MathWorks)

Figure 1 shows a low-fidelity automotive vehicle model built in MathWorks Simulink, chosen specifically for the illustrative purposes of this article. Across a fixed drive cycle, vehicle speed is simulated and studied for a set of drag coefficients varying from 0.2, which is typical of a sleek electric concept car, to 0.6, which is typical of a large truck.

Intuitively, one might approach this parameter sweep by simulating drag coefficients at 0.2, 0.3, and so on. However, this would risk missing potential design candidates, as even a 2% change in aerodynamic drag can affect fuel economy by as much as 1%. Engineers will also want to evaluate performance robustness across parameter variations.
To address these issues, the study was conducted at much finer granularity. Specifically, the drag coefficient was increased in steps of roughly 0.4% of the 0.2 lower bound, from 0.2 to 0.6, resulting in 512 design scenarios and a dataset large enough to ensure confidence in the resulting vehicle design.
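In MATLAB, such a sweep vector is a one-liner; the sketch below reproduces the 512-point spacing described above (the variable names are illustrative, not from the study):

```matlab
% 512 evenly spaced drag coefficients spanning 0.2 to 0.6.
dragCoefficients = linspace(0.2, 0.6, 512);

% The resulting step is ~0.00078, i.e., roughly 0.4% of the 0.2 baseline.
step = dragCoefficients(2) - dragCoefficients(1);
```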

Because a low-fidelity model was used, a drive cycle spanning 1,000 seconds took, on average, less than 2 seconds to simulate on a single-core machine. Because 512 simulations were required for this study, completing them all serially on the same machine took approximately 730 seconds, or more than 12 minutes. For a more realistic, higher-fidelity model in which each simulation takes 10 to 15 minutes to run, the study would require about four days to complete.

Because the simulation scenarios are independent of one another, they can be executed concurrently on multiple processing cores. Parallel Computing Toolbox provides a scalable way to solve computationally intensive and data-intensive problems by distributing them across multi-core and multi-processor computers, as shown in Figure 2.

Using the parfor command, an intuitive high-level construct that parallelizes for loops, in conjunction with functions from the Simulink application programming interface (API), engineers can write a MATLAB script that automates the process. Engineers save time because they can program this within MATLAB instead of with complex cluster software.
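As a minimal sketch of such a script, assuming a model named vehicle_model that reads a workspace variable Cd and logs a signal named speed (all three names are hypothetical, not from the study), using the Simulink.SimulationInput API available in current MATLAB releases:

```matlab
dragCoefficients = linspace(0.2, 0.6, 512);      % scenarios to simulate
finalSpeed = zeros(size(dragCoefficients));

parfor i = 1:numel(dragCoefficients)
    % Configure one independent simulation per drag coefficient.
    simIn = Simulink.SimulationInput('vehicle_model');  % hypothetical model name
    simIn = simIn.setVariable('Cd', dragCoefficients(i));
    simOut = sim(simIn);                         % run this scenario
    % Record the last logged vehicle speed for the scenario.
    speedLog = simOut.logsout.get('speed').Values.Data;
    finalSpeed(i) = speedLog(end);
end
```

Because each iteration touches only its own SimulationInput object and output, parfor can farm the iterations out to local cores or to cluster workers without further code changes.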
This case study was conducted using a computer cluster comprising 16 quad-core computers, for a total of 64 processing cores. The number of available cores was varied to evaluate its effect on the overall simulation time. For a two-core setup, the simulation was completed in approximately 380 seconds. The resulting 1.9-fold speedup is close to the best possible outcome of a two-fold speedup.

Figure 3: Vehicle speed profiles for the given drive cycle for aerodynamic drag coefficients varying from 0.2 (blue) to 0.6 (green) show that the largest effect of drag occurs during the cruise phase. (Image: MathWorks)

A plot of the vehicle speeds for the standard drive cycle at different aerodynamic drag values is shown in Figure 3. The graph reveals some interesting properties of the automatic transmission controller used in this design. The largest drag-induced decline in vehicle speed was about 10 mph, during the cruise phase (shown on the right of Figure 3). Drag had less influence during the acceleration phases, and the braking phase actually benefited from the additional drag. Because the speed profile varies smoothly, these results indicate that the controller design is robust to changes in aerodynamic drag.

Simulation times for 4, 8, 12, 16, 24, 32, 40, 48, 56, and 64 cores are shown in Figure 4, where the overall time to complete the 512 simulations is plotted on the left axis and the resulting speedup relative to the baseline serial run is shown on the right axis.

Figure 4: Speedup increased almost linearly with the number of processing cores used. (Image: MathWorks)

The trend shows that speedup scaled almost linearly with the number of cluster processors used, while simulation time fell correspondingly. As the number of processors increased, the overhead of distributing data and work across them kept the improvement below a perfect linear speedup. For example, with 64 cores, the speedup was approximately 42-fold, not 64-fold.
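The shape of this curve is consistent with an Amdahl's-law model. The sketch below assumes a fixed serial/overhead fraction of 0.8%, a value chosen here purely to illustrate the trend, not one measured in the study:

```matlab
% Amdahl's law: speedup = 1 / (s + (1 - s)/N) for serial fraction s.
s = 0.008;                          % assumed non-parallelizable fraction
cores = [2 4 8 16 32 64];
speedup = 1 ./ (s + (1 - s) ./ cores);
% With s = 0.008 this yields ~1.98 at 2 cores and ~42.6 at 64 cores,
% close to the measured 1.9-fold and 42-fold speedups.
```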

Explore Better Designs with HPC
In this case study, the robustness of various designs was demonstrated by simulating multiple design scenarios. High-fidelity models and large parameter sets can create bottlenecks in the development process when using this approach. By taking advantage of a high-performance computing (HPC) environment, simulation time was shortened by a factor of more than 40 as the number of processing cores was increased to 64.

To put this speedup in context, a simulation study that would have taken four days on a single processor can be completed in about 2.5 hours on the cluster. These techniques can be applied with minimal engineering effort to eliminate the bottlenecks associated with running multiple simulations for design optimization, design exploration, robustness studies, Monte Carlo studies, parameter sweeps, and BER calculations.

Jason Ghidella is technical marketing manager at MathWorks, Natick, MA.

For more information:
MathWorks
