February 20, 2024
The Graphics Processing Unit (GPU) was once specialized hardware for accelerating visualization, prized especially by gamers. But with the introduction of general-purpose GPU computing, championed by NVIDIA, its role expanded. Today it still makes in-game explosions and firefights much more realistic with real-time rendering, but it has also become a way to accelerate engineering simulation, AI, and machine learning. Industry analyst Jon Peddie Research’s (JPR) report “Accelerating and Advancing Computer Aided Engineering Workflows” shines a spotlight on how the GPU is changing simulation-led engineering. The report, authored by JPR analysts Kathleen Maher and Jon Peddie and sponsored by NVIDIA, is available as a free download.
In Part I, the report focuses on how GPU computing changes workstation-based CAE practices and applications. JPR notes, “The users can run larger simulations faster on their desktop workstations, and as a result, they can optimize their designs with more iterations.”
In Part I, JPR concludes:
- GPUs can achieve 5× or more throughput for the same cost as a CPU.
- They can achieve lower cost and power consumption for the same throughput as a CPU.
Part II expands the coverage to include high-performance computing (HPC) in private data centers and private clouds, as well as public cloud-based resources for running large engineering simulations from vendors such as Microsoft Azure, Rescale, and ISV-specific cloud services. The report draws on examples from software tools by Altair, Ansys, Cadence, Dassault Systèmes, and Siemens.
CAE at a Higher Scale
A growing number of simulation software programs have refined their code to take advantage of the GPU. Many have also added tools and options to tap on-demand HPC, especially with the proliferation of servers outfitted with multiple high-performance data center GPUs. This combination lets simulation software users tackle models that were previously impractical to study and scenarios that were previously impossible to simulate.
Some, such as Ansys Discovery, are written from the ground up to take advantage of the cloud and GPU. JPR notes that Ansys Fellow Dipankar Choudhury described Ansys’ development of Discovery as a multi-physics simulation product for design engineers: “It was designed as a tool to enable engineers and designers to perform more upfront simulations on their local workstations.”
Siemens’ Simcenter Cloud HPC and STAR-CCM+ have also joined the list of ISV-specific cloud HPC solutions leveraging GPUs. Last June, Daniele Obiso, Simcenter STAR-CCM+ Technical Product Manager at Siemens, wrote in a blog post, “The coupled solver in Simcenter STAR-CCM+ is a very robust and efficient density-based solver that has been for years the best practice for several industrial applications, amongst which: automotive vehicle external aerodynamics, aerospace aerodynamics, turbomachinery aero performance and Conjugate Heat Transfer (CHT) blade cooling … In Simcenter STAR-CCM+ 2306 all this will be available on GPU, providing you with a solution for faster turnaround time and lower costs per simulation. Moreover, we ensure CPU-equivalent flow solutions by maintaining a unified codebase, hence providing a seamless user experience and consistent results irrespective of the hardware technology used.”
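Siemens does not publish the internals of Simcenter STAR-CCM+, but the “unified codebase” idea in the quote mirrors a familiar pattern in GPU computing: write the numerics once against a common array API and select the CPU or GPU backend at run time. Below is a minimal Python sketch of that pattern using NumPy with the CuPy library as an optional GPU backend (the Jacobi solver is illustrative only, not Siemens’ code):

```python
import numpy as np

try:
    import cupy as cp   # GPU array library with a NumPy-compatible API
    xp = cp             # run on the GPU if CuPy (and a device) is available
except ImportError:
    xp = np             # otherwise fall back to the CPU

def jacobi_step(u, f, h):
    """One Jacobi sweep for -laplace(u) = f on a 2D grid with zero boundaries.
    The same code runs unchanged on NumPy (CPU) or CuPy (GPU) arrays."""
    u_new = u.copy()
    u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] +
                                h**2 * f[1:-1, 1:-1])
    return u_new

u = xp.zeros((256, 256))       # solution field
f = xp.ones((256, 256))        # source term
for _ in range(100):
    u = jacobi_step(u, f, h=1.0 / 255)
```

Because both backends execute the identical update, the CPU and GPU paths produce consistent results, which is the property the Siemens quote emphasizes.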
CFD Benefits from GPU
Computational Fluid Dynamics (CFD) generally puts a heavy demand on hardware, but it has also been shown to benefit greatly from GPU acceleration. JPR writes, “Traditional CPU-based solvers require lengthy processing times, sometimes spanning days for just a few seconds of real-world activity. GPU-accelerated CFD solvers have been available but faced limitations in feature parity and model size constraints. Altair's CFD Lattice Boltzmann Method (LBM) solver, ultraFluidX, has changed the landscape with its efficient GPU-based implementation, making it ideal for high-fidelity aerodynamic and aero-acoustic simulations.”
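ultraFluidX itself is proprietary, but the Lattice Boltzmann Method it implements is a textbook algorithm, and its structure explains the GPU fit: every lattice cell performs the same local collide-and-stream update independently, which maps naturally onto thousands of GPU cores. A minimal NumPy sketch of the standard D2Q9/BGK scheme on a periodic grid (illustrative only, not Altair’s code):

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their quadrature weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def lbm_step(f, tau):
    """One collide-and-stream update; f has shape (9, ny, nx)."""
    rho = f.sum(axis=0)                                   # density
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho      # x velocity
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho      # y velocity
    usq = ux**2 + uy**2
    for i in range(9):                                    # BGK collision
        cu = c[i, 0] * ux + c[i, 1] * uy
        feq = w[i] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
        f[i] += (feq - f[i]) / tau
    for i in range(9):                                    # streaming
        f[i] = np.roll(f[i], shift=(c[i, 1], c[i, 0]), axis=(0, 1))
    return f

f = np.tile(w[:, None, None], (1, 64, 64))   # fluid at rest (rho = 1)
for _ in range(1000):
    f = lbm_step(f, tau=0.6)
```

Each cell’s update depends only on its own values (collision) and a fixed-offset neighbor (streaming), with no global linear solve to serialize the work, which is why LBM codes scale so well across data center GPUs.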
JPR singles out the introduction of NVIDIA's H100 Tensor Core GPU as the watershed moment for CFD simulation. The report says, “[It] has brought about a revolution in demanding CFD workloads. With up to 18,432 FP32 CUDA cores and various configurations, it enables efficient production-scale CFD simulations with remarkable performance improvements.”
The workstation-class NVIDIA RTX 6000 Ada GPU offers a comparable 18,176 CUDA cores, making workstations equipped with it viable in this space as well.
AI Gold Mine
In the last decade, NVIDIA has also positioned its GPUs as the preferred processors for highly compute-intensive machine learning and AI development. In May 2023, The Wall Street Journal reported, “Nvidia Joins $1 Trillion Club, Fueled by AI’s Rise.” Since their introduction, NVIDIA RTX GPUs have included integrated Tensor Cores for accelerating AI and machine learning routines, so the same GPUs customers are currently using for general compute acceleration can also provide a significant boost for emerging AI-backed features in engineering software.
JPR sees AI as another catalyst to drive CAE. “Employing AI and machine learning in CAE not only enables process automation but also accelerates the development of simulation tools accessible to non-experts, enabling a new level of democratization in CAE. New business models are emerging to transform product development processes.”
Altair has integrated its geometric deep learning engine, Altair® physicsAI™, into user-native simulation environments such as Altair® HyperWorks®. PhysicsAI leverages historical CAE and CAD data to predict outcomes for any physics up to 1,000× faster than traditional solver simulation. This saves organizations time and cost, as engineers can test more design variations than ever before without the limits of parametric studies or the need to build new simulation models.
According to benchmark data from Altair and NVIDIA, an NVIDIA RTX™ A4000 GPU provided an 8× speedup for training physicsAI models compared to an 8-core laptop CPU.
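Altair has not published physicsAI’s architecture, but the surrogate-modeling idea behind such tools can be sketched: fit a network to pairs of design features and solver results from past runs, after which each new prediction is a single forward pass rather than a fresh solve. A toy PyTorch example with placeholder data (reducing real CAE meshes to features is the hard part and is elided here):

```python
import torch
import torch.nn as nn

# Placeholder training set: each past design reduced to a 32-number
# feature vector, paired with one scalar result from a solver run.
X = torch.randn(512, 32)              # 512 historical designs
y = torch.randn(512, 1)               # e.g., peak stress from the solver

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
device = "cuda" if torch.cuda.is_available() else "cpu"  # GPU speeds up training
model, X, y = model.to(device), X.to(device), y.to(device)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):                  # fit the surrogate to historical runs
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Inference is one forward pass: this is where the orders-of-magnitude
# speedup over re-running the solver comes from.
with torch.no_grad():
    prediction = model(torch.randn(1, 32).to(device))
```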
This January, Ansys launched Ansys SimAI, described as “a physics-agnostic, software as a service (SaaS) application that combines the predictive accuracy of Ansys simulation with the speed of generative AI.” In its announcement, the company writes, “Instead of relying on geometric parameters to define a design, Ansys SimAI uses the shape of a design itself as the input, facilitating broader design exploration even if the structure of the shape is inconsistent across the training data. The application can boost the prediction of model performance across all design phases by 10-100X for computation-heavy projects. Customers can train the AI using previously generated Ansys or non-Ansys data. Training and predictions are hosted on a state-of-the-art cloud infrastructure to ensure that user data is secure and kept private.”
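Ansys has not disclosed SimAI’s model, but the “shape as input” idea contrasts with parametric surrogates like the sketch above: instead of a fixed vector of design parameters, the network ingests the geometry itself, so training designs need not share a parameterization. A minimal PointNet-style encoder in PyTorch shows one standard way to achieve this (a hypothetical sketch, not Ansys’ architecture):

```python
import torch
import torch.nn as nn

class ShapeEncoder(nn.Module):
    """Maps a point cloud of any size to a fixed-length embedding via
    order-invariant max pooling, so no shared parameterization is needed."""
    def __init__(self, dim=64):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                       nn.Linear(64, dim))
        self.head = nn.Linear(dim, 1)       # e.g., predicted drag coefficient

    def forward(self, points):              # points: (n_points, 3)
        feats = self.point_mlp(points)      # per-point features
        pooled = feats.max(dim=0).values    # global, order-invariant feature
        return self.head(pooled)

# Two designs with different, inconsistent discretizations both work:
enc = ShapeEncoder()
pred_a = enc(torch.randn(2048, 3))   # a surface sampled at 2,048 points
pred_b = enc(torch.randn(517, 3))    # a differently meshed design
```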
The approach could catch on, prompting other CAE software vendors to introduce tools and features based on the same principle. JPR writes, “[It] is now possible to train AI on a large dataset of CAE simulations enabling evaluations beyond a component or a single design. For example, it’s now possible to evaluate how a variety of models may interact with a variety of environments. Complexity can increase exponentially.”