Large-Scale Models Take A Bite Out of Engineering Productivity
Hardware vendors and design tool makers are stepping up with solutions aimed at optimizing workstation performance when dealing with large-scale modeling.
March 1, 2020
Engineers are building larger multidisciplinary models to flex their innovation muscle. Although these complex, high-fidelity 3D models are crucial to bringing sophisticated products to market faster, they can wreak havoc on workflows and productivity by grinding workstation performance to a halt.
Time-to-market pressures, surging consumer demand for highly personalized and intelligent offerings, as well as software and electronics content commanding a larger portion of product real estate are just some of the factors driving the need for larger and more complex 3D models. An increasing embrace of systems modeling concepts along with pervasive simulation use, including new modalities and more widespread adoption among a broader audience, is also boosting model fidelity to a point where it can be taxing for older workstations to maintain effective performance.
“Customers are trying to add more value for their customers and many are doing that by moving to system-level design,” says Jon den Hartog, director of product management at Autodesk. “That means the scope of what they’re modeling increases as a result. At the same [time], the scope is increasing because they are trying to create a more accurate digital representation of what the design is before they build it.”
This means engineering teams newly empowered by multidisciplinary simulation and systems engineering workflows are also taking a productivity hit unless they recalibrate what constitutes an optimal workstation configuration.
“Without enough horsepower, it affects their quality of life working with a massive model,” den Hartog says. “Every change will require a significant amount of time to calculate and propagate the math throughout the model.”
Hardware Advances Drive Model Performance
Just as 3D models are gaining in complexity, advances on the hardware front are continuing to help engineering organizations solve the large-scale model performance problem. New solid-state storage options, CPUs with faster clock speeds and more powerful graphics processing units (GPUs) are just some of the hardware advances being integrated into next-generation engineering workstations to help with large-scale 3D model management and processing.
GPUs, in particular, are a technology bright spot that helps boost workstation performance for complex CAD and CAE models. NVIDIA has been leading the charge here, with silicon rival AMD stepping up its own efforts more recently.
NVIDIA has a full family of GPUs, but its Quadro RTX family (powered by the NVIDIA Turing architecture) sets a new bar. The Quadro RTX line integrates RT Cores, accelerator units dedicated to performing ray tracing operations with high-level efficiency, along with high-end memory and artificial intelligence capabilities.
The architecture and core combination is designed to optimize performance of sophisticated applications like virtual reality, ray tracing, photorealistic rendering and simulation, all of which require massive compute horsepower and real-time performance, according to Andrew Rink, NVIDIA’s head of marketing strategy.
The choice of RTX platform depends on the use case. The Quadro RTX 4000 hits the sweet spot for engineers immersed in photorealistic ray tracing applications, while the Quadro RTX 8000, equipped with 48GB of GPU memory and the ability to pair two GPUs to double the available graphics memory and performance, is the highest-end option.
“It really depends on the type of workflow you have and whether you’re working on simple parts or complex large assemblies,” Rink explains. GPU horsepower can also be targeted to help huge models load faster. “If it takes 10 seconds for a rotation to happen because the system is sputtering as you are turning a model, that is pretty frustrating and is a hit on productivity.”
For its part, AMD is boosting CAD and simulation performance via multitasking and multithreading capabilities that come via more CPU cores. AMD is also busy advancing the range of its GPU line.
Last November, the firm announced the AMD Radeon Pro W5700, a professional PC workstation graphics card that features the high-performance, energy-efficient AMD Radeon DNA (RDNA) architecture and state-of-the-art GDDR6 memory specifically tuned for handling large models and datasets. The card is among the first to support high-bandwidth PCIe 4.0 technology, which doubles the transfer speed between the CPU and attached peripheral cards, including GPUs.
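The claimed doubling follows directly from the link rate: PCIe 4.0 runs each lane at 16 GT/s versus 8 GT/s for PCIe 3.0, with the same 128b/130b line encoding. A quick back-of-envelope calculation in Python (these are the specification's nominal rates, not measured throughput):

```python
# Approximate one-direction PCIe bandwidth per link generation.
# PCIe 3.0: 8 GT/s per lane; PCIe 4.0: 16 GT/s per lane.
# Both use 128b/130b encoding, so efficiency is 128/130.

def pcie_bandwidth_gbps(gen: int, lanes: int = 16) -> float:
    """Nominal one-direction bandwidth in GB/s for a PCIe link."""
    transfer_rate = {3: 8.0, 4: 16.0}[gen]  # GT/s per lane
    encoding = 128 / 130                    # line-code efficiency
    return transfer_rate * encoding / 8 * lanes  # bits -> bytes

gen3 = pcie_bandwidth_gbps(3)  # ~15.75 GB/s for an x16 slot
gen4 = pcie_bandwidth_gbps(4)  # ~31.51 GB/s for an x16 slot
print(f"PCIe 3.0 x16: {gen3:.2f} GB/s, PCIe 4.0 x16: {gen4:.2f} GB/s")
```

The ratio is exactly 2x, which is where the "doubles the transfer speed" figure comes from; real-world throughput is lower once protocol overhead is counted.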
Thanks to accelerated CPU/GPU multitasking functionality, the AMD Radeon Pro W5700 delivers up to 5.6x the application workflow performance of competing cards under GPU load in the SPECviewperf 13 benchmark, according to AMD. Certification with the leading design, manufacturing and architecture, engineering and construction (AEC) applications and new capabilities to improve virtual reality (VR) workflows are among the new GPU’s other notable enhancements.
“The AMD Radeon Pro W5700 is the perfect graphics card for large assemblies as it’s designed to deliver real-time visualization and multitasking capabilities,” notes Antoine Reymond, senior manager, ISV alliances for AMD. “You can be designing an engine block and doing computing in the background, and the visualization and computing can use the GPU power at the same time without degrading performance.”
Although GPUs are certainly a crucial tool for boosting large-scale model performance, they aren’t the only hardware solution and they aren’t perfect—sometimes users can have trouble getting data into the cache for the first time, notes Ken Versprille, executive consultant at CIMdata.
“Typically, the big complaint we hear from CAD users is that it takes so long to initiate the process and activate the assembly, and they struggle with that, even with newer GPUs,” he says.
Beyond GPUs, the right choice of CPU, depending on the application, along with solid-state storage and memory options also play a big role in optimizing performance and addressing large-model complexity.
“GPUs are helping to revolutionize the engineering workflow, but you have to look at where the bottlenecks are in the system today,” says Chris Ramirez, strategist for engineering, manufacturing and AEC, at Dell. “You have to look at the system in a holistic fashion—simply removing the bottleneck from the GPU when you still have problems at the storage subsystem level means even though you spent a lot of money, you might not have a workstation that was faster than before.”
To help engineers better understand how and where their design software might stress their workstation, Dell offers Dell Precision Optimizer, a tool that provides a real-time view of system performance. Besides flagging potential system bottlenecks, Dell Precision Optimizer can tweak hardware so the Precision workstation model runs the slated application faster than it would with default settings.
“You can get two times faster performance on the same workstation running [Dell Precision Optimizer],” Ramirez says. “It can recognize that an app needs more GPU power and not as much CPU power so it will throttle down the CPU, transfer more power to the GPU, turn up the fans for cooling and tweak memory to allow the system to run faster. It’s real-time optimization for workloads.”
Software Vendors Find their Own Rx
Just as the hardware vendors are doing their part to advance GPUs and storage, CAD, simulation and design tool providers are also working to identify and re-architect areas in their code to enable the software to truly leverage GPUs and other optimization advancements.
“The software has to be re-architected to take advantage [of GPUs] … and that’s a huge undertaking,” explains CIMdata’s Versprille. “The vendors are tacking on certain capabilities to improve performance, but they can’t afford to redesign their complete architecture to support multiple threads.”
Nevertheless, software providers are making headway addressing the performance and visualization challenges related to large-scale models. In addition to expanded GPU support, vendors are introducing data management features that allow engineering teams to holistically work on large models and render complete assemblies by limiting what must be loaded into memory or directing more processing work to the GPUs.
At Dassault Systèmes, for example, the SolidWorks team released beta capabilities in SolidWorks 2019 dubbed the “RenderPipeline Project,” which is “aimed at delivering a rendering engine that is in line with modern programming paradigms and that can make complete utilization of a GPU,” explains Siddharth Palaniappan, senior development manager, graphics applications at Dassault Systèmes.
“We store our user’s model data as well as many other associated data on the GPU and do more work on the GPU than on the CPU for rendering,” he explains. “This results in performance scaling across GPUs so if you have a high-end graphics card with a greater number of GPU cores, you get better performance. We also work very closely with vendors like AMD and NVIDIA to make sure their graphics drivers work nicely with SolidWorks.”
SolidWorks is also focusing on accelerating model load times in addition to improving large model rendering performance. The team created a proof of concept for the February 3DEXPERIENCE World in Nashville, TN, to showcase how large SolidWorks models could be loaded using multiple cores and threads.
Although it’s common for current-generation machines to feature quad-core or hex-core CPUs, AMD’s new Ryzen Threadripper 3970X has 32 cores (64 threads), which makes handling and loading large assemblies much easier, Palaniappan says.
“More importantly, we were able to showcase that our model load time performance scaled with the number of CPU cores,” he explains. “The machine was coupled with a Radeon Pro W5700 GPU, which allowed us to showcase buttery smooth [virtual reality] experiences for large assemblies. This is an excellent use case for large design reviews or modeling in resolved mode in SolidWorks.”
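The load-time scaling described above boils down to parsing an assembly's parts concurrently rather than one at a time. A minimal sketch of that idea, where `parse_part` is a hypothetical stand-in for real geometry parsing (this is illustrative, not SolidWorks' actual API):

```python
# Illustrative sketch: loading the parts of a large assembly in parallel,
# the way a CAD kernel might spread model load across cores.
# ThreadPoolExecutor is used to keep the sketch portable; a real
# implementation of CPU-bound parsing would use processes or native threads.
from concurrent.futures import ThreadPoolExecutor


def parse_part(part_id: int) -> dict:
    # Placeholder for per-part work such as tessellating B-rep geometry.
    checksum = sum(i * i for i in range(10_000))  # simulated compute
    return {"id": part_id, "checksum": checksum}


def load_assembly(part_ids, workers: int = 4) -> list:
    # Each part parses independently, so load time scales with core count
    # until I/O or a shared model database becomes the bottleneck.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(parse_part, part_ids))


if __name__ == "__main__":
    parts = load_assembly(range(32), workers=4)
    print(len(parts))  # 32
```

The pattern only pays off when parts really are independent; shared state between parts forces serialization and caps the speedup, which is why the vendors describe the re-architecture as a major undertaking.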
At Autodesk, there has been a significant uptick these last few years in rewriting portions of code to better take advantage of parallel processing. For example, if an engineer makes a model change in Inventor, the system can divvy the calculations into multiple parts that are computed independently and brought back together at the end, den Hartog explains. Ray tracing operations are another use case where code parallelization can deliver extreme performance increases, he says.
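The divide-compute-merge pattern den Hartog describes can be sketched briefly. In this hypothetical example (not Autodesk's code), a model edit triggers recomputation of several independent features, which are solved in parallel and then folded back into one updated model state:

```python
# Illustrative split/compute/merge sketch for propagating a model change.
# Feature names and the "recompute" rule are invented for the example.
from concurrent.futures import ThreadPoolExecutor
from functools import reduce


def recompute_feature(feature):
    # Placeholder for re-solving one feature's math after an edit;
    # here the "solve" is simply doubling the parameter value.
    name, value = feature
    return {name: value * 2}


def propagate_change(features, workers: int = 4) -> dict:
    # Solve each independent chunk in parallel...
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(recompute_feature, features)
    # ...then merge the partial results back together at the end.
    return reduce(lambda acc, d: {**acc, **d}, partials, {})


model = [("hole_dia", 5.0), ("fillet_r", 1.5), ("wall_t", 2.0)]
print(propagate_change(model))
```

The merge step is what makes this safe: each worker touches only its own partial result, and the combined state is assembled once all chunks finish.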
Autodesk is also investing in real-world model testing to understand workflows and bottlenecks, as well as a feature called adaptive graphics. The latter is a specific capability that detects whether the software’s refresh frame rate dips below a certain threshold when working with huge models; if so, it adjusts by not rendering some of the smaller parts while the user rotates the model.
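One way such a scheme could work is sketched below. This is my assumption of the general approach, not Inventor's implementation: when the measured frame rate falls below a threshold during interaction, the renderer culls the smallest parts first, then draws everything again once the user stops rotating.

```python
# Hypothetical adaptive-graphics culling rule. Part names, areas, and the
# "keep the largest 50%" policy are invented for illustration.
TARGET_FPS = 30.0


def parts_to_draw(parts, fps: float, interacting: bool) -> list:
    """parts: list of (name, on-screen area); returns names to render."""
    if not interacting or fps >= TARGET_FPS:
        return [name for name, _ in parts]  # idle or fast enough: draw all
    # Under load while rotating: drop the smallest parts first.
    ranked = sorted(parts, key=lambda p: p[1], reverse=True)
    keep = max(1, len(ranked) // 2)
    return [name for name, _ in ranked[:keep]]


assembly = [("block", 900.0), ("bolt", 2.0), ("bracket", 120.0), ("washer", 1.0)]
print(parts_to_draw(assembly, fps=18.0, interacting=True))  # ['block', 'bracket']
print(parts_to_draw(assembly, fps=60.0, interacting=True))  # all four parts
```

The user-visible behavior matches the article's description: detail disappears only while the frame rate is struggling mid-interaction, and the full assembly is redrawn the moment the operation stops.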
“As soon as the operation stops, the software redraws everything—the goal is to cut out the graphics lag time,” den Hartog says.
With software vendors innovating new ways to leverage GPU horsepower, and hardware makers continuously pushing for new advancements, there are definite signs that the graphics lag for large-model performance is on the wane.
But it’s up to users to know their software and fully leverage its capabilities to put the problem to bed. “Vendors are putting in a lot of bells and whistles to determine what you’re working on so that only the graphics data required is fully loaded, which ensures much faster performance,” says CIMdata’s Versprille. “Engineers need to explore their options in this area and try to understand them as best they can.”
About the Author
Beth Stackpole is a contributing editor to Digital Engineering. Send e-mail about this article to [email protected].