Consider CAE in the Cloud

A large-scale experiment shows why and when the cloud might make sense.



By Wolfgang Gentzsch and Burak Yenier

Fig. 1: Computational domain including the iliac veins (inflow), renal veins (inflow), and the inferior vena cava (IVC). The pressure on the surface of the vessels is shown; a slice, colored by velocity, down the center of the IVC is also shown.

Editor’s Note: This is the third part of Desktop Engineering’s series on high-performance computing options. Read the other articles in the series here.

Cost savings, shorter time to market, better quality, fewer product failures—the benefits that engineers and scientists can expect from using technical computing in their research, design and development processes can be huge. But relatively few scientists and manufacturers use servers when designing and developing their products on computers; the vast majority still perform virtual prototyping or large-scale data modeling on workstations or laptops.

Many of these professionals face problems stemming from the limited performance of their machines. More accurate geometry or physics, for instance, may require more memory than a desktop can accommodate. System vendors have developed a complete set of products, solutions and services for high-performance computing (HPC), and buying an HPC server is no longer out of reach for a small or medium-sized business.

Another option today is to use a cloud solution that allows engineers and scientists to keep using their workstation for daily design and development work, and to “burst” larger, more complex jobs into the cloud when needed. Users thus gain access to quasi-infinite computing resources that yield higher-quality results. A cloud solution reduces capital expenditure, offers greater business agility by dynamically scaling resources up and down as needed, and is paid for only when used.
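As a rough sketch of what “bursting” a job can look like in practice, the snippet below stages an input file on a remote cluster and submits it to a batch scheduler over SSH. The host name, job script and scheduler call are illustrative assumptions, not any specific provider's interface.

```python
import subprocess

# Hypothetical cluster address -- a placeholder for whatever a given
# cloud provider actually exposes.
CLUSTER = "user@hpc.example-cloud.com"

def burst_to_cloud(case_file: str, ntasks: int = 64) -> None:
    """Stage a large solver case on a remote cluster and queue it there."""
    # Copy the input deck to the remote job directory.
    subprocess.run(["scp", case_file, f"{CLUSTER}:jobs/"], check=True)
    # Submit to the cluster's batch scheduler (a generic Slurm-style call).
    subprocess.run(
        ["ssh", CLUSTER, f"sbatch --ntasks={ntasks} run_solver.sh jobs/{case_file}"],
        check=True,
    )

burst_to_cloud("large_assembly.cas")
```

The daily design work stays on the workstation; only the oversized job and its results cross the wire.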

The UberCloud Experiment

Since July 2012, the UberCloud Experiment has attracted 1,500 organizations from 72 countries. It has formed 152 teams in computational fluid dynamics (CFD), finite element method (FEM), biology and other domains, and has tracked their experiences and lessons learned in a compendium of case studies. UberCloud TechTalk provides educational lectures for the community. And the UberCloud Exhibit offers a cloud services catalog where community members can exhibit their cloud-related services or select the services they want to use for their team experiment or for their daily work.

Fig. 2: Sample results: (a) streamlines, colored by velocity, and (b) pressure on the surface of the vessels.

Intel sponsored the first compendium in 2013, with 25 CAE case studies. In June, the second Compendium of UberCloud case studies was published, sponsored by Intel and Desktop Engineering. It can be downloaded for free here.

The UberCloud Experiment provides a platform for scientists and engineers to explore, learn and understand the end-to-end process of accessing and using cloud resources, and to identify and resolve the roadblocks. End-users, software providers, resource providers and computing experts collaborate in teams to jointly solve the end-user's problem in the cloud.

Let’s start by defining what roles each stakeholder plays to make service-based HPC in the cloud come together:

  • End-user: A typical example is a small or medium-sized manufacturer in the process of designing, prototyping and developing its next-generation product.
  • Application software provider: These are software owners of all stripes, including independent software vendors (ISVs), public domain software organizations and individual developers.
  • Resource provider: This pertains to anyone who owns technical computing resources networked to the outside world. A classic HPC center would fall into this category, as would a standard datacenter used to handle batch jobs, or a cluster-owning commercial entity that is willing to offer up cycles to run non-competitive workloads during periods of low CPU utilization.
  • Computing experts: This group includes individuals and companies with technical computing expertise in areas like cluster management and software porting. These experts work as team leaders, with end-users, computer centers and software providers to help glue the pieces together.
For example, suppose the end-user needs additional compute resources to increase the quality of a product design or to speed up a product design cycle. Perhaps the goal is to simulate more sophisticated geometries or physics, or to run many more simulations for a higher-quality result. That suggests a specific software stack, domain expertise and even hardware configuration. The general idea is to look at the end-user's tasks and software, then select the appropriate resources and expertise to match those requirements.

As a glimpse into the practical use cases, below is a look at four CAE cloud projects out of the 152 UberCloud experiments. More details and 17 full case studies can be found in the second UberCloud Compendium.

Team 62: Cardiovascular Medical Device Simulations in the Cloud

Team members: End-user Mike Singer is the founder and president of Landrew Enterprises. Software and resource provider Sanjay Choudhry is the CTO at Ciespace Corp. Oleh Khoma, the HPC expert, is head of ELEKS’ HPC unit.

The project investigated flow through a patient-specific blood vessel, and represents a typical CFD use case for cardiovascular flow. The patient-specific geometry is extracted from CT image data obtained during a normal medical imaging exam. The triangulated surface mesh geometry contains the inferior vena cava (IVC), the right and left iliac veins, and the right and left renal veins (see Fig. 1).

Cloud resources provide a mechanism to address the computing requirements of cardiovascular simulations. Specifically, the use of cloud-based CFD alleviates the need for large, in-house clusters. In addition, cloud resources may enable the timely execution of parameter and sensitivity studies, which are important for biofluids simulations that often contain uncertain or variable model parameters. Hence, the purpose of this experiment was to explore the use of cloud-based simulation solutions for enabling cardiovascular simulations.
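A parameter study of this kind maps naturally onto cloud resources, because each case is an independent job. The sketch below fans a range of inlet flow rates out as separate batch submissions; the host name, job script and flags are hypothetical placeholders, not the team's actual tooling.

```python
import subprocess

# Illustrative sweep over an uncertain model parameter: inlet flow rate (L/min).
flow_rates = [2.0, 2.5, 3.0, 3.5, 4.0]

for rate in flow_rates:
    # Each case becomes its own batch job, so the whole sweep runs
    # concurrently on cloud nodes instead of serially on a workstation.
    subprocess.run(
        ["ssh", "user@hpc.example-cloud.com",
         f"sbatch run_cfd.sh --inlet-rate {rate} --out ivc_flow_{rate:.1f}"],
        check=True,
    )
```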

Despite several obstacles, the team accomplished its goal of running a patient-specific cardiovascular flow simulation in the cloud (see Fig. 2). The user experience and the results of this experiment also demonstrate the potential for further cloud-based cardiovascular flow simulations.

Team 99: North Sea Asset Life Extension, Assessing Impact on Helicopter Operations

Team members: End-user was Dan Hamilton from Atkins Energy; software provider James Britton, CD-adapco; resource provider Jerry Dixon, OCF with its cloud service enCORE HPC; and team mentor Dennis Nagy from BeyondCAE.

The team tested the feasibility of using HPC-as-a-Service (HPCaaS) to simulate airflow over an offshore platform with STAR-CCM+ from CD-adapco. The goal was to determine how conditions within the helideck landing area change as a result of geometrical modifications stemming from a life-extension project on an existing North Sea asset.

Fig. 3: Airflow through an offshore platform. Streamlines are colored by air velocity. Thermal plumes from power generation turbines are shown. In this case, these plumes do not affect the approach path of the helicopter.

High-temperature sources, such as the exhaust from power generation equipment, can result in significant variations in temperature over the topsides depending on atmospheric wind conditions; upwind structures generate downwind turbulence, for example. High variations in temperature and high turbulence can result in increased pilot workload for helicopter operations on the platform.

For this project, Atkins used CFD to assess the expected range of wind and operational conditions at the platform. A complete study requires a large number of CFD simulations to be undertaken. This is typically done using in-house hardware; however, the flexibility of HPCaaS was appealing as a potential overflow solution.

For OCF, the availability of the STAR-CCM+ “Power-on-Demand” licensing was the ideal fit for the enCORE service. Once the installation was debugged, the user experience of using STAR-CCM+ in batch on enCORE was identical to that on in-house hardware.

Because the simulation files can be large, enCORE’s policy of not charging for bandwidth usage is appealing. Having a resource like enCORE allows users to bid for and propose work requiring computational resources that exceed what’s available in-house.

Team 118: Coupling In-house FE Code with ANSYS Fluent CFD

Team members: End user was Hubert Dengg from Rolls-Royce Deutschland; software providers were Wim Slagter and René Kapa from ANSYS; resource providers and team experts were Thomas Gropp and Alexander Heine from CPU 24/7; and Marius Swoboda from Rolls-Royce Deutschland acted as HPC/CAE expert.

In the present test case, a jet engine high-pressure compressor assembly was the subject of a transient aerothermal analysis using FEA/CFD coupling techniques. Coupling is achieved through an iterative loop, with the smooth exchange of information between the FEA and CFD simulations at each time step, ensuring consistency of temperature and heat flux on the coupled interfaces between the metal and the fluid domains. The aim of the HPC experiment was to link ANSYS Fluent with an in-house FEA code. This was done by extracting heat flux profiles from the Fluent CFD model and applying them to the FE model. The FE model provides metal temperatures in the solid domain.
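In outline, that coupling loop might look like the sketch below. The two solver functions are dummy stand-ins for the Fluent run and the in-house FE code, whose real interfaces are not public; only the structure of the exchange, heat flux one way and metal temperature the other, is taken from the description above.

```python
def run_cfd_step(wall_temps, dt):
    """Stand-in for a Fluent CFD solve; returns wall heat flux per face."""
    return [0.1 * t for t in wall_temps]          # dummy physics for illustration

def run_fe_step(heat_flux, dt):
    """Stand-in for the in-house FE solve; returns metal temperatures."""
    return [300.0 + q for q in heat_flux]         # dummy physics for illustration

def couple_timestep(metal_temps, dt, tol=1e-3, max_iters=20):
    """Iterate CFD and FE solves until the coupled interface is consistent."""
    for _ in range(max_iters):
        heat_flux = run_cfd_step(metal_temps, dt)   # CFD: temperatures -> flux
        new_temps = run_fe_step(heat_flux, dt)      # FE: flux -> temperatures
        # Converged when interface temperatures stop changing between passes.
        if max(abs(a - b) for a, b in zip(new_temps, metal_temps)) < tol:
            return new_temps
        metal_temps = new_temps
    return metal_temps

print(couple_timestep([320.0, 340.0, 360.0], dt=0.01))
```

One such inner loop runs at every time step of the transient analysis, which is what makes the process so expensive on large meshes.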

This conjugate heat transfer process is very demanding in terms of computing power, especially when 3D CFD models with more than 10 million cells are required. As a consequence, using cloud resources was expected to reduce computing time.

The computation was performed on the 32 cores of two nodes with dual Intel Xeon processors. The calculation was done in cycles in which the FE code and Fluent CFD alternated, exchanging their results.

Outsourcing the computational workload to an external cluster allowed the end user to distribute computing power efficiently, especially when the in-house computing resources were already at their limit. Bigger models usually give more detailed insight into the physical behavior of the system. In addition, the end user benefited from the HPC provider's knowledge of how to set up a cluster, run applications in parallel with the message-passing interface (MPI), create a host file, handle licenses, and prepare everything needed for turnkey access to the cluster.

Table 1: Comparison of Desktop, Cloud and HPC Solving Options

Simulation Solving Approach | Approximate Time to Complete | Investment Required
Local Desktop Machine | 800 hours (approx. 1 month) | Engineering Workstation + Simulation Software License
Local Desktop Machine + Cloud Computing | 24 hours (1 day) | Engineering Workstation + Simulation Software License + $1,200 Cloud Compute Fee
Local Desktop Machine + Private HPC Cluster + Multiple Solver Licenses | 24 hours (1 day) | Engineering Workstation + Simulation Software License + 30-Node Compute Cluster + 30 Simulation Solver Licenses

Team 142: Virtual Testing of Severe Service Control Valve

Team members: End user was Mark A. Lobo, P.E., from Lobo Engineering, P.L.C. Autodesk provided Simulation CFD 360 (SimCFD) and the supporting cloud infrastructure. The HPC/CAE application experts were Jon den Hartog and Heath Houghton from Autodesk.

For a valve to be properly applied in fluid management systems, flow control valve specifications include performance ratings. A control system weighs the input parameters, disturbances and specifications of each piping system component to produce a desired output, and the system's response depends chiefly on the accuracy of the control valves that act on its signals. Valve performance ratings therefore give the system designer information that can be used to optimize control system response.

The premise of this project was not only to explore virtual valve testing, but also to evaluate the practical and efficient use of CFD by the non-specialist design engineer. As a benchmark, the end user had no prior experience with the Autodesk software when the project began, and no formal training in the software. He depended on the included tutorials, help utility and documentation to produce good results and good data.

One of the benefits for the end-user was that cloud computing provided access to a large amount of computing power in a cost-effective way. Rather than owning the hardware and software licenses, engineers can pay for what they need when they need it, with no substantial upfront investment.

In this project, more than 200 simulations were run in the cloud. Given the runtimes involved and allowing for data download upon completion of the runs, it is possible for all of these simulations to be solved within a day. For an engineer with one simulation license on a single workstation, this would have required 800 hours (approximately 30 days) to complete if the simulations were running nonstop one after another. Table 1 compares the approximate time and investment that would be required for various solving approaches.
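The arithmetic behind those numbers is simple to check. Assuming the 800 workstation hours split evenly across the 200 runs, roughly 4 hours each (an assumption for illustration), the back-of-envelope calculation below gives the concurrency needed to finish within a day.

```python
import math

total_runs = 200        # simulations in this project
serial_hours = 800      # one license, one workstation, runs back to back

hours_per_run = serial_hours / total_runs    # ~4 h each (assumed uniform)
target_hours = 24                            # desired wall-clock turnaround
jobs_needed = math.ceil(serial_hours / target_hours)

print(f"{hours_per_run:.0f} h per run; about {jobs_needed} concurrent "
      f"cloud jobs finish all {total_runs} runs in ~{target_hours} h")
# -> 4 h per run; about 34 concurrent cloud jobs finish all 200 runs in ~24 h
```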

Wolfgang Gentzsch is an industry executive consultant for high performance, technical, and cloud computing. Burak Yenier is an expert in the development and management of large-scale, high availability systems, and in many aspects of the cloud delivery model. Both are founders of UberCloud. Contact them via theubercloud.com.
