HPC for the Road Ahead

SGI storage and computer hardware help increase CAE capability at Ford Motor Company.

By Srinivas Kodiyalam and Stan Posey

The automotive industry today faces increasingly complex requirements. Ford Motor Company, like most automakers, is under pressure to compress product design cycle time, lower costs, and reduce weight while improving overall performance, quality, and reliability. Meeting all of these demands means providing engineers with technology that improves productivity and efficiency in the design and development process.

The vehicle systems development process has always involved intensive collaboration among teams using computer aided engineering (CAE) applications like computational fluid dynamics (CFD), finite element analysis (FEA), and kinematics to evaluate performance; aerodynamics; structural fatigue; noise, vibration, and harshness; and crashworthiness. Vehicle systems design also requires an understanding of interactions among all these various physical phenomena and how they might affect the behavior of the different components of the full system. Providing various teams of engineers with simultaneous access to vehicle analysis data in a heterogeneous storage area network (SAN) environment is key to effectively managing the complete mix of simulation data from the various CAE disciplines involved.

The computing demands of large-scale, multidisciplinary analysis and optimization applications are moving from megaFLOPS to gigaFLOPS and beyond. Automotive problems that involve high-fidelity CFD and FEA analyses may have millions of state variables or hundreds of design variables, and the analyses must be repeated many times, with rapid job turnaround, to search the design space systematically and reduce design cycle time.
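
As a rough illustration of what such a design-space search involves, consider the minimal sketch below. It is not Ford's workflow: the solver command, design variables, and bounds are hypothetical stand-ins for a real batch CAE job.

```python
# Hypothetical sketch of a design-space search driving repeated CAE runs.
# "solver" stands in for a real batch FEA/CFD invocation; the design
# variables and their values are illustrative, not from the article.
import itertools
import subprocess

# Two illustrative design variables: panel thickness (mm) and rib count.
thicknesses = [1.0, 1.2, 1.4, 1.6]
rib_counts = [4, 6, 8]

def run_fea_case(thickness: float, ribs: int) -> float:
    """Run one solver case and return a scalar objective (e.g., mass)."""
    # In practice this would submit a job to the HPC scheduler and parse
    # a result file; here we just sketch the call.
    result = subprocess.run(
        ["solver", f"--thickness={thickness}", f"--ribs={ribs}"],
        capture_output=True, text=True, check=True,
    )
    return float(result.stdout.strip())

best = None
for t, r in itertools.product(thicknesses, rib_counts):
    objective = run_fea_case(t, r)          # one analysis per design point
    if best is None or objective < best[0]:
        best = (objective, t, r)

print("Best design found:", best)
```

Even this tiny two-variable sweep requires a dozen solver runs; high-fidelity models with hundreds of design variables are what push the compute and turnaround requirements so hard.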

Ford engineers rely on multiple CAE analysis applications run on high-performance computing resources that help keep them on the leading edge of automotive design. (Image courtesy of Advantage CFD)

At Ford, teams of engineers progressively smooth out the bumps in the road, hush powertrain and wind noise, and make hundreds of tweaks to improve comfort, control, and safety for drivers and passengers. The primary tools for many of these engineers are the CAE software and HPC resources of the Numerically Intensive Computing Center in Dearborn, MI, known internally as the NIC. Across the street from the NIC is the Ford Research and Innovation Center, where Ford scientists and engineers use the additional HPC resources of the Research Computer Systems Department to explore further. Their work is not vehicle specific; it spans both basic and advanced research on topics such as combustion, structural response, and materials, subjects that will influence vehicle designs across Ford product lines in the years ahead.

Both of these facilities, along with an NIC-operated HPC facility in Merkenich, Germany, recently evaluated their CAE technology requirements and have invested in SGI (Silicon Graphics, Inc.) solutions to keep Ford at the leading edge of automotive design.

A Productive HPC Infrastructure for Ford

The 1,200 engineers and scientists of the Ford Research and Innovation Center conduct CAE simulations across a variety of CFD and computational structural mechanics (CSM) applications.

The department's varied compute resources are tied together by what staff members call the most advanced and heterogeneous SAN at Ford. Installed and maintained by department support staff, the Brocade Fibre Channel switch-based SAN links SGI, Engenio, and EMC storage and STK high-speed tape drives with SGI, IBM, HP, Sun, and Windows-based servers.

Primary compute systems include a recently installed SGI Altix 3700, a Linux-based server powered by 176 64-bit Intel Itanium 2 processors, and an SGI Origin 3200 system. Additionally, a 16-processor Altix 350 system gives users an interactive application environment. Administrators required that these systems, and other servers on the SAN, be able to access the same data without wait time and without the extra administration and storage costs of copying data across the network. To accomplish this, Ford invested in the SGI InfiniteStorage Shared Filesystem CXFS software and two SGI Origin 350 systems, which manage the shared filesystem in a highly available configuration and allow the two Altix servers and the Origin 3200 system to operate with rapid, no-file-copy data access.
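
The workflow difference the administrators were after can be sketched in a few lines. This is an illustration of shared versus copy-based data access in general, not CXFS-specific code; the mount points and file names are hypothetical.

```python
# Illustration of why a shared filesystem removes copy overhead.
# /cxfs stands in for a shared mount visible to every server on the SAN;
# /scratch models per-host staging in the old, copy-based workflow.
# Both paths are hypothetical.
import shutil
import time
from pathlib import Path

SHARED = Path("/cxfs/projects/crash/model.dat")   # one copy, visible to all hosts
LOCAL_SCRATCH = Path("/scratch/model.dat")        # per-host staging copy

def read_via_copy(src: Path, dst: Path) -> bytes:
    """Stage the file locally first: extra I/O, extra disk, extra wait."""
    t0 = time.perf_counter()
    shutil.copy2(src, dst)                         # duplicate the data
    data = dst.read_bytes()
    print(f"copy+read took {time.perf_counter() - t0:.1f}s")
    return data

def read_shared(src: Path) -> bytes:
    """Read the single shared copy directly: no duplicate, no staging wait."""
    t0 = time.perf_counter()
    data = src.read_bytes()
    print(f"direct read took {time.perf_counter() - t0:.1f}s")
    return data
```

Multiply the staging step by every user, every platform, and multi-gigabyte CAE result files, and the administrative and storage savings of the single shared copy become clear.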

A Fluent simulation running on an Altix server. (Image courtesy of CEI)

The positive results of moving to the shared filesystem have convinced the department to bring its 160-processor IBM Opteron cluster into the CXFS environment, and have prompted an evaluation of doing the same for its Sun servers. Users would then access data transparently across all of these CXFS-supported platforms.

The NIC, whose compute workload grows about 50 percent every year, recently augmented its compute power with a 256-processor SGI Altix 3700 server with one terabyte of memory. This system is the NIC's multipurpose workhorse, providing a large distributed shared-memory environment for CAE jobs that require both shared- and distributed-memory parallel processing. Although the NIC operates a variety of platforms, including conventional clusters, the Altix server can be called upon to run virtually any CAE software for any type of job. The NIC also supports Ford's HPC facility in Germany, where SGI has installed a 128-processor Altix 3700 server with 512GB of memory.
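
A quick back-of-the-envelope calculation shows why 50 percent annual growth matters for capacity planning; the figures below follow from that growth rate alone and are not Ford's planning numbers.

```python
# Compound growth of a workload that increases ~50 percent per year.
import math

growth_rate = 0.50                      # 50% per year, from the article
years_to_double = math.log(2) / math.log(1 + growth_rate)
print(f"Workload doubles roughly every {years_to_double:.1f} years "
      f"(~{years_to_double * 12:.0f} months)")

# Projected multiple of today's demand after n years:
for n in range(1, 6):
    print(f"Year {n}: {(1 + growth_rate) ** n:.1f}x current workload")
```

At that rate demand doubles about every 21 months and exceeds 7x the current workload within five years, which is why the NIC keeps adding large systems like the 256-processor Altix.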

SGI DMF: Transparent Migration/Protection

Ford is a longtime user of the SGI Data Migration Facility (DMF) for data lifecycle management. Based on age and other factors, data is moved systematically off high-speed disk storage to an STK tape silo. When a CAE engineer references data that has been migrated to tape, DMF's hierarchical storage management technology retrieves it from tape and restores it to disk.
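
Conceptually, this kind of hierarchical storage management amounts to an age-based migration policy plus a recall on access. The sketch below illustrates the idea only; the 90-day threshold and the migrate/recall helpers are hypothetical placeholders, not DMF's actual interface (in a real HSM, a migrated file remains visible as a stub and recall happens transparently inside the filesystem).

```python
# Conceptual sketch of age-based hierarchical storage management (HSM).
# migrate_to_tape() and recall_from_tape() are hypothetical placeholders
# for a real archive interface; the 90-day threshold is illustrative.
import time
from pathlib import Path

AGE_THRESHOLD = 90 * 24 * 3600          # migrate files idle for 90 days

def migrate_to_tape(path: Path) -> None:
    print(f"migrating {path} to tape, leaving a stub on disk")

def recall_from_tape(path: Path) -> None:
    print(f"recalling {path} from tape back to disk")

def sweep(directory: Path) -> None:
    """Move cold files down the hierarchy based on last-access time."""
    now = time.time()
    for path in directory.rglob("*"):
        if path.is_file() and now - path.stat().st_atime > AGE_THRESHOLD:
            migrate_to_tape(path)

def open_dataset(path: Path):
    """Recall on access: if data was migrated, restore it before use.
    (A real HSM does this inside the filesystem, invisibly to the user.)"""
    if not path.exists():
        recall_from_tape(path)
    return path.open("rb")
```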

Because the tape system was operating continuously at near capacity, Ford needed additional storage that would provide faster access to archived data. Rather than relying on tape alone, the department used DMF to implement a second tier of storage between primary disk and tape: it doubled its capacity of fiber-based tape drives and used DMF to migrate data to disk, reducing transfer times. The added disk capacity serves as a near-line storage buffer, and the disk-cached data is ultimately migrated to tape.

Ford also used DMF to set up a disaster recovery facility in a neighboring building served by the SAN. Administrators use a tape silo at this remote site to back up data and ensure its recovery in the event of a disaster.

SGI Enables HPC Migration

Implementing the SGI shared filesystem eliminated productivity bottlenecks and gave Ford engineers simultaneous access to data in a heterogeneous environment. Users report that the SGI Altix systems have cut CAE turnaround compared with previous systems, which had required up to three weeks for some simulations.

Ford staff and SGI engineers carried out the three-month migration project behind the scenes, virtually transparently to users. Beginning in January 2005, they overhauled the SAN, the server platforms, and the storage facilities. The SAN was upgraded and rebuilt with Brocade technology, and multipathing and failure recovery were tested exhaustively. SGI engineers installed 6.1 terabytes of TP9300 Fibre Channel RAID. Once the infrastructure was in place and the new SGI servers were installed on the SAN, SGI engineers used DMF to migrate data from tape to the new disk arrays.

Ford engineers now enjoy faster access to data; elimination of wait time for transfers, copies, and downloads; much faster return of results; and the freedom to access data transparently across multiple platforms. All of this has improved overall productivity and led to greater success in CAE simulation.

Srinivas Kodiyalam and Stan Posey are HPC business development managers at SGI. Kodiyalam has a Ph.D. in mechanical engineering from the University of California, Santa Barbara, and is an Associate Fellow of the AIAA. Posey has an M.Sc. in mechanical engineering from the University of Tennessee, Knoxville.

SGI Servers, Clusters, and Visualization Systems

The SGI Altix servers and clusters and the Silicon Graphics Prism visualization platform are examples of technology that addresses compute-intensive, simulation-based design requirements and automotive industry workflow challenges. SGI Altix and Silicon Graphics Prism are the industry's first distributed shared-memory platforms to combine the SGI NUMAflex supercomputing architecture with industry-standard Intel Itanium 2 processors and the 64-bit Linux operating system.

The Altix server can run virtually any CAE software for any type of job Ford engineers require.  

The SGI Altix provides high-end scalability and throughput performance and can be deployed as a single shared-memory system or as a cluster with shared memory across nodes. The SGI Altix 1330 large-node cluster provides a cost-effective solution that efficiently addresses demanding application and workload requirements, allowing customers to deploy parallel CAE applications easily without having to manage distributed memory resources directly. Silicon Graphics Prism combines the same system architecture, Linux OS, and Intel processors with ATI FireGL graphics accelerators to deliver visualization capability and affordability for a range of automotive industry requirements.—SK & SP
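
To make that convenience concrete, the sketch below uses the pool-of-workers style that shared-memory programming encourages: the programmer maps work over a pool without partitioning data or passing messages explicitly. The evaluate_case function is a hypothetical stand-in for a real analysis kernel, and Python's multiprocessing pool only approximates the shared-memory model (it actually spawns separate processes).

```python
# Minimal sketch of the pool-of-workers model: work is mapped over a
# pool without explicit data partitioning or message passing.
# evaluate_case() is a hypothetical stand-in for one CAE computation.
from multiprocessing import Pool

def evaluate_case(case_id: int) -> float:
    # Placeholder for a real analysis kernel; just a toy computation here.
    return sum(i * i for i in range(case_id * 1000)) ** 0.5

if __name__ == "__main__":
    with Pool(processes=8) as pool:      # 8 workers on one large node
        results = pool.map(evaluate_case, range(1, 17))
    print(results)
```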


Company Information

Silicon Graphics
Mountain View, CA
