HPC Handbook: Opening Up HPC

Big names in hardware and software join forces under the OpenHPC banner to democratize supercomputing.

Editor’s Note: This is an excerpt of Chapter 9 from The Design Engineer’s High-Performance Computing Handbook. Download the complete version here.

Initially, high-performance computing (HPC) was exclusive to research institutions, government agencies, and large enterprises. It was the arsenal for climate analysis, genomic sequencing, and the search for intelligent life in the universe. Most small and medium businesses (SMBs), however, didn’t engage in such ambitious projects. They didn’t need the aggregated firepower of a server cluster, nor did they routinely generate, store, and examine terabytes of data. Professional workstations with robust processors and generous memory proved more than adequate for most of their operations. For them, HPC ownership and the associated IT responsibilities were more a burden than an advantage. If a special circumstance did call for HPC, they might approach a larger organization or a research facility for access to a cluster, perhaps for a nominal fee.

But the Internet of Things (IoT) is ushering in a new era. With the rapid growth of connected devices, Big Data is no longer a headache confined to big companies. It’s now everyone’s challenge.

With its capacity for massively parallel computation jobs, HPC is a natural fit for the Big Data problem. But implementation remains a barrier to entry, even for those who are willing to bite the bullet and procure the necessary hardware. HPC management tools—especially the software components—were initially developed for large enterprises with big budgets and long lead times. SMBs need inexpensive—preferably free—tools that can get them up and running in a short timeframe. This need has recently produced a flurry of HPC-related activity in the open source community. One result is the OpenHPC Collaborative Project, described as an initiative “to aggregate a number of common ingredients required to deploy and manage High Performance Computing (HPC) Linux clusters.”
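To make “massively parallel” concrete, consider a minimal sketch of the kind of job such a cluster runs. The example below uses mpi4py, the Python bindings for the MPI standard; the script itself, its problem size, and its names are illustrative assumptions, not part of any OpenHPC recipe. Each process sums its own slice of a range, and a single reduction combines the partial results:

    # Hypothetical parallel sum with MPI: each rank handles one
    # slice of the index range; rank 0 collects the combined total.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()        # this process's ID within the job
    size = comm.Get_size()        # total number of processes

    N = 100_000_000               # illustrative problem size
    lo = rank * N // size         # this rank's slice of the work
    hi = (rank + 1) * N // size
    partial = sum(float(i) for i in range(lo, hi))

    # Combine the per-rank partial sums on rank 0.
    total = comm.reduce(partial, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"{size} ranks computed a total of {total}")

Launched with, say, mpirun -n 512 python sum.py, the same script runs unchanged on one workstation or across hundreds of nodes; that scalability is what makes the model attractive for Big Data workloads.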

Unifying Open Source HPC Stakeholders

On November 12, 2015, just days before the Supercomputing 2015 (SC15) Conference, OpenHPC came online. The Linux Foundation announced, “For more than four decades, HPC has been used by universities and research centers for large-scale modeling and calculations required in meteorology, astronomy, engineering and nuclear physics, data sciences, among others. With unique application demands and parallel runtime requirements, software remains one of the biggest challenges for HPC user adoption … OpenHPC will provide a new, open source framework for HPC environments. This will consist of upstream project components, tools, and interconnections to enable the software stack.”

Participants in the project pledge to:

• Create a stable environment for testing and validation

• Reduce costs

• Provide a robust and diverse open source software stack

• Develop a flexible framework for configuration

In the open source community, codes, plug-ins, and software tools tend to develop and mature organically over time, shaped by input and contributions from users. OpenHPC’s founding members expect their collaboration to head off the conflicting implementations and duplicated effort that such organic growth can produce.

Open Source Momentum and Commercial Prospects

The Linux Foundation, a champion of the open source movement, may have kicked off the project, but the current member roster features a long list of commercial software developers and systems providers, most notably Intel.

“We’re entering a new era in which supercomputing is being transformed from a tool for a specific problem to a general tool for many,” said Charlie Wuischpard, vice president and general manager of the HPC Platform Group at Intel, in a press release. “System-level innovations in processing, memory, software and fabric technologies are enabling system capabilities to be designed and optimized for different usages, from traditional HPC to the emerging world of big data analytics and everything in between.”

Intel-supported versions of the open source HPC system software stack are expected to be available next year. Simulation software makers ANSYS, Altair, MSC Software, and Dassault Systemes are also participating in OpenHPC, as are leading hardware makers Dell, HP, Lenovo, and Fujitsu. Research giants Oak Ridge National Laboratory, Lawrence Livermore National Laboratory, and Sandia National Laboratories round out the list.

Wim Slagter, director of HPC and cloud marketing at ANSYS, said, “While our engineering simulation software is optimized for HPC performance, many of our engineering customers are still slow to adopt HPC. The OpenHPC initiative enables them to reduce risk and to save valuable time with specifying, deploying and managing HPC systems.”

The appeal of open source HPC is not lost on these commercial vendors. If open source lowers the barrier to entry for HPC, simulation software vendors stand to gain from the increased use of HPC-driven simulation. Large-scale simulation of complex systems is already a standard feature of aerospace and automotive design workflows; if HPC were easier to procure and deploy, other industries, such as consumer electronics and medical equipment, might follow suit, expanding the simulation software market. Hardware makers like Dell, HP, Lenovo, and Fujitsu have also made considerable efforts to bolster their HPC offerings, all in anticipation of HPC uptake beyond the top tier.

Jim Ganthier, Dell’s VP and general manager of engineered solutions, cloud and HPC, said, “Community investment in open source frameworks and open standards is the right way to ensure the right capabilities are available to a growing HPC community. The new OpenHPC effort will greatly accelerate HPC adoption, productive usage, and innovation.”

Scalable HPC

The cornerstone of Intel’s strategy to capture the HPC market is its Scalable System Framework (SSF). Intel writes, “HPC has reached an inflection point with the convergence of traditional HPC and the emerging world of Big Data analytics. Intel’s SSF enables an unprecedented level of system balance, performance, and scalability necessary to meet the demands of both compute- and data-intensive workloads.”

The fabric component of SSF is Intel’s Omni-Path Architecture (OPA), described in Intel’s announcement as “an end-to-end fabric solution that cost-effectively improves the performance of HPC applications for entry level to large-scale HPC clusters.” Intel writes, “Current standards-based high performance fabrics, such as InfiniBand, were not originally designed for HPC, resulting in performance and scaling weaknesses that are currently impeding the path to Exascale computing. Intel Omni-Path Architecture is being designed specifically to address these issues.”

Intel estimates that “OPA’s 48-port switch enables up to 26% more servers than InfiniBand Enhanced Data Rate within the same budget and up to 60% lower power consumption for a more efficient switch and system infrastructure.”
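Intel’s figure is a budget comparison, but simple port-count arithmetic hints at why switch radix matters. As a rough sketch (assuming a standard two-level fat-tree topology and the 36-port radix typical of EDR-era InfiniBand switch ASICs, neither of which Intel’s claim specifies), a fabric built from radix-r switches can attach roughly r²/2 nodes, because each edge switch splits its ports between nodes and uplinks:

    # Back-of-the-envelope node capacity of a two-level fat-tree:
    # each of r edge switches dedicates r/2 ports to nodes and
    # r/2 ports to uplinks, giving r * (r/2) attachable nodes.
    def fat_tree_nodes(radix: int) -> int:
        return radix * (radix // 2)

    print(fat_tree_nodes(48))   # 1152 nodes from 48-port OPA switches
    print(fat_tree_nodes(36))   # 648 nodes from 36-port switches

At equal scale, the higher radix also means fewer switches, hops, and cables, which is presumably where much of the claimed power and cost savings comes from.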

Intel OPA is currently deployed at the Texas Advanced Computing Center and the Pittsburgh Supercomputing Center. In the first quarter of this year, Colfax, Cray, Dell, Fujitsu, Hitachi, Lenovo, NEC, SGI, Sugon, Supermicro and other system providers are expected to begin shipping OPA-based HPC products.

About the Author

DE Editors

DE’s editors contribute news and new product announcements to Digital Engineering.
Press releases may be sent to them via [email protected].
