The Supercomputer Software Race

The real winners of the exascale race will be the countries and organizations that invest in developing applications that can make use of all that computing power.

It’s November, and our thoughts turn to autumn leaves, pumpkin pie and supercomputers. This month, the Top500 biannual ranking of the world’s fastest publicly known supercomputers will be updated. The list’s release will coincide with SC16, the International Conference for High Performance Computing, Networking, Storage and Analysis, held in Salt Lake City Nov. 13-18.

In the last Top500 update in June, China maintained its grip on the No. 1 spot with a surprising new supercomputer, the Sunway TaihuLight, which reached 93 petaflop/s (quadrillions of calculations per second) on the LINPACK benchmark. The TaihuLight dethroned China’s Tianhe-2 supercomputer, which had held the top spot for three years. It is nearly three times as fast and three times as efficient as the Tianhe-2, which in turn is almost twice as fast as the fastest U.S. supercomputer on the list, Titan, a Cray XK7 system installed at the Department of Energy’s Oak Ridge National Laboratory.
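
For anyone who wants to verify those ratios, here is a minimal sketch; the Rmax (LINPACK) figures are taken from the June 2016 Top500 list rather than from the article itself:

```python
# Sanity-checking the speed ratios above. Rmax (LINPACK) figures are
# from the June 2016 Top500 list, in petaflop/s; the exact values are
# my addition, not quoted in the article.
rmax_pflops = {
    "Sunway TaihuLight": 93.01,
    "Tianhe-2": 33.86,
    "Titan (Cray XK7)": 17.59,
}

taihulight = rmax_pflops["Sunway TaihuLight"]
tianhe2 = rmax_pflops["Tianhe-2"]
titan = rmax_pflops["Titan (Cray XK7)"]

print(f"TaihuLight vs. Tianhe-2: {taihulight / tianhe2:.2f}x")  # ~2.75x
print(f"Tianhe-2 vs. Titan:      {tianhe2 / titan:.2f}x")       # ~1.93x
```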

Unlike the Tianhe-2, which runs on Intel processors, the TaihuLight runs on China’s own ShenWei SW26010 processor with 260 cores per chip. Last year, the U.S. banned the export of some high-end Intel Xeon chips to China for use in supercomputers, but the country has been producing its own chips for many years. A ShenWei-based supercomputer first appeared on the Top500 list in 2011.

Not only does China have the fastest supercomputer, it is also now home to the largest number of systems on the list (167), besting the U.S. by two. This year marks the first time since the Top500 rankings began 23 years ago that the U.S. cannot lay claim to the most machines on the list. So has the U.S. lost its edge? Is the supercomputing race over and done? Not even close.

Scaling the Supercomputer Software Summit

In 2014, Oak Ridge National Laboratory (ORNL) announced it would develop the Summit supercomputer using IBM POWER9 CPUs, NVIDIA Volta GPUs and Mellanox EDR InfiniBand interconnects. ORNL says Summit will deliver more than five times the computational performance of Titan when it arrives at the Oak Ridge Leadership Computing Facility in 2017, and it is expected to be ready for use in 2018, giving the U.S. a supercomputer roughly double the current speed of the Sunway TaihuLight.

But what good will Summit’s 3,400 nodes and 200 petaflops be if there is no software ready to use them? Hardware specs and LINPACK scores are great for comparisons, but they don’t show who is best at using supercomputers to solve real-world problems.

“The strength of the U.S. program lies not just in hardware capability, but also in the ability to develop software that harnesses high-performance computing for real-world scientific and industrial applications,” the DOE said in a statement after the Top500 list was released in June. I tend to agree.

In addition to using supercomputers for energy and national security work, the DOE says industry is using its supercomputers to achieve practical results. These include Pratt & Whitney improving the fuel efficiency of its PurePower turbine engines; Boeing studying the flow of debris to improve the safety of a thrust reverser for its new 787 Dreamliner; GM accelerating research on thermoelectric materials to help increase vehicle fuel efficiency; and GE improving the efficiency of its turbines for electricity generation, to name a few.

Investing in Exascale

The U.S., China, France and Japan all have plans to achieve exascale computing (systems capable of a billion billion calculations per second) between 2020 and 2023. This would be a computing milestone, and the U.S. brings up the rear, with its plans targeting exascale in 2023.
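
To put “a billion billion” in perspective, a quick bit of arithmetic (my illustration, using only the TaihuLight figure quoted above):

```python
# What "exascale" means in concrete terms:
# 1 exaflop/s = 10**18 floating-point operations per second.
EXAFLOP = 1e18   # operations per second
PETAFLOP = 1e15

taihulight_pflops = 93  # Sunway TaihuLight's LINPACK score, per the article

print(f"1 exaflop/s = {EXAFLOP / PETAFLOP:,.0f} petaflop/s")
print(f"Speedup needed over TaihuLight: "
      f"{EXAFLOP / (taihulight_pflops * PETAFLOP):.1f}x")
# -> 1,000 petaflop/s, or roughly 10.8x today's fastest machine
```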

But again, that’s not the whole story. It’s not like current simulation software can just be run on future exascale architectures. The real winners of the exascale race will be the countries and organizations that invest in developing applications that can make use of all that computing power. That race has already begun.
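
One way to see why software cannot simply be carried over is Amdahl’s law, which caps the speedup a fixed code can extract from more cores. The sketch below is my own illustration with made-up serial fractions, not figures from the DOE or the ECP:

```python
# A minimal sketch of why software, not hardware, is the hard part:
# Amdahl's law caps the speedup a fixed code can get from more cores.
# The serial fractions below are illustrative, not measured values.

def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Best-case speedup on `cores` cores given a serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for serial in (0.01, 0.001):            # 1% and 0.1% of the code is serial
    for cores in (10_000, 10_000_000):  # petascale-ish vs. exascale-ish
        speedup = amdahl_speedup(serial, cores)
        print(f"serial={serial:.1%}  cores={cores:>10,}  "
              f"max speedup ~ {speedup:,.0f}x")
```

With even 1% of a code running serially, a thousand-fold jump in core count buys almost nothing (about 99x vs. 100x), which is why exascale applications have to be rewritten rather than just recompiled.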

In September, the DOE’s Exascale Computing Project (ECP) announced its first round of funding. It awarded $39.8 million to 15 application development proposals for full funding and seven proposals for seed funding. The fully funded proposals include titles like “Transforming Additive Manufacturing through Exascale Simulation,” “Exascale Deep Learning and Simulation Enabled Precision Medicine for Cancer,” and “Transforming Combustion Science and Technology with Exascale Simulations.” Those are more exciting than any number of nodes and flops, and it’s only the beginning.

About the Author

Jamie Gooch

Jamie Gooch is the former editorial director of Digital Engineering.
