Intel Xeon Phi Enters Parallel Quest for HPC Dominance

Ever seen three thoroughbreds heading for the same finishing line, but running on different tracks? Watch AMD, Intel, and NVIDIA going after the high-performance computing (HPC) market. This week, Intel entered an official name into the race: Intel Xeon Phi. The first product to feature Intel’s many integrated core (MIC) architecture, Phi is expected to ship with more than 50 cores.

“Made with Intel’s innovative 22nm, 3-D tri-gate transistors, the Intel Xeon Phi coprocessor, available in a PCIe form factor, contains more than 50 cores and a minimum of 8GB of GDDR5 memory. It also features 512b wide SIMD support that improves performance by enabling multiple data elements to be processed with a single instruction,” according to Intel.
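
To put that 512-bit figure in perspective: a 512-bit vector register holds 16 single-precision (or 8 double-precision) floating-point values, so one vector instruction can do the work of 16 scalar operations. The short C sketch below is purely illustrative (the function name and loop are hypothetical, not Intel code); it shows the kind of loop a vectorizing compiler could map onto such wide SIMD units.

```c
/* Illustrative sketch: a loop a vectorizing compiler can map onto wide SIMD units.
   With 512-bit registers, 16 single-precision floats fit in one register,
   so one vector instruction can replace 16 scalar multiply-adds. */
#include <stddef.h>

void scale_and_add(size_t n, float a,
                   const float *restrict x, float *restrict y)
{
    for (size_t i = 0; i < n; i++)   /* candidate for auto-vectorization */
        y[i] = a * x[i] + y[i];
}
```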

By Intel’s own classification, Phi is a coprocessor, not a central processor, in the same sense that AMD’s and NVIDIA’s GPUs are graphics coprocessors. Unlike those GPUs, however, Phi is not restricted to graphics tasks. It’s a general-purpose coprocessor, aimed at handling highly parallel computing jobs (simulation and rendering are the top two such jobs for engineers).

Though originally developed as graphics coprocessors, AMD’s and NVIDIA’s GPUs are also well on their way to tackling general-purpose computing jobs. In AMD’s case, the path to parallelism is its heterogeneous system architecture (HSA). It’s a mouthful, but basically it’s a computing environment that fuses CPU and GPU functions into a single device (the root of AMD Fusion, as the product line is called). The HSA Foundation, a newly launched nonprofit industry group, describes HSA as “parallel computation utilizing CPU, GPU and other programmable and fixed function devices.”

Taking a slightly different path, NVIDIA proposes CUDA as the programming environment for writing general-purpose applications (not just graphics applications) that take advantage of the GPU’s parallel processing power. This allows compute-intensive applications, like simulation, to run on GPU clusters.

Both AMD and NVIDIA must have realized that asking programmers to master a whole new programming language to write parallel-processing code for their products would put a roadblock in the race to HPC. So both are working hard to make their respective architectures, HSA and CUDA, easier to work with for programmers who use common programming languages, like C, C++, Java, Fortran, and Python.

In that respect, Intel’s Phi may have an advantage over its rivals. Intel’s MIC is basically a small cluster of CPU cores on a single chip, so programmers are expected to be able to write parallel code for Phi in the same standard programming languages they already use for CPUs.
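
As a rough sketch of what that might look like in practice, the plain C and OpenMP example below is the kind of standard, CPU-style parallel code in question. It is illustrative only, not Phi-specific code from Intel; the point is that the same source could, in principle, be built for a multicore Xeon or a many-core coprocessor without learning a new language.

```c
/* A standard OpenMP reduction in plain C: the same source that runs on a
   multicore Xeon host can, in principle, be recompiled for a many-core
   coprocessor, with no new language to learn. */
#include <omp.h>
#include <stdio.h>

#define N (1 << 20)

int main(void)
{
    static double x[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        x[i] = 1.0 / (i + 1);

    #pragma omp parallel for reduction(+:sum)   /* split iterations across cores */
    for (int i = 0; i < N; i++)
        sum += x[i];

    printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}
```

Any OpenMP-capable C compiler can build it (for example, cc -fopenmp); no vendor-specific language extensions are involved.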

The first test cluster built with Xeon processors and Phi coprocessors is already up and running, the company announced, and is delivering 118 TFLOPS of performance. The real test of Phi’s parallel-processing prowess will come when Stampede, a supercomputer at the Texas Advanced Computing Center (TACC), goes online in early 2013. Stampede is expected to run at 10 petaflops, powered by thousands of Intel’s MIC coprocessors.

Processor makers, both CPU and GPU vendors, see HPC as fertile territory: the growth of computer-driven simulation in scientific, medical imaging, engineering, and visualization fields is expected to drive demand for parallel-processing clusters. But developing parallel-processing applications that can take advantage of the new crop of hardware is proving a challenge. One way to lower the barrier to entry is to refine the programming environment, so programmers can communicate with the processors in common programming languages.

For more, read “GPU is Not Just for Ninja Programmer,” June 13, 2012; and “Intel Announces Intel Xeon Phi Brand,” June 18, 2012.

About the Author

Kenneth Wong

Kenneth Wong is Digital Engineering’s resident blogger and senior editor. Email him at [email protected] or share your thoughts on this article at digitaleng.news/facebook.
