InfiniPath Interconnect Sets Low Latency Performance Records for HPC

MPI benchmarks show PathScale's new InfiniPath clusters exploit multiprocessor nodes and dual-core processors.

By DE Editors

New benchmark results show InfiniPath Interconnect for InfiniBand from PathScale (Mountain View, CA) provides the lowest latency Linux cluster interconnect across a broad spectrum of cluster-specific benchmarks for message passing (MPI) and TCP/IP applications.

The InfiniPath HTX Adapter plugs into standard HyperTransport technology-based HTX slots on AMD Opteron servers. Optimized for communications-sensitive applications, InfiniPath achieved an MPI latency of 1.32 microseconds (as measured by the standard MPI "ping-pong" benchmark), an n1/2 message size of 385 bytes, and a TCP/IP latency of 6.7 microseconds. These results are 50 to 200 percent better than those of similar interconnect products. InfiniPath also produced industry-leading results on more comprehensive metrics that predict how real applications will perform.
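For context, the "ping-pong" latency figure is typically obtained by bouncing a small message back and forth between two MPI ranks and halving the round-trip time. The minimal C/MPI sketch below illustrates the idea; it is not PathScale's benchmark code, and the message size and iteration count are arbitrary choices:

/* Minimal MPI ping-pong latency sketch (illustrative only).
 * Rank 0 and rank 1 exchange a small message repeatedly; half the
 * average round-trip time approximates one-way latency. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000;          /* arbitrary iteration count */
    const int msg_bytes = 8;          /* small message, near zero-byte latency */
    char buf[8] = {0};
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("one-way latency: %.2f microseconds\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}

Run with one rank on each of two nodes, the reported figure approximates the interconnect's one-way MPI latency.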

“When evaluating interconnect performance for HPC applications, it is essential to go beyond the simplistic zero-byte latency and peak streaming bandwidth benchmarks,” said Art Goldberg, COO of PathScale. “InfiniPath delivers the industry’s best performance on simple MPI benchmarks and provides dramatically better results on more meaningful interconnect metrics….”

PathScale InfiniPath exploits multiprocessor nodes and dual-core processors to deliver greater effective bandwidth as additional CPUs are added. Existing serial offload HCA designs cause messages to stack up when multiple processors try to access the adapter. By contrast, InfiniPath's messaging parallelization capabilities let multiple processors or cores send messages simultaneously, maintaining constant latency while dramatically improving small-message capacity, further reducing the n1/2 message size, and substantially increasing effective bandwidth.
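To see why concurrent senders matter, a small-message rate test of the kind sketched below can be run with an increasing number of ranks per node. This is an illustrative C/MPI example, not PathScale's benchmark; the process layout and counts are assumptions. An interconnect that serializes adapter access shows a flat aggregate message rate as senders are added, while one that parallelizes messaging scales it up:

/* Illustrative small-message rate sketch (assumptions noted below).
 * Assumed layout: the first half of the ranks run on node A and each
 * streams small messages to a partner rank in the second half, assumed
 * to run on node B. The aggregate rate shows whether concurrent senders
 * on one node are serialized by the adapter or proceed in parallel. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 100000;          /* arbitrary iteration count */
    char buf[8] = {0};                 /* small 8-byte message */
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2 || size % 2 != 0) {
        if (rank == 0) fprintf(stderr, "run with an even number of ranks\n");
        MPI_Finalize();
        return 1;
    }

    int half = size / 2;
    int sender = rank < half;
    int partner = sender ? rank + half : rank - half;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (sender)
            MPI_Send(buf, sizeof buf, MPI_CHAR, partner, 0, MPI_COMM_WORLD);
        else
            MPI_Recv(buf, sizeof buf, MPI_CHAR, partner, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }
    double t1 = MPI_Wtime();

    /* Sum the per-sender message rates; receivers contribute zero. */
    double my_rate = sender ? iters / (t1 - t0) : 0.0;
    double total = 0.0;
    MPI_Reduce(&my_rate, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("aggregate rate: %.0f messages/sec from %d concurrent senders\n",
               total, half);

    MPI_Finalize();
    return 0;
}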

PathScale has published a white paper that includes a technical analysis of several application benchmarks comparing the new InfiniPath interconnect with competitive interconnects. The white paper can be downloaded from pathscale.com/whitepapers.html. Potential customers and ISVs are also invited to remotely test their own MPI and TCP/IP applications on a fully integrated InfiniPath cluster at PathScale’s benchmark center in Mountain View, CA.

For more information, visit pathscale.com.
 



About the Author

DE Editors

DE’s editors contribute news and new product announcements to Digital Engineering.
Press releases may be sent to them via [email protected].
