Four Steps to Parallelization

Eliminating coding complexities results in a boost in productivity.

By Ilya Mirman

Using parallel clusters and servers to crunch huge computational problems is often a zero-sum game for engineers because the computational speed gained is offset by the time spent on the complex programming necessary to run parallel code. Engineers can quickly crank out new algorithms to test using very high-level language (VHLL) tools such as Python or MATLAB, only to spend months or even years reprogramming the code to run on parallel systems, not to mention waiting hours or days for their programs to run in batch mode.
 


The Star-P platform from Interactive Supercomputing connects science and engineering desktop tools — such as MATLAB, Python, and Mathematica — with high-performance computers.

 

But eliminate those coding complexities, and engineers would gain a tremendous boost in productivity. The following is a four-step guide to getting there.

1: Desktop Tools on Parallel Systems
The first step is to enable engineers to use their preferred desktop tools (MATLAB, Python, Mathematica, Excel, or R, the language for statistical computing) interactively on parallel systems. The solution must hide the complexities of message passing interface (MPI) programming, the common low-level method for programming parallel computers. Few engineers are MPI experts: while MPI lets skilled users achieve top computational performance, it is difficult to use well. Some recent experimental hybrids that integrate higher-level languages with message passing show promise, but they still require users to be versed in MPI.
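To see why, consider what even a trivial data-parallel computation looks like when written directly against MPI. The sketch below uses mpi4py, one of several Python bindings for MPI, to sum a large vector across processes; it is offered only as an illustration of the bookkeeping that step 1 aims to hide, not as code from any particular product.

    # A minimal sketch of explicit message passing using mpi4py (one of
    # several Python bindings for MPI); run with, e.g.,
    # "mpiexec -n 4 python sum_mpi.py".
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # The programmer must partition the data across ranks by hand.
    n = 1_000_000
    local = np.arange(rank, n, size, dtype=np.float64)
    local_sum = local.sum()

    # ...and explicitly combine the partial results with a collective call.
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)
    if rank == 0:
        print("total =", total)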
 

This genetic algorithm created in MATLAB is easily extended to a parallel environment with Star-P using a handful of data tags and commands.
Recently, both commercial software vendors and the open source community have introduced programming tools that extend these applications to parallel systems. These include Interactive Supercomputing’s Star-P, GridMathematica from Wolfram Research, The Distributed Computing Toolbox from The MathWorks, and parallel extensions to the open source language Python, to name a few. While these tools vary widely in terms of algorithm coverage and ease of use, they all represent a great leap forward for engineers who need high-performance computing (HPC).
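By contrast, the programming model these tools aim for looks much closer to ordinary desktop code. The sketch below expresses the same kind of data-parallel sum as a plain map over blocks, using Python's standard multiprocessing module purely as a stand-in for the vendor back ends named above; no message passing appears in the user's code.

    # Hedged sketch: the same data-parallel sum written as an ordinary map,
    # with the standard multiprocessing module standing in for a vendor
    # parallel back end. No MPI calls appear in user code.
    from multiprocessing import Pool
    import numpy as np

    def partial_sum(block):
        # Each worker process handles one block of the array.
        return float(np.sum(block))

    if __name__ == "__main__":
        data = np.arange(1_000_000, dtype=np.float64)
        blocks = np.array_split(data, 8)      # split the work into 8 pieces
        with Pool(processes=8) as pool:
            total = sum(pool.map(partial_sum, blocks))
        print("total =", total)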

2: Optimizing Serial Code
The next step is to make it easier to optimize existing engineering models and algorithms for parallel computers. A well-written serial program does not necessarily make a good parallel program. While achieving 100% parallelization of a model or algorithm every time is unrealistic, several parallel optimization breakthroughs are now within reach for desktop tool users. Vectorizing compilers, object-oriented techniques such as polymorphic overloading, and new parsing techniques can automate the parallelization of some segments of code while guiding users through semi-automatic transformation of the rest.
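As a simple illustration of the kind of transformation involved, consider replacing an element-by-element loop with a whole-array expression. The sketch below uses NumPy; the vectorized form is both faster serially and far easier for a parallel back end to distribute, which is exactly the property these optimization techniques try to expose.

    # Illustrative only: a scalar loop rewritten as a vectorized expression.
    import numpy as np

    def saxpy_loop(a, x, y):
        # Element-by-element loop: correct, but serial by construction.
        out = np.empty_like(y)
        for i in range(len(y)):
            out[i] = a * x[i] + y[i]
        return out

    def saxpy_vectorized(a, x, y):
        # One whole-array statement a parallel runtime can split across
        # processors or distributed memory.
        return a * x + y

    x = np.random.rand(100_000)
    y = np.random.rand(100_000)
    assert np.allclose(saxpy_loop(2.0, x, y), saxpy_vectorized(2.0, x, y))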

The tools with which an engineer designs and creates prototypes must also interface with popular debugging, profiling, and monitoring tools, such as TotalView, to streamline development and improve application performance. These tools let engineers explore their models and algorithms interactively, quickly zero in on time-consuming areas, and determine which portions of the code can have the biggest impact on performance.
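The workflow itself is straightforward: profile first, then parallelize what the profile says matters. The sketch below uses Python's standard cProfile module as a lightweight stand-in for the heavier parallel profilers mentioned above; the slow, loop-based assembly routine shows up immediately as the dominant cost.

    # Hedged sketch of the profile-first workflow, using the standard
    # cProfile module to find hot spots before deciding what to parallelize.
    import cProfile
    import pstats
    import numpy as np

    def assemble(n):
        # Deliberately slow, loop-based matrix assembly.
        m = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                m[i, j] = 1.0 / (1.0 + abs(i - j))
        return m

    def run(n=300):
        m = assemble(n)                  # slow: dominates the profile
        return np.linalg.eigvalsh(m)     # fast by comparison at this size

    cProfile.run("run()", "profile.out")
    stats = pstats.Stats("profile.out")
    stats.sort_stats("cumulative").print_stats(5)   # top five hot spots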
 

The Star-P system is based on a client-server architecture and enables desktop users to transparently access a parallel server’s multiple processors, large distributed memory, and hardware accelerators.

3: Incorporating Libraries and Solvers
Extending parallel capabilities to commonly used components such as numerical libraries, toolboxes, and solvers is the third step. Engineers should be able to integrate components from independent software vendors and the public domain, avoiding needless duplication of the hard-to-engineer infrastructure elements of a new parallel application: the client-server infrastructure, management of users and sessions, and allocation and scheduling of large-scale memory, processors, and files.

And the palette of components would be rich. For example, open-source toolboxes such as SPM2 and Field-II for imaging typically run in a serial fashion, but could be plugged in to run in a task-parallel fashion. Other choices include open-source parallel solvers such as the popular PETSc from Argonne National Labs and Trilinos from Sandia Labs; commercial numerical libraries from vendors such as Visual Numerics, NAG (Numerical Algorithms Group), and more application-specific vendors such as ILOG CPLEX and Axioma in the finance sector. In addition, users can breathe new life into their own decades-old libraries and applications that have lain dormant.

For example, the Star-P Connect library API enables users to extend the functionality of the Star-P compute engine to meet particular application and algorithm requirements. Engineers can plug in existing serial and parallel libraries, access them via their desktop VHLL tools, and execute them in task- and data-parallel modes. Existing codes integrate cleanly, with no requirement that users manage data distribution. Accessing these libraries through desktop tools is a boon to scientists and engineers who may not previously have been able to leverage these C, Fortran, and MPI codes.
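The general pattern is a familiar one: wrap an existing compiled routine so that it looks like an ordinary function in the desktop environment. The sketch below illustrates the idea with Python's standard ctypes module against a hypothetical legacy shared library, libmysolver.so, exporting a routine dsum(double*, int); it shows the pattern only and is not the Star-P Connect API itself.

    # Hedged illustration of wrapping a legacy compiled routine. The library
    # name "libmysolver.so" and its exported function "dsum" are hypothetical
    # placeholders; this is not the Star-P Connect API.
    import ctypes
    import numpy as np

    lib = ctypes.CDLL("./libmysolver.so")        # hypothetical legacy library
    lib.dsum.restype = ctypes.c_double
    lib.dsum.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]

    def dsum(x):
        # Present the legacy routine to the desktop user as a plain function.
        x = np.ascontiguousarray(x, dtype=np.float64)
        ptr = x.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
        return lib.dsum(ptr, x.size)

    print(dsum(np.arange(10.0)))    # prints 45.0 if the library is present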

4: Leveraging Hardware Accelerators
The final step is letting engineers take advantage of breakthroughs such as hardware accelerators and parallel I/O. Accelerators such as FPGAs and GPUs from vendors such as XtremeData and ClearSpeed give technical computing users significant computation, I/O, and memory bandwidth advantages over traditional CPU-only solutions. And with tools such as Star-P Connect, the compute-intensive algorithms embedded in hardware appear as standard library functions that can easily be called from high-level desktop applications.
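From the user's point of view, the goal is that the accelerated routine be indistinguishable from any other library call. The sketch below shows one way that can look in Python, with a hypothetical accelerator binding (here called fpga_board) used when present and a plain NumPy implementation as the CPU fallback; the binding name and its fft function are placeholders, not a real vendor API.

    # Hedged sketch: an accelerator-backed kernel presented as an ordinary
    # library function. "fpga_board" and its fft() are hypothetical
    # placeholders for a vendor accelerator binding; the NumPy path is real.
    import numpy as np

    try:
        import fpga_board                 # hypothetical accelerator binding
        _HAVE_ACCEL = True
    except ImportError:
        _HAVE_ACCEL = False

    def fft(x):
        # Called like any other library routine from the desktop tool.
        x = np.asarray(x, dtype=np.complex128)
        if _HAVE_ACCEL:
            return fpga_board.fft(x)      # hypothetical accelerated path
        return np.fft.fft(x)              # CPU fallback via NumPy

    print(fft(np.ones(8)))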

By eliminating parallel reprogramming, engineers can dramatically accelerate prototyping and time to market while cutting labor. Development teams can make better decisions and develop better products. Tools and techniques — automated and user driven — for optimizing existing desktop models will enable engineers to explore next-generation problems that have been out of reach to date, resulting in more rapid innovation than has been seen in the last decade.

Incorporating a vast array of previously developed libraries, solvers, and algorithms, along with access to specialized hardware accelerators, is icing on the cake. It will take the productivity of the technical community to the next level of discovery by dismantling programming barriers and enabling more people to tap the power of high-performance computing resources across organizations.
 
Contact Information
 
Axioma
New York, NY
axiomainc.com
 
Argonne National Laboratory
Argonne, IL
anl.gov
 
ClearSpeed
San Jose, CA
clearspeed.com
 
ILOG CPLEX
Mountain View, CA
ilog.com
 
Star-P, Star-P Connect
Interactive Supercomputing
Waltham, MA
interactivesupercomputing.com
 
MATLAB, Distributed Computing Toolbox
The MathWorks, Inc.
Natick, MA
mathworks.com
 
Excel
Microsoft
Redmond, WA
microsoft.com
 
Numerical Algorithms Group
Oxford, UK
nag.co.uk
 
Python Software Foundation
python.org
 
 
Trilinos
Sandia National Laboratories
Albuquerque, NM
sandia.gov
 
TotalView
TotalView Technologies
Natick, MA
totalviewtech.com
 
SPM2
University College London
London, UK
fil.ion.ucl.ac.uk/spm/software/spm2/
 
Visual Numerics, Inc.
Houston, TX
vni.com
 
GridMathematica, Mathematica
Wolfram Research, Inc.
Champaign, IL
wolfram.com
 
XtremeData, Inc.
Chicago, IL
xtremedatainc.com
 


Ilya Mirman is a VP at Interactive Supercomputing (ISC). Prior to joining ISC, he was a VP at SolidWorks. Mirman has a B.S. in mechanical engineering from the University of Massachusetts, an M.S. in mechanical engineering from Stanford, and an MBA from MIT’s Sloan School.
