Parallel or Bust - Taking Advantage of Your New PC

By John Pasquarette

In the spring of 2005, PC makers changed the rules for PC software by introducing dual-core processor-based PCs, ushering in what was heralded as the greatest single-step performance improvement since the 386. Did you notice it?

The introduction of dual-core PCs not only improves performance but also requires software developers to overhaul their applications to take advantage of the new hardware. Even though this fundamental change in the PC industry happened more than two years ago, many engineers and scientists are just now upgrading their laboratory computers or factory machines. And you know what some of them are finding? Their applications may not run any faster! Now what?

For the past 20 years, clock speed was the single most visible specification of technical leadership in the processor wars, as vendors raced to 1, 2, and 3 GHz. However, the challenge of driving more speed into smaller chips while managing power consumption and heat dissipation proved too much. Just as the end of Moore’s Law was about to be proclaimed, processor vendors found a new approach: adding multiple cores in parallel.

This breakthrough, however, introduces a huge disruption in the software industry, forcing it to learn how to build multithreaded applications to take advantage of the new PC technology.

Multicore processors naturally improve performance in multitasking scenarios. For example, when consumers are surfing the Web, checking e-mail, and running multiple Microsoft Office applications, a multicore system can balance these separate applications among processor cores for a faster, more responsive experience.

Yet for many engineers and scientists, the PC is a core platform dedicated to running a single application: automating an experiment, running a test system, or controlling a mission-critical process. Over the years, upgrading to newer computers with faster processors was key to accomplishing more with PC-based systems. You now have to learn how to divide your application into threads so the OS can balance the workload across multiple processing cores.

As you can imagine, this represents a completely new challenge for the many developers who write their applications using a standard, single-threaded approach. Intel is investing millions to educate software developers about multithreading so they can take advantage of new PC architectures. But multithreaded programming is difficult. In fact, there are more than 2,200 books available on amazon.com dedicated to multithreaded programming, and a Google search for “multithreaded programming” returns more than 1.3 million results, more than 1.2 million of them updated in the past year.

All of this research and documentation makes sense for professional programmers working on an application, video game, or database system, but for engineers and scientists it now stands as a huge impediment to improving their computer-based systems.

The fundamental challenge in taking advantage of multicore architecture is that traditional tools were not designed for parallel architectures. It is very difficult to use a sequential programming language, such as C or Basic, to build a parallel application; hence the need for cumbersome programming concepts such as threads to express parallelism, as sketched below.
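To make that burden concrete, here is a minimal sketch of what explicit threading looks like in C, using POSIX threads as one common example of a threading API (the article names C generally, not any particular library). The work_range type and sum_range function are hypothetical, invented only to show the bookkeeping needed to split a single loop across two cores.

/* Minimal sketch: summing an array on two cores with POSIX threads.
   Compile with: cc -pthread sum2.c */
#include <pthread.h>
#include <stdio.h>

#define N 1000000

static double data[N];

typedef struct {
    int start;      /* first index this thread processes */
    int end;        /* one past the last index */
    double result;  /* partial sum written back by the thread */
} work_range;

/* Each thread sums its own slice of the array. */
static void *sum_range(void *arg)
{
    work_range *w = (work_range *)arg;
    double sum = 0.0;
    for (int i = w->start; i < w->end; i++)
        sum += data[i];
    w->result = sum;
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    /* The programmer, not the compiler, decides how to divide the work,
       create the threads, and join them again. */
    pthread_t t1, t2;
    work_range a = { 0, N / 2, 0.0 };
    work_range b = { N / 2, N, 0.0 };

    pthread_create(&t1, NULL, sum_range, &a);
    pthread_create(&t2, NULL, sum_range, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("total = %f\n", a.result + b.result);
    return 0;
}

Even in this tiny example, the developer must decide how to partition the data, manage the thread handles, and merge the partial results, and none of that code has anything to do with the measurement or control task itself.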

Clearly, multicore systems require a new programming approach so we can take advantage of them. Parallel, or graphical, programming languages promise to deliver the benefits of multicore without the pain. One such language, LabVIEW from National Instruments, executes in a dataflow paradigm rather than sequentially. When users develop programs in NI LabVIEW, they connect function blocks on a diagram with wires that represent data. The graphical approach provides a very intuitive way to represent parallelism: wires that split and connect to different function blocks are automatically assigned to threads by the compiler. For years, users have been creating parallel applications with LabVIEW without even knowing it; the parallelism happens naturally when they build their solutions as block diagrams.
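As a rough text analogy of that split-wire behavior, consider two independent computations fed by the same data. In LabVIEW the diagram is graphical and the scheduling is automatic; the C sketch below (again using POSIX threads, with the hypothetical compute_mean and compute_max helpers standing in for two function blocks) spells out by hand the concurrency a dataflow compiler extracts on its own.

/* Two independent "branches" of the same data, run concurrently. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000

static double data[N];
static double mean_result, max_result;

/* Branch 1: average the samples. */
static void *compute_mean(void *arg)
{
    (void)arg;
    double sum = 0.0;
    for (int i = 0; i < N; i++)
        sum += data[i];
    mean_result = sum / N;
    return NULL;
}

/* Branch 2: find the largest sample. */
static void *compute_max(void *arg)
{
    (void)arg;
    double max = data[0];
    for (int i = 1; i < N; i++)
        if (data[i] > max)
            max = data[i];
    max_result = max;
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = (double)(i % 100);

    /* The branches share no dependency, so they can run on separate cores. */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, compute_mean, NULL);
    pthread_create(&t2, NULL, compute_max, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("mean = %f, max = %f\n", mean_result, max_result);
    return 0;
}

Because the two branches never depend on each other’s output, a dataflow compiler can schedule them on separate cores without the programmer writing any of the thread calls above.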

With such programming languages, users can achieve 20 to 30 percent performance improvements simply by moving their applications from single-core to multicore systems, and even greater gains by focusing more deliberately on parallelism. The key is that graphical programming empowers engineers and scientists to develop parallel applications much more easily than traditional sequential languages do.

Multicore systems are here to stay — Intel has already announced early research on an 80-core processor chip. The challenges of programming multicore systems are here to stay as well. These challenges represent a new opportunity for modern tools to displace traditional programming tools that require too much extra work, particularly for engineers and scientists who want to take full advantage of the power in today’s PCs.


John Pasquarette is the director of software marketing for National Instruments. He received his B.S. in electrical engineering from Texas A&M University in 1989 and joined National Instruments in 1990 as an applications engineer. In his current role, Pasquarette is responsible for the worldwide promotion and positioning of the company’s software platform. Send comments about this commentary to the DE Editors at [email protected].
