Can We Abstract Away From Multiple Cores?

Peter Varhol says that NetKernel from 1060 Research offers the ability to make use of multiple processor cores without doing anything exotic.

By Peter Varhol

Readers know that I’m very interested in how software developers are adapting to programming for multicore systems. Traditional programs are written to be single-threaded; they execute code sequentially. Taking advantage of multiple cores, and of multiple processors in individual systems, is a technically difficult endeavor that most application programmers don’t even attempt.

You might think that it is easy: Web applications with multiple simultaneous users do it all the time, right? Not necessarily. In some cases, the underlying application server can dispatch multiple threads that may be scheduled on different cores, but those threads clearly have to be independent so that there is no possibility of a race condition or other multiprocessing error.
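To see why independence matters, here is a minimal sketch (my own illustration, not tied to any particular application server) of a request handler pool in which each task touches only its own data. Because no mutable state is shared, the tasks can be scheduled on any available core without risk of a race condition.

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Each "request" works only on its own input -- no shared mutable state,
# so the pool can run these on any core without coordination.
def handle_request(i):
    return i * i

# A pool sized to the machine, as an application server might use.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    results = list(pool.map(handle_request, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The moment two of those handlers wrote to a shared counter or cache without synchronization, this easy scaling would disappear, which is the trap the paragraph above describes.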

1060 Research is releasing some interesting benchmark results on the scalability of the company’s NetKernel framework. NetKernel practices what 1060 Research calls Resource-Oriented Computing, a type of Representational State Transfer, or REST. In a nutshell, everything that constitutes an application is considered and treated as a resource, accessible through a URI.
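The resource-oriented idea can be sketched in a few lines. This is a toy model of my own making, and the URI scheme and `request` function are hypothetical, not NetKernel's actual API; the point is only that every piece of the application is a named resource resolved through a URI rather than a direct function call.

```python
# Hypothetical resource table: every part of the application is a
# resource reachable through a URI (illustration only, not NetKernel).
resources = {
    "res:/greeting": lambda: "hello",
    "res:/square/4": lambda: 4 * 4,
}

def request(uri):
    # The kernel resolves the URI to whatever computes that resource.
    return resources[uri]()

print(request("res:/greeting"))  # hello
print(request("res:/square/4"))  # 16
```

Because callers name *what* they want rather than *how* to compute it, the kernel is free to decide where and when each resource is computed, which is what opens the door to transparent multicore scheduling.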

The benchmark results, which can be found here, are fascinating from the standpoint of multicore programming. They demonstrate that NetKernel response time scales linearly across processors and cores as workload increases. This immediately indicates that NetKernel can make effective use of multiprocessor and multicore systems, without the need for developers to do anything different. That, in my mind, is a very big thing.

The other interesting point the 1060 Research folks make concerns the linear scaling itself. Performance degrades very predictably once the system is fully utilized (at close to 100 percent CPU utilization and throughput).

I asked Randy Kahle of 1060 Research about how response time can scale linearly with a fully loaded system. His response: “This is actually a key finding. The response time is constant as long as there is a core waiting to process the request. Once cores are saturated, then their availability is smoothly allocated to requests. The fact that this is linear shows that there is a smooth allocation of work.”
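Kahle's description fits a simple back-of-envelope model. The function below is my own simplification, not 1060 Research's benchmark code: with a fixed number of cores and a fixed per-request service time, response time stays flat while a core is free, then grows linearly as requests share the saturated cores.

```python
# Toy model of response time under load (author's simplification):
# flat while a core is waiting, then linear fair-sharing past saturation.
def response_time(n_requests, cores, service_time=1.0):
    if n_requests <= cores:
        return service_time  # a core is free: constant response time
    # Cores saturated: work is smoothly shared, so time grows linearly.
    return service_time * n_requests / cores

for n in (2, 4, 8, 16):
    print(n, response_time(n, cores=4))  # 1.0, 1.0, 2.0, 4.0
```

The linearity after saturation is the telling part: it shows the scheduler allocating core time evenly across requests rather than starving some while favoring others.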

What does this mean for users of large-scale multiprocessor and multicore servers? NetKernel takes care of what appears to be an efficient allocation of workload between CPUs and cores. This isn’t quite the same as writing multicore code within the context of a single application, but it’s the next-best thing.


About the Author

Peter Varhol

Contributing Editor Peter Varhol covers the HPC and IT beat for Digital Engineering. His expertise is software development, math systems, and systems management. You can reach him at [email protected].
