Wednesday 11 February 2009

The Multi-Core Dilemma - By Patrick Leonard

By Steve Pitzel (Intel) on March 14, 2007 at 8:27 pm

Guest Blogger Bio: Patrick Leonard

Hardware Evolution

Throughout the history of modern computing, enterprise application developers have been able to rely on new hardware to deliver significant performance improvements while reducing costs at the same time. Unfortunately, the increasing difficulty of managing heat and power consumption, along with the limits imposed by quantum physics, has made this progression harder and harder to sustain.

There is good news. Hardware vendors recognized this several years ago, and have introduced multi-core hardware architectures as a strategy for continuing to increase computing power without having to rely solely on ever smaller, ever faster circuits.

Sounds Good, So What's the Dilemma?

The "dilemma" is this: a large percentage of mission-critical enterprise applications will not "automagically" run faster on multi-core servers. In fact, many will actually run slower.

There are two main reasons for this:

  1. The clock speed for each "core" in the processor is slower than in previous generations. This is done primarily to manage power consumption and heat dissipation. For example, a single-core processor from a few years ago that ran at 3.0 GHz is being replaced with a dual- or quad-core processor with each core running in the neighborhood of 2.6 GHz. More total processing power, but each core is a bit slower.
  2. Most enterprise applications are not programmed to be multi-threaded. A single-threaded application cannot take advantage of the additional cores in a multi-core processor without sacrificing ordered processing. The result is idle processing time on the additional cores. Multi-threaded software should do better, but many people are finding that their multi-threaded code behaves differently in a multi-core environment than it did on a single core, so even those applications should be retested (see the sketch after this list).
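
As a concrete illustration of the second point, here is a minimal Java sketch (the class name and thread counts are purely illustrative) of the kind of latent defect that often surfaces only when threads genuinely run in parallel: an unsynchronized counter that quietly loses updates on a multi-core machine.

```java
import java.util.concurrent.*;

/**
 * Minimal sketch of a latent concurrency bug: the unsynchronized counter
 * appeared to work when threads rarely overlapped, but on a multi-core
 * machine the interleaved read-modify-write steps lose updates.
 */
public class LostUpdateDemo {
    private static int counter = 0;              // shared, unsynchronized state

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            pool.submit(() -> {
                for (int j = 0; j < 100_000; j++) {
                    counter++;                   // not atomic: read, add, write
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        // Expected 400000; on a multi-core machine the printed total is usually lower.
        System.out.println("counter = " + counter);
    }
}
```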

Won't my application server or operating system take care of this for me?

One of the key considerations here is the order of processing. A single-threaded application that needs to ensure that A happens before B cannot simply be run as multiple concurrent instances on multiple cores and still guarantee that order.

Application servers and operating systems are generally multi-threaded themselves, but their multi-threaded nature does not necessarily extend to the applications that run on them. The app server and OS don't know the proper order for your particular business logic unless you write code to tell them. In fact, they are designed to simply run any ready thread as soon as possible, which can be disastrous in a business application. SMP (symmetric multiprocessing) has similar limitations.

So we are back to the same problem -- how to run multiple instances concurrently on multiple cores and still ensure a particular order.
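
To make the ordering problem concrete, here is a small, hypothetical Java sketch: two dependent business steps are handed to a thread pool, and nothing in the runtime guarantees that the step submitted first finishes first.

```java
import java.util.concurrent.*;

/**
 * Small sketch of the ordering problem: the executor is free to run the two
 * steps on different cores, so "A" is not guaranteed to complete before "B"
 * unless the application code enforces that dependency itself.
 */
public class OrderingDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Hypothetical business steps where the debit must happen before the credit.
        pool.submit(() -> System.out.println("A: debit account"));
        pool.submit(() -> System.out.println("B: credit account"));

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        // Across runs the two lines may appear in either order.
    }
}
```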

Intel's 45nm Announcement

Intel recently announced that it will have chips with 45nm features in the near future, a significant advance from the 60-65nm processes that are prevalent today. The company has also made it clear that this does not reduce the need for multi-core.

Around the same time, Intel also revealed that it has an 80-core processor in the works. Power and heat will have to be addressed before a processor with 80 cores can come to market. So 45nm may mean some increase in clock speeds for future processors, but its primary benefit will be enabling a larger number of cores.

Concurrent Computing

There is no easy solution, but there are several options, and they all involve bringing concurrency to your software. Concurrent computing (or parallel programming, as many refer to it) is likely to be a very hot topic in the coming years, so it's a good idea to start preparing now.

Since multi-core servers already make up most new server shipments, concurrent computing in the enterprise will quickly become a way of life. So we need to put some thought into two things: how to make existing applications run concurrently, and how to build new systems for concurrency.

More people are talking about multi-threaded software development as the primary answer to concurrency than at any time in recent memory. However, instead of writing our application code to be multi-threaded, we should consider how to abstract threading out of application code. Multi-threaded code is difficult to write and difficult to test, which is why many people have avoided it in the first place.
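
As a small illustration of why hand-rolled threading is hard to test, consider this hypothetical sketch: two threads acquire the same pair of locks in opposite order, so the program deadlocks only under unlucky timing, exactly the kind of bug that hides during testing and shows up in production on multi-core hardware.

```java
/**
 * Tiny sketch of a timing-dependent bug: each thread takes the two locks in a
 * different order, so the program deadlocks only when the threads interleave
 * badly, which happens far more often when they truly run in parallel.
 */
public class DeadlockDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (LOCK_A) {
                pause();                         // widen the race window
                synchronized (LOCK_B) {
                    System.out.println("thread 1 done");
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (LOCK_B) {              // opposite acquisition order
                pause();
                synchronized (LOCK_A) {
                    System.out.println("thread 2 done");
                }
            }
        }).start();
    }

    private static void pause() {
        try {
            Thread.sleep(50);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```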

At Rogue Wave Software we have been working for several years in the area of "Software Pipelines". Software Pipelines* is an approach that can be used to abstract the threading model out of the application code. As software developers, we would not mix UI code with business logic, and for good reason. A similar principle should apply when programming for concurrency -- the threading model should not be driven from within the application logic.

There are several important benefits to this approach. Removing threading from application code means:

  - The application developer doesn't have to own the threading model
  - Existing applications can move into a concurrent environment with much less effort
  - It is easier to scale to additional computing resources without modifying the application
  - If done right, the application can be continually tuned for performance without modifying application code

This approach does not allow the application developer to wash their hands entirely of concurrency. Application code needs to be written to be thread-aware, but does not need to have threads written into it. For more information on Software Pipelines, you can read this white paper (you have to log in to webservices.org to download it).
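
To give a flavor of what "thread-aware but thread-free" application code can look like, here is a hypothetical sketch (all names are invented, and this is not Rogue Wave's actual API): the business logic is a plain handler function, and a small dispatcher routes every message with the same key to the same single-threaded pipeline, so per-key ordering is preserved while different keys run in parallel.

```java
import java.util.concurrent.*;
import java.util.function.Consumer;

/**
 * Hypothetical sketch of keeping the threading model out of application code.
 * Handlers are plain functions; the dispatcher maps each key to a fixed
 * single-threaded pipeline, preserving order per key while allowing
 * different keys to be processed concurrently.
 */
public class PipelineDispatcher<T> {
    private final ExecutorService[] pipelines;
    private final Consumer<T> handler;           // thread-agnostic application logic

    public PipelineDispatcher(int pipelineCount, Consumer<T> handler) {
        this.pipelines = new ExecutorService[pipelineCount];
        for (int i = 0; i < pipelineCount; i++) {
            pipelines[i] = Executors.newSingleThreadExecutor();
        }
        this.handler = handler;
    }

    /** Messages sharing a key (e.g. an account ID) are processed in arrival order. */
    public void dispatch(Object key, T message) {
        int index = Math.floorMod(key.hashCode(), pipelines.length);
        pipelines[index].submit(() -> handler.accept(message));
    }

    public void shutdown() {
        for (ExecutorService pipeline : pipelines) {
            pipeline.shutdown();
        }
    }

    public static void main(String[] args) {
        // Application code supplies only the handler and a key; no threads in sight.
        PipelineDispatcher<String> orders =
                new PipelineDispatcher<>(4, msg -> System.out.println("processing " + msg));
        orders.dispatch("account-42", "debit $10");
        orders.dispatch("account-42", "credit $10"); // runs after the debit above
        orders.dispatch("account-77", "debit $5");   // may run in parallel
        orders.shutdown();
    }
}
```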

Consider how to abstract your threading model from your application logic, and you may find a smoother (concurrent) path ahead.

* Software Pipelines is a general term, not owned or trademarked by Rogue Wave or anyone as far as I'm aware. It also does not require the use of any of our technology. Software Pipelines borrows conceptually from hardware pipelines and also from fluid dynamics, which has interesting parallels to software systems.

Categories: Multi-Core
