1.1 A shift toward parallel hardware

The mainstream computing architectures of the past few decades focused on executing a single thread of sequential instructions faster. That focus gave rise to a popular corollary of Moore's Law for computing performance: processor performance per unit cost has doubled roughly every eighteen months for the last twenty years, and developers counted on that trend to keep their increasingly complex programs performing well.[4]

That prediction has been remarkably accurate, and it is reasonable to expect processor computing capacity to keep doubling every one-and-a-half years for at least another decade. To sustain that increase, however, chip designers have had to make a major shift in design focus in recent years. Instead of trying to raise the clock speed at which a single thread of instructions executes, new processor designs execute many concurrent instruction threads on a single chip. While the clock speed of each computing core on a chip is expected to improve only marginally over the next few years, processors with dozens of cores are already showing up in commodity servers, and multicore chips are the norm even in inexpensive desktops and notebooks.

This shift in the design of high-volume, commodity processor architectures, such as the Intel x86, has at least two ramifications for developers. First, because individual core clock speeds will increase only modestly, we will need to pay renewed attention to the algorithmic efficiency of sequential code. Second, and more important in the context of actors, we will need to design programs that take maximum advantage of the available processor cores. In other words, we need not only to write programs that run correctly on concurrent hardware, but also to design programs that opportunistically scale to all available processing units or cores.
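To make the second point concrete, here is a minimal sketch, not from the original text, of what opportunistic scaling can look like on the JVM, written in Scala since actors are discussed in that context. The program asks the runtime how many cores it may use and sizes a pool of workers accordingly, so the same binary occupies a dual-core notebook and a many-core server without recompilation. The ScaleToCores object and its task bodies are illustrative placeholders.

    // Illustrative sketch: size a worker pool to the number of cores
    // the JVM reports, rather than hard-coding a degree of parallelism.
    import java.util.concurrent.{Executors, TimeUnit}

    object ScaleToCores {
      def main(args: Array[String]): Unit = {
        val cores = Runtime.getRuntime.availableProcessors() // cores visible to the JVM
        val pool  = Executors.newFixedThreadPool(cores)      // one worker per core

        // One independent task per core; each stands in for a chunk of real work.
        for (i <- 0 until cores)
          pool.execute(() => println(s"task $i on ${Thread.currentThread.getName}"))

        pool.shutdown()
        pool.awaitTermination(1, TimeUnit.MINUTES)
      }
    }

Actor runtimes typically perform this kind of sizing internally, which is one reason actor-based programs tend to spread across additional cores without source changes.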
