Foreword

Robert J. Harrison, Institute for Advanced Computational Science, Stony Brook University

I cannot think of a more exciting (that is to say, tumultuous) era in high-performance computing since the introduction in the mid-1980s of massively parallel computers such as the Intel iPSC and nCUBE, followed by the IBM SP and Beowulf clusters. But now, instead of benefiting only high-performance applications parallelized with MPI or other distributed-memory models, the revolution is happening within a node and is benefiting everyone, whether running a laptop or a multi-petaFLOP/s supercomputer. In sharp contrast to the GPGPUs that fired our collective imagination in the last decade, the Intel® Xeon Phi™ product family (its first product version already delivering one teraFLOP/s of double-precision peak performance!) brings supercomputer performance right into everyone’s office while employing standard programming tools fully compatible with the desktop environment, including a full suite of numerical software and the entire GNU/Linux stack. With both architectural and software unity from multi-core Intel® Architecture processors to many-core Intel Xeon Phi products, we have for the first time a holistic path for portable, high-performance computing that is based upon a familiar and proven threaded, scalar-vector programming model. And Intel’s vision for the Intel® Many Integrated Core (Intel® MIC) architecture takes us from the early petaFLOP era in 2012 into the exaFLOP era in 2020; indeed, the Intel Xeon Phi coprocessor based on Intel MIC architecture is credibly a glimpse into that future.

So what’s the catch? There’s actually no new news here: sequential (more specifically, single-threaded and non-vectorized) computation is dead, even on the desktop. Long dead. Pipelined functional units, multiple instruction issue, SIMD extensions, and multi-core architectures killed that years ago. But if you have one of the 99 percent of applications that are not yet both multi-threaded and vectorized, then on a multi-core Intel Xeon processor with AVX SIMD units you could be missing a factor of up to 100x in performance, and the highly threaded Intel MIC architecture implies a factor of up to 1000x. Yes, you are reading those numbers correctly. A scalar, single-threaded application, depending on what is limiting its performance, could be leaving several orders of magnitude in single-socket performance on the table. All modern processors, whether CPU or GPGPU, require very large amounts of parallelism to attain high performance. Some good news on Intel Xeon Phi coprocessors is that, with your application running out of the box thanks to the standard programming environment (again in contrast to GPGPUs, which require recoding even to run), you can use familiar tools to analyze performance and have a robust path for incremental transformations to optimize the code, and those optimizations will carry directly over to mainstream processors. But how to optimize the code? Which algorithms, data structures, numerical representations, loop constructs, languages, compilers, and so on are a good match for Intel Xeon Phi products? And how to do all of this in a way that is not necessarily specific to the current Intel MIC architecture but instead positions you for future and even non-Intel architectures? This book generously and accessibly puts the answers to all of these questions, and more, into your hands.
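To make that threaded, scalar-vector style concrete, consider a minimal sketch (of my own devising, not drawn from the chapters that follow, and assuming a C99 compiler with OpenMP support, e.g. compiled with -fopenmp): threads spread the work across the cores, while a dependency-free loop body lets the compiler generate SIMD code, with the same source serving processor and coprocessor alike.

    #include <stdio.h>

    /* c = alpha*a + b: OpenMP threads spread the iterations across the
       cores, and the dependency-free loop body lets the compiler
       vectorize each thread's chunk with the SIMD units. */
    static void scaled_sum(int n, double alpha,
                           const double *restrict a,
                           const double *restrict b,
                           double *restrict c)
    {
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            c[i] = alpha * a[i] + b[i];
    }

    int main(void)
    {
        enum { N = 1000 };
        static double a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = (double)i; b[i] = 1.0; }
        scaled_sum(N, 2.0, a, b, c);
        printf("c[10] = %g\n", c[10]);  /* 2.0*10 + 1 = 21 */
        return 0;
    }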

A critical point is that early in the “killer-micro” revolution that replaced custom vector processors with commodity CPUs, application developers ceased to develop vectorized algorithms, because the first generations of these CPUs could indeed attain a good fraction of peak performance on operation-rich sequential code. Fast-forward two decades and this is now far from true, but the passage of time has erased from our collective memory much of the wisdom and folklore of the vectorized algorithms so successful on Cray and other vector computers. However, the success of that era should give us great confidence that a multi-threaded, scalar-vector programming model supported by a rich vector instruction set is a great match for a very broad range of algorithms and disciplines. Needless to say, there are new challenges, such as the deeper and more complex memory hierarchy of modern processors, an order of magnitude more threads, the lack of true hardware gather/scatter, and compilers still catching up with (rediscovering?) what was possible 25 years ago.

In October 2010, I gave a talk entitled “DSLs, Vectors, and Amnesia” at an excellent workshop on language tools in Houston organized by John Mellor-Crummey. Amnesia referred to the loss of vectorization capabilities mentioned above. I used the example of rationalizing the bizarrely large speedups¹ claimed in the early days of GPGPUs as a platform for identifying successes and failures in mainstream programming. Inconsistent optimization of applications on the two platforms is the trivial explanation, with the GPGPU realizing a much higher fraction of its peak speed. But why was this so, and what was to be learned from this about writing code with portable performance? There were two reasons underlying the performance discrepancy. First, data-parallel programming languages such as OpenCL and NVIDIA’s CUDA forced programmers to write massively data-parallel code that, with a good compiler and some tuning of vector lengths and data layout, perfectly matched the underlying hardware and realized high performance. Second, the comparison was usually made against scalar, non-threaded x86 code that we now understand to be far from optimal. The universal solution was to back-port the GPGPU code to the multi-core CPU, retuning for the different numbers of cores and vector/cache sizes; indeed, with care and some luck the same code base could serve on both platforms (and certainly so if programming with OpenCL). All of the optimizations for locality, bandwidth, concurrency, and vectorization carry straight over, and the cleaner, simplified, dependency-free code is more readily analyzed by compilers. Thus, there is every reason to expect that nearly all algorithms that work well on current GPGPUs can, with a minimal amount of restructuring, execute equally well on the Intel Xeon Phi coprocessor, and that algorithms requiring fine-grain concurrent control should be significantly easier to express on the coprocessor than on a GPGPU.

In reading this book you will come to know its authors. Through the Intel MIC architecture early customer enabling program, I have worked with Jim Jeffers, who, in addition to being articulate, clear-thinking, and genial, is an expert well worth your time and the price of this book. Moreover, he and the other leaders of the Intel Xeon Phi product development program truly sense that they are doing something significant and transformative that will shape the future of computing and is worth the huge commitment of their professional and personal lives.

This book belongs on the bookshelf of every HPC professional. Not only does it successfully and accessibly teach us how to use and obtain high performance on the Intel MIC architecture, it is also about much more than that. It takes us back to the universal fundamentals of high-performance computing, including how to think and reason about the performance of algorithms mapped to modern architectures, and it puts into your hands powerful tools that will be useful for years to come.

October 2012


¹ Factors of over 100x in performance compared to conventional x86 CPUs that, according to hardware metrics such as bandwidth and floating-point speed, were just 4–12x slower.
