Summary

The example code in this chapter was designed to demonstrate the parallel data processing capabilities of modern Intel-based processors. The technology used here is, of course, far from providing the power of architectures such as CUDA, but it can still significantly speed up certain algorithms. The algorithm we worked on is very simple and hardly requires any optimization at all, as it could be implemented with FPU instructions alone and we would hardly notice any difference; nevertheless, it illustrates the way in which multiple data may be processed simultaneously. A much better application would be solving an n-body problem, as SSE allows simultaneous computation of all vectors in a three-dimensional space, or implementing a multilayer perceptron (one of many types of artificial neural networks), as it would make it possible to process several neurons at once or, if the network is small enough, host them all in the available XMM registers without the need to move data to and from memory. It is also worth keeping in mind that procedures that seem quite complex, when implemented with SSE, may still be faster than a single FPU instruction.

Now that we know about at least one technology that may make our life easier, we will learn about a feature with which assemblers can, if not simplify, then definitely ease the work of an Assembly developer: macro instructions. Similar to macros in C or any other programming language supporting such a feature, macro instructions can have a significantly positive impact, allowing us to replace a series of instructions with a single macro instruction, to iteratively and/or conditionally assemble or skip certain sequences, or even to create new instructions if the assembler does not support the instructions we need (this has never happened to me yet, but "never say never").
