When to use parallel processing

Concurrency means pausing one task to work on another. With a coroutine, for example, the function suspends execution and waits for more input before continuing. In this sense, you can have several operations pending at the same time; the computer simply switches to whichever one is ready to run.
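
As a minimal sketch of that pause-and-resume behaviour, consider a generator-based coroutine in Python; the names here are illustrative rather than taken from any particular library:

```python
def running_total():
    """A coroutine: suspends at each yield and waits for more input."""
    total = 0
    while True:
        value = yield total   # execution stops here until send() is called
        total += value

tally = running_total()
next(tally)            # prime the coroutine, advancing it to the first yield
print(tally.send(5))   # resumes the function, prints 5, then suspends again
print(tally.send(3))   # resumes again, prints 8
```

Each send() resumes the function exactly where it left off, which is what lets many such operations sit suspended while the computer works on one of them.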

This is the basis of multitasking in operating systems: a single CPU can handle multiple jobs by rapidly switching between them. In simple terms, concurrency means that multiple threads make progress over a given period of time. In contrast, parallelism means the system runs two or more threads simultaneously; that is, multiple threads are executing at the same point in time. This can only occur when more than one CPU core is available.
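
As an illustrative sketch of the distinction, the two threads below are concurrent: the interpreter switches between them, so their output interleaves even on a single core. True parallelism would require the two tasks to execute on separate cores at the same instant; note that in CPython the global interpreter lock keeps pure-Python threads from running in parallel in any case.

```python
import threading
import time

def worker(name):
    for i in range(3):
        print(f"{name}: step {i}")
        time.sleep(0.01)  # give up the CPU, inviting a switch to the other thread

threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```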

The benefit of parallelizing code comes from doing more with less; in this case, doing more work in less wall-clock time. Before multi-core systems, the only real way to improve performance was to increase the clock speed, allowing the system to do more work in a given amount of time. As thermal limitations became a problem at higher CPU frequencies, manufacturers found that adding more cores at a lower frequency could provide similar performance without overheating the system, while also reducing energy usage, something vital in portable devices. Depending on the task, splitting a job into multiple smaller jobs can actually be quicker on a multi-core device than increasing the clock speed.
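
As a rough sketch of splitting a job into smaller jobs, the standard-library multiprocessing module can farm sub-ranges of a computation out to a pool of worker processes, each of which may run on its own core; the work function and ranges below are illustrative placeholders:

```python
import multiprocessing

def partial_sum(bounds):
    """One smaller job: sum the squares over a sub-range."""
    start, stop = bounds
    return sum(n * n for n in range(start, stop))

if __name__ == "__main__":
    # Split one big range into four smaller jobs.
    chunks = [(i * 2_500_000, (i + 1) * 2_500_000) for i in range(4)]
    with multiprocessing.Pool() as pool:
        total = sum(pool.map(partial_sum, chunks))  # chunks may run in parallel
    print(total)
```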

The biggest problem with writing parallel programs is figuring out when parallelism will actually help. Not all tasks need the boost, and sometimes parallelizing makes things slower, because the overhead of creating and coordinating workers outweighs the savings. While certain types of problems can be analyzed up front and a determination made, in this author's experience, you sometimes just have to try it out and see what happens.
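
One way to "try it out and see" is simply to time the serial and parallel versions of the same job and compare; this sketch reuses the illustrative partial-sum job from above, and on small inputs the overhead of starting worker processes will often make the parallel run the slower one:

```python
import multiprocessing
import time

def partial_sum(bounds):
    start, stop = bounds
    return sum(n * n for n in range(start, stop))

if __name__ == "__main__":
    chunks = [(i * 2_500_000, (i + 1) * 2_500_000) for i in range(4)]

    t0 = time.perf_counter()
    serial = sum(partial_sum(c) for c in chunks)
    t1 = time.perf_counter()

    with multiprocessing.Pool() as pool:
        parallel = sum(pool.map(partial_sum, chunks))
    t2 = time.perf_counter()

    assert serial == parallel          # same answer either way
    print(f"serial:   {t1 - t0:.3f}s")
    print(f"parallel: {t2 - t1:.3f}s")
```

Whether the parallel version wins depends on how large each chunk is relative to the cost of creating workers and moving data between processes, which is exactly why measuring is often the only reliable answer.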
