On a VM with 1 GB RAM, one CPU core and sequential make -j1

We configure the guest VM to have only one processor, clean up the build directory, and proceed once more, but this time with a sequential build (by specifying make -j1):

$ cd <linux-4.17-kernel-src-dir>
$ perf stat make V=0 -j1 ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- all
scripts/kconfig/conf --syncconfig Kconfig
SYSHDR arch/arm/include/generated/uapi/asm/unistd-common.h
SYSHDR arch/arm/include/generated/uapi/asm/unistd-oabi.h
SYSHDR arch/arm/include/generated/uapi/asm/unistd-eabi.h
CHK include/config/kernel.release
UPD include/config/kernel.release
WRAP arch/arm/include/generated/uapi/asm/bitsperlong.h

[...] << lots of output >>

CC crypto/hmac.mod.o
LD [M] crypto/hmac.ko
CC crypto/jitterentropy_rng.mod.o
LD [M] crypto/jitterentropy_rng.ko
CC crypto/sha256_generic.mod.o
LD [M] crypto/sha256_generic.ko
CC drivers/video/backlight/lcd.mod.o
LD [M] drivers/video/backlight/lcd.ko

Performance counter stats for 'make V=0 -j1 ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- all':

1031535.713905 task-clock (msec) # 0.837 CPUs utilized
1,78,172 context-switches # 0.173 K/sec
0 cpu-migrations # 0.000 K/sec
2,13,29,573 page-faults # 0.021 M/sec
<not supported> cycles
<not supported> instructions
<not supported> branches
<not supported> branch-misses

1232.146348757 seconds time elapsed
$

The build took a total time of approximately 1232 seconds (about 20.5 minutes), which is nearly twice as long as the previous (parallel) build!

You might well ask: if the build with a single process took around 20 minutes and the same build with multiple processes took approximately half that time, why use multithreading at all? Multiprocessing seems to be just as good!

No, please think it through: our very first example on process versus thread creation/destruction taught us that spawning (and terminating) processes is much slower than doing the same with threads. That remains a key advantage that many applications exploit: threads are far more efficient than processes in terms of creation and destruction.
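To get a feel for this yourself, here is a minimal sketch (not part of the book's code; the file name and iteration count are arbitrary) that crudely times creating and destroying 500 processes versus 500 threads, one after the other:

/* fork_vs_thread.c : a crude comparison of process versus thread
 * creation/destruction cost.
 * Build: gcc -O2 fork_vs_thread.c -o fork_vs_thread -lpthread
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>
#include <pthread.h>
#include <time.h>

#define NITER 500		/* number of workers to spawn; arbitrary */

static void *thread_worker(void *arg)
{
	return NULL;		/* do nothing, just exit */
}

static double elapsed_ms(struct timespec *a, struct timespec *b)
{
	return (b->tv_sec - a->tv_sec) * 1000.0 +
	       (b->tv_nsec - a->tv_nsec) / 1000000.0;
}

int main(void)
{
	struct timespec start, end;
	int i;

	/* Spawn and reap NITER child processes, one after the other */
	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < NITER; i++) {
		pid_t pid = fork();
		if (pid < 0) {
			perror("fork");
			exit(EXIT_FAILURE);
		}
		if (pid == 0)		/* child: do nothing, just exit */
			_exit(EXIT_SUCCESS);
		waitpid(pid, NULL, 0);	/* parent: reap the child */
	}
	clock_gettime(CLOCK_MONOTONIC, &end);
	printf("%d fork/exit/wait cycles     : %8.2f ms\n", NITER, elapsed_ms(&start, &end));

	/* Spawn and join NITER threads, one after the other */
	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < NITER; i++) {
		pthread_t t;
		if (pthread_create(&t, NULL, thread_worker, NULL) != 0) {
			perror("pthread_create");
			exit(EXIT_FAILURE);
		}
		pthread_join(t, NULL);
	}
	clock_gettime(CLOCK_MONOTONIC, &end);
	printf("%d thread create/join cycles : %8.2f ms\n", NITER, elapsed_ms(&start, &end));

	return 0;
}

On most systems, the thread loop finishes noticeably faster than the fork loop; the exact numbers, of course, depend on your hardware and kernel.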

In a dynamic, unpredictable environment, where we do not know in advance how much work will be required, the ability to quickly create worker threads (and quickly terminate them) is very important. Think of the famous Apache web server: it's multithreaded by default (via its mpm_worker module) in order to serve client requests quickly. In a similar fashion, the modern NGINX web server uses thread pools (those interested can find more in the Further reading section on the GitHub repository).
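To make the idea concrete, here is a tiny, generic thread-pool sketch (purely illustrative; it is not how Apache or NGINX implement their pools, and the names, sizes, and file name are made up; error checking is kept minimal): a fixed set of worker threads is created once, up front, and then reused to serve incoming "requests":

/* pool_sketch.c : a minimal fixed-size thread-pool sketch.
 * Build: gcc -O2 pool_sketch.c -o pool_sketch -lpthread
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>

#define NWORKERS  4		/* size of the pool; arbitrary for the demo  */
#define NREQUESTS 16		/* number of fake "client requests" to serve */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  work_available = PTHREAD_COND_INITIALIZER;
static int pending;		/* outstanding requests */
static int shutting_down;

static void *worker(void *arg)
{
	long id = (long)arg;

	for (;;) {
		pthread_mutex_lock(&lock);
		while (pending == 0 && !shutting_down)
			pthread_cond_wait(&work_available, &lock);
		if (pending == 0 && shutting_down) {
			pthread_mutex_unlock(&lock);
			break;		/* no more work; leave the pool */
		}
		pending--;
		pthread_mutex_unlock(&lock);

		/* "Serve" the request; real code would do socket I/O here */
		printf("worker %ld: handling a request\n", id);
		usleep(10000);
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NWORKERS];
	long i;

	/* Pay the thread-creation cost once, up front */
	for (i = 0; i < NWORKERS; i++)
		pthread_create(&tid[i], NULL, worker, (void *)i);

	/* "Client requests" arrive; just bump a counter and wake a worker */
	for (i = 0; i < NREQUESTS; i++) {
		pthread_mutex_lock(&lock);
		pending++;
		pthread_cond_signal(&work_available);
		pthread_mutex_unlock(&lock);
	}

	/* Tell the pool to drain any remaining work and exit */
	pthread_mutex_lock(&lock);
	shutting_down = 1;
	pthread_cond_broadcast(&work_available);
	pthread_mutex_unlock(&lock);

	for (i = 0; i < NWORKERS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}

The point is that the thread-creation cost is paid only once, at startup; thereafter, serving a request is merely a matter of waking an already existing worker.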
