Locking concepts

There are several forms of synchronization in software; one of the most commonly encountered, and indeed one that we shall be working with quite a bit, is called locking. A lock, in programming terms, and as seen by the application developer, is ultimately a data structure instantiated as a variable.

When one requires a critical section, one simply encapsulates the code of the critical section between a lock operation and the corresponding unlock operation. (For now, don't worry about the code-level API details; we shall cover those later. Here, we are just focusing on getting the concepts right.)

Let's represent the critical section, along with the synchronization mechanism—a lock—using a diagram (a superset of the preceding Figure 3):

Fig 5: Critical section with locking

The basic premise of a lock is as follows:

  • Only one thread can hold or own a lock at any given point in time; that thread is the owner of the lock.
  • Upon the unlock, when more than one thread attempts to get or take the lock, the kernel guarantees that exactly one of them will get it.
  • The thread that gets the lock is called the winner (or the lock owner); the threads that tried for but did not get the lock are called the losers.

So, visualize this: say that we have three threads, A, B, and C, running in parallel on different CPU cores, all attempting to take a lock. The guarantee of the lock is that exactly one thread gets it—let's say that thread C wins, taking the lock (thus thread C is the winner or owner of the lock); threads A and B are the losers. What happens after that?

  • The winner thread sees the lock operation as a non-blocking call; it continues into the critical section (probably working on some shared writable resource, such as global data).
  • The loser threads see the lock operation as a blocking call; they now block (wait), but on what exactly? (Recall that a blocking call is one in which we wait upon an event occurring and get unblocked once it occurs.) Well, the unlock operation, of course!
  • The winner thread, upon (atomically) completing the critical section, performs the unlock operation.
  • Either thread A or B will now get the lock, and the whole sequence repeats.

In a more generic manner, we can now understand it as follows: if N threads are in competition for a lock, the guarantee of the lock operation (by the OS) is that exactly one thread—the winner—will get the lock. So, we shall have one winner and N-1 losers. The winner thread proceeds into the code of the critical section; in the interim, all the N-1 loser threads wait (block) upon the unlock operation. At some point in the future (hopefully soon), the winner performs the unlock; this re-triggers the whole sequence: the N-1 losers again compete for the lock, yielding one winner and N-2 losers. The new winner proceeds into the code of the critical section while, in the interim, the N-2 loser threads wait (block) upon the unlock operation, and so on, until all the loser threads have become winners and have hence run the code of the critical section.
