Locking in MongoDB

Document-level and collection-level locking is mentioned throughout this chapter, as well as in several other chapters of this book, so it is important to understand how locking works and why it matters.

Database systems use locks to achieve ACID properties. When multiple read or write requests arrive in parallel, the data must be locked so that all readers and writers see consistent and predictable results.

MongoDB uses multi-granularity locking. The available granularity levels, from coarsest to finest, are as follows:

  • Global
  • Database
  • Collection
  • Document
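
You can observe lock activity at each of these levels through the locks section of the serverStatus command. The following is a minimal sketch using pymongo; it assumes a mongod running on localhost:27017, and the exact keys reported (for example, Global, Database, Collection, oplog) vary by MongoDB version and storage engine:

    from pymongo import MongoClient

    # Assumes a mongod listening on localhost:27017; adjust the URI as needed.
    client = MongoClient("mongodb://localhost:27017")

    # serverStatus reports lock statistics per granularity level under "locks".
    status = client.admin.command("serverStatus")

    for level, stats in status.get("locks", {}).items():
        # acquireCount is keyed by lock mode (see the mode list that follows).
        print(level, stats.get("acquireCount", {}))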

The lock modes that MongoDB (and many other databases) uses are the following, from least to most restrictive:

  • IS: Intent shared
  • IX: Intent exclusive
  • S: Shared
  • X: Exclusive
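
In MongoDB's diagnostic output (for example, serverStatus, currentOp, and the logs), these modes are reported as single-letter codes: lowercase letters for intent locks and uppercase letters for full locks. A small reference mapping, written in Python for convenience:

    # Single-letter lock mode codes as they appear in MongoDB diagnostics,
    # mapped to the modes described above.
    LOCK_MODES = {
        "r": "Intent shared (IS)",
        "w": "Intent exclusive (IX)",
        "R": "Shared (S)",
        "W": "Exclusive (X)",
    }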

If we take an S or X lock at some granularity level, then all coarser (higher) levels must first be locked with an intent lock of the matching type: IS before S, and IX before X.

Other rules for locks are as follows:

  • A single database can simultaneously be locked in IS and IX mode
  • An exclusive (X) lock cannot coexist with any other lock
  • A shared (S) lock can coexist only with IS locks and other S locks
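
The following conceptual sketch (plain Python, not MongoDB source code) ties the two ideas together: the compatibility relation between the four modes, and the rule that an S or X lock at one level implies intent locks at every coarser level. The level and function names are illustrative only:

    # Granularity hierarchy, from coarsest to finest.
    HIERARCHY = ["global", "database", "collection", "document"]

    # Classic multi-granularity compatibility: each mode maps to the set of
    # modes it can coexist with. IS coexists with everything except X, IX with
    # the intent modes, S with IS and S, and X with nothing.
    COMPATIBLE = {
        "IS": {"IS", "IX", "S"},
        "IX": {"IS", "IX"},
        "S": {"IS", "S"},
        "X": set(),
    }

    def locks_to_acquire(level, mode):
        """Locks needed for an S or X lock at `level`: matching intent locks
        on every coarser level, then the lock itself."""
        intent = "IS" if mode == "S" else "IX"
        ancestors = HIERARCHY[:HIERARCHY.index(level)]
        return [(ancestor, intent) for ancestor in ancestors] + [(level, mode)]

    def can_grant(requested, held_modes):
        """A request is granted only if it is compatible with every held lock."""
        return all(held in COMPATIBLE[requested] for held in held_modes)

    # An exclusive write to a single document implies IX locks all the way up:
    print(locks_to_acquire("document", "X"))
    # [('global', 'IX'), ('database', 'IX'), ('collection', 'IX'), ('document', 'X')]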

Reads and writes requesting locks are generally queued in first-in, first-out (FIFO) order. The only optimization MongoDB makes is to reorder the queue around the request that is about to be serviced: any requests already in the queue that are compatible with it are serviced alongside it, ahead of incompatible ones.

What this means is that if the next request to be serviced is IS(1) and the current queue is IS(1)->IS(2)->X(3)->S(4)->IS(5), then MongoDB will effectively reorder it to IS(1)->IS(2)->S(4)->IS(5)->X(3), servicing the compatible IS and S requests ahead of the exclusive X(3) request.

If, while IS(1) is being serviced, new IS or S requests come in, say IS(6) and S(7) in that order, they are still added at the end of the queue and will not be considered until the X(3) request has completed.

Our new queue will now look like IS(2)->S(4)->IS(5)->X(3)->IS(6)->S(7).
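
The following toy simulation (again plain Python, not MongoDB source code) reproduces the behaviour just described: the request at the head of the queue is granted together with every already-queued request that is compatible with the whole granted batch, while incompatible requests, and anything that arrives later, keep their place behind it:

    # Same compatibility table as in the earlier sketch.
    COMPATIBLE = {
        "IS": {"IS", "IX", "S"},
        "IX": {"IS", "IX"},
        "S": {"IS", "S"},
        "X": set(),
    }

    def next_batch(queue):
        """Split the current queue into (requests granted together, requests
        still waiting). Only requests already in the queue are considered."""
        granted, waiting = [], []
        for mode, request_id in queue:
            if all(mode in COMPATIBLE[granted_mode] for granted_mode, _ in granted):
                granted.append((mode, request_id))
            else:
                waiting.append((mode, request_id))
        return granted, waiting

    queue = [("IS", 1), ("IS", 2), ("X", 3), ("S", 4), ("IS", 5)]
    granted, queue = next_batch(queue)
    print(granted)  # [('IS', 1), ('IS', 2), ('S', 4), ('IS', 5)]
    print(queue)    # [('X', 3)]

    # IS(6) and S(7) arrive only now, so they queue up behind X(3) and are not
    # serviced until X(3) has completed.
    queue += [("IS", 6), ("S", 7)]
    print(queue)    # [('X', 3), ('IS', 6), ('S', 7)]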

This is done to prevent starvation of the X(3) request, which would otherwise keep getting pushed back in the queue as new IS and S requests arrived. It is important to understand the difference between intent locks and the locks themselves. The WiredTiger storage engine only uses intent locks at the global, database, and collection levels.

When a new request comes in, WiredTiger acquires the intent locks at these higher levels (that is, collection, database, and global) and checks them against the compatibility rules listed previously.

MongoDB first acquires intent locks on all ancestors before acquiring the lock on the document itself. This way, when a new request comes in, it can quickly determine whether it can be serviced by looking only at the coarser-grained locks.
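
In practice, you can see this interaction with the currentOp command, which reports, for each in-progress operation, the locks it holds and whether it is waiting for a lock at some level. The following is a minimal sketch with pymongo, again assuming a mongod on localhost:27017 and sufficient privileges; field names such as waitingForLock and locks follow currentOp's output and vary somewhat by version:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")

    # The currentOp admin command lists in-progress operations under "inprog".
    current = client.admin.command("currentOp")

    for op in current.get("inprog", []):
        if op.get("waitingForLock"):
            # "locks" shows the granularity levels and modes involved.
            print(op.get("opid"), op.get("op"), op.get("ns"), op.get("locks"))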

WiredTiger uses S and X locks only at the document level. The one exception is for typically infrequent and/or short-lived operations that involve multiple databases; these still require a global lock, similar to MongoDB's behavior in pre-2.x versions.

Administrative operations, such as dropping a collection, still require an exclusive database lock.
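
For example, dropping a collection with pymongo is a one-liner, but while it runs it holds that exclusive database lock and therefore briefly blocks other operations on the same database (the database and collection names below are placeholders):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")

    # Dropping a collection is an administrative operation; per the text above
    # it takes an exclusive (X) lock on the "test" database while it runs.
    client["test"].drop_collection("events")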

MMAPv1, as explained previously, uses collection-level locks. An operation within a single collection, whether it touches one document or many, will still lock the entire collection. This is the main reason why WiredTiger is the preferred storage engine for all new deployments.
