.NET manual synchronization provides a rich set of synchronization locks familiar to any veteran Win32 programmer, such as monitors, events, mutexes, semaphores, and interlocks. .NET also introduces some language features that automate the use of locks, and a new lock type (the reader/writer lock). Manual synchronization is at the other end of the spectrum from automatic synchronization, on a number of dimensions. First, whereas a synchronization domain is in effect a mega-macro lock, manual synchronization offers fine-grained control over what is locked: you can control access to an object, its individual members, or even a single line of code. As a result, you can potentially improve overall application performance and throughput.
You can use manual synchronization on any .NET component, whether context-bound or not. Unlike with automatic synchronization, when you use manual synchronization, as the name implies, you explicitly manage the locking and unlocking of the lock. Consequently, with this approach the Pandora's box of synchronization is wide open, and unless you carefully design your manual-synchronization mechanism you can introduce deadlocks and other multithreading defects, such as object state corruption and hard-to-resolve race conditions. In this section, I will provide you (where appropriate) with design guidelines and rules of thumb that will allow you to apply manual synchronization productively.
A Monitor is a lock designed to work with .NET reference types only. The Monitor static class associates a lock with an object. While one thread owns the lock associated with that object, no other thread can acquire the lock. Monitor provides only static methods, accepting the target object as a parameter. The two most commonly used methods of Monitor are Enter() and Exit():
public static class Monitor
{
    public static void Enter(object obj);
    public static void Exit(object obj);
    //Other methods
}
Enter() acquires a lock for an object and locks it; Exit() unlocks it. A client that wants to access a non-thread-safe object calls Enter(), specifying the object. The client uses the object and then calls Exit() to allow other threads to access the object, as shown in Example 8-3.
Example 8-3. Using Monitor to control access to an object
public class MyClass
{
    public void DoSomething()
    {...}
    //Class members
}
MyClass obj;
//Some code to initialize obj;

//This section is now thread-safe:
Monitor.Enter(obj);
obj.DoSomething();
Monitor.Exit(obj);
Any client, on any thread, can call Enter() to try to access the object. If the monitor is owned by one thread and a second thread tries to obtain it, Enter() blocks the second thread until the first thread calls Exit(). If there is more than one pending thread, they are placed in a queue called the lock queue and served from the queue in order. There is no harm in calling Enter() multiple times on the same object, provided you make a matching number of calls to Exit() to release the lock. However, only the thread that called Enter() can make the corresponding calls to Exit(). Trying to call Exit() on a different thread results in an exception of type SynchronizationLockException.
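The reentrancy just described can be sketched as follows; this is my own minimal example (not from the book), showing the same thread acquiring the lock twice and releasing it with a matching number of Exit() calls:

```csharp
using System.Threading;

public static class ReentrancyDemo
{
    static readonly object s_Lock = new object();

    public static int Run()
    {
        Monitor.Enter(s_Lock);
        Monitor.Enter(s_Lock);   //Same thread: does not block
        int depth = 2;
        Monitor.Exit(s_Lock);
        Monitor.Exit(s_Lock);    //The lock is only now released
        return depth;
    }
}
```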
Monitor also provides the TryEnter() Boolean method, which allows a client to try to acquire the lock without being blocked:
public static bool TryEnter(object obj);
If the lock is available, TryEnter() locks it and returns true. If the lock is owned by another thread, TryEnter() returns immediately to its caller, with a return value of false.
TryEnter() has two other overloaded versions that allow the client to specify a timeout to wait to acquire the lock if it's owned by another thread:

public static bool TryEnter(object obj, int millisecondsTimeout);
public static bool TryEnter(object obj, TimeSpan timeout);
TryEnter() is of little use, because the calling client itself should be able to deal with an object that is unavailable. TryEnter() is provided for the advanced and esoteric cases of high-throughput threads that can't afford to block on nice-to-have operations.

I also recommend avoiding TryEnter() because of a general design guideline when dealing with locks: always postpone acquiring a lock until the last possible moment, and release it as soon as possible. This reduces the likelihood of a deadlock and improves overall throughput. If you have something to do instead of acquiring a lock, do it, and obtain the lock after that.
You can also use the Monitor class to provide thread-safe access to static class methods or properties by giving it the type to lock instead of a particular instance, as shown in Example 8-4.
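Example 8-4 itself is not reproduced in this excerpt; the following is my own minimal sketch of the idea, passing the type rather than an instance to Monitor so that access to the static state is serialized:

```csharp
using System.Threading;

public class CounterClass
{
    static int m_Counter;

    public static int Increment()
    {
        Monitor.Enter(typeof(CounterClass));
        try
        {
            return ++m_Counter;
        }
        finally
        {
            Monitor.Exit(typeof(CounterClass));
        }
    }
}
```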
You must call Exit() on the object you locked using Enter(), even if an exception has been thrown; otherwise, the object will not be accessible by clients on other threads. Example 8-5 shows the same code as Example 8-3, this time with proper error handling. Placing the call to Exit() in the finally statement ensures that Exit() is called regardless of whether the try statement encounters an exception.
Example 8-5. Using Monitor with error handling
public class MyClass
{
    public void DoSomething()
    {...}
    //Class members
}
MyClass obj;
//Some code to initialize obj;

Monitor.Enter(obj);
try
{
    obj.DoSomething();
}
finally
{
    Monitor.Exit(obj);
}
To ease the task of using Monitor in conjunction with error handling, C# provides the lock statement, which causes the compiler to automatically generate calls to Enter() and Exit() in a try/finally statement. For example, when you write this code:
MyClass obj;
//Some code to initialize obj;
lock(obj)
{
    obj.DoSomething();
}
the compiler replaces the lock statement with the code shown in Example 8-5 instead. Visual Basic 2005 offers similar compiler support via the SyncLock statement:
Public Class MyVBClass
    Sub DoSomething()
        ...
    End Sub
    'Class members
End Class

Dim obj As MyVBClass
'Some code to initialize obj
SyncLock (obj)
    obj.DoSomething()
End SyncLock
As when using raw Monitor, you can use the lock statement to protect a static method or access to a static member by providing a type instead of an instance:
public class MyClass
{
static public void DoSomething()
    {...}
    //Static class members
}
lock(typeof(MyClass))
{
    MyClass.DoSomething();
}
In cases where you need to lock multiple objects, avoid nesting or stacking multiple lock statements:
MyClass obj1 = new MyClass();
MyClass obj2 = new MyClass();
MyClass obj3 = new MyClass();

lock(obj1)
lock(obj2)
lock(obj3)
{
    //Use the objects
}
The compiler will allow you to stack the lock statements, but this is inherently dangerous: if another thread is also using multiple lock statements to acquire the same objects, but in a different order, you will end up with a deadlock. When multiple locking is required, use WaitHandle (described later).
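To preview that technique, here is a sketch of mine (not from the book) of acquiring multiple locks as one atomic operation with WaitHandle.WaitAll(), with Mutex objects standing in for the per-object locks:

```csharp
using System.Threading;

public static class MultiLockDemo
{
    static readonly Mutex s_Lock1 = new Mutex();
    static readonly Mutex s_Lock2 = new Mutex();

    public static bool UseBothResources()
    {
        WaitHandle[] locks = { s_Lock1, s_Lock2 };
        //All-or-nothing acquisition: no partial ordering to deadlock on
        bool acquired = WaitHandle.WaitAll(locks);
        if (acquired)
        {
            //Use both resources here, then release both locks
            s_Lock1.ReleaseMutex();
            s_Lock2.ReleaseMutex();
        }
        return acquired;
    }
}
```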
Using a Monitor object to protect an object from concurrent access, as shown in Example 8-5, is asking for trouble. The reason is simple: using Monitor is done at the discretion of the client developer. Nothing prevents a client from ignoring other threads and accessing an object directly, without using a lock. Of course, such an action will cause a conflict if another thread is currently accessing the object, even if that thread was disciplined enough to use a lock. To guard against undisciplined access, encapsulate the lock inside the component and use Monitor in every public method and property, passing this as the object to lock. Encapsulating the lock is the classic object-oriented solution for thread-safe objects. Example 8-6 demonstrates this solution.
Example 8-6. Encapsulating Monitor to promote loose coupling and thread safety
public class MyClass
{
    /* Class members */
    public void DoSomething()
    {
        lock(this)
        {
            //Do something
        }
    }
}
MyClass obj;
//Some code to initialize obj;
obj.DoSomething();
Encapsulating the lock inside the method promotes loose coupling between the clients and the object, because now the clients don’t need to care about the object’s synchronization needs. It also promotes thread safety, because the method (or property) access is safe by definition.
You can also encapsulate the lock statement for static methods or properties:
public class MyClass
{
static public void DoSomething()
    {
        lock(typeof(MyClass))
        {
            //Do something
        }
    }
    //Static member variables
}
In general, exposing public class member variables directly is inadvisable, and it's doubly so when multithreading is involved, because there is no way to encapsulate the lock. At the very least, you should use properties that call lock(this) to access member variables.
Note that when you use the lock statement in a method that returns a value (such as a property), there is no need to use a temporary variable, because the finally block executes after the return instruction:
public class MyClass
{
    int m_Number;
    public int Number
    {
        get
        {
            lock(this)
            {
                return m_Number;
            }
        }
    }
}
You should also provide for thread-safe, encapsulated access to structures. The problem with structures is that they are value types, and therefore you cannot pass a structure itself to Monitor. To overcome this problem, add to your structure a do-nothing object member variable and use that as the reference type for the lock statement, as shown in Example 8-7.
Example 8-7. Encapsulated thread-safe access to a structure using a member object
public interface ICloneable<T>
{
    T Clone();
}
public struct MyStruct : ICloneable<MyStruct>
{
    int m_Number;
    object m_Lock;

    public MyStruct(int number)
    {
        m_Number = number;
        m_Lock = new object();
    }
    public int Number
    {
        set
        {
            Debug.Assert(m_Lock != null);
            lock(m_Lock)
            {
                m_Number = value;
            }
        }
        get
        {
            Debug.Assert(m_Lock != null);
            lock(m_Lock)
            {
                return m_Number;
            }
        }
    }
    public MyStruct Clone()
    {
        MyStruct clone = new MyStruct(Number);
        return clone;
    }
}
MyStruct myStruct = new MyStruct(3);
//This is thread-safe and encapsulated access
myStruct.Number = 4;
A problem specific to this use of safe structures is that all copies of the structure will share the lock. In the following example, locking struct1 will also lock struct2:
MyStruct struct1 = new MyStruct(3);
MyStruct struct2 = struct1;
To get around that, you need to make a deep copy of the structure when assigning it. Since you cannot overload the assignment operator, you can clone the structure instead:
MyStruct struct1 = new MyStruct(3);
MyStruct struct2 = struct1.Clone();
This is why MyStruct in Example 8-7 implements the interface ICloneable&lt;T&gt;.
The next problem is that you can instantiate a struct using its default constructor, without going through the parameterized one that allocates the lock:
MyStruct myStruct = new MyStruct();
To compensate for that, the properties assert that the lock is valid:
Debug.Assert(m_Lock != null);
Given the difficulties in providing problem-free thread-safe structures, I believe that structures should be used predominantly in a single-threaded environment, and that you should consider the use of classes instead of structures when confronted with these problems.
You can use a Monitor to provide for thread-safe access to members of a generic type parameter:
public class MyClass<T>
{
    T m_T;
    public void SomeMethod()
    {
        lock(m_T)
        {
            //Use m_T
        }
    }
}
However, in general you should not use a Monitor on generic type parameters. Monitors can be used only with reference types, and when you use generic types the compiler cannot tell in advance whether you will provide a reference or a value type as the type parameter. It will let you use the Monitor, but if you provide a value type as the type parameter, the Monitor.Enter() call will have no effect at runtime. The only times when you can safely lock the generic type parameter are when you can constrain it to be a reference type:
public class MyClass<T> where T : class
{...}

or to have a default constructor:

public class MyClass<T> where T : new()
{...}

or to derive from a base class:

public class SomeClass
{...}
public class MyClass<T> where T : SomeClass
{...}
Consequently, when using generics it is much better to lock the entire object instead:
public class MyClass<T>
{
    T m_T;
    public void SomeMethod()
    {
        lock(this)
        {...}
    }
}
This also has the benefit of reducing the likelihood of a deadlock in the event that you need to lock additional members—locking the whole object grants safe access to all members in one atomic operation.
When you use Monitor, you can protect individual code sections of a method and thus have a fine level of control over when you lock the object for access. You need to lock the object only when you access its member variables. All other code that uses local stack-allocated variables is thread-safe by definition:
public class MyClass
{
    int m_Number = 0;
    public void DoSomething()
    {
        //This loop doesn't need to lock the object
        for(int i = 0; i < 10; i++)
        {
            Trace.WriteLine(i);
        }
        //Lock the object because state is being accessed
        lock(this)
        {
            Trace.WriteLine("Number is " + m_Number);
        }
    }
}
As a result, you can interweave locked code with unlocked code as required. This is called fragmented locking. However, fragmented locking is a serious liability, because it results in error-prone code. During code maintenance, someone might add access to a member variable (either directly or by using a method) in a non-synchronized code section, which can result in object state corruption. Consequently, you should avoid fragmented locking and opt instead for locking the object for the duration of the method:
public class MyClass
{
    int m_Number = 0;
    public void DoSomething()
    {
        lock(this)
        {
            for(int i = 0; i < 10; i++)
            {
                Trace.WriteLine(i);
            }
            Trace.WriteLine("Number is " + m_Number);
        }
    }
}
Because this programming model is so common, .NET has built-in compiler support for it, called synchronized methods. The MethodImpl method attribute, defined in the System.Runtime.CompilerServices namespace, accepts an enum of type MethodImplOptions. One of the enum values is MethodImplOptions.Synchronized. When the MethodImpl attribute is applied to a method with that enum value, the compiler instructs the .NET runtime to lock the object on method entry and unlock it on exit. This is semantically equivalent to encasing the method code in a lock statement. For example, consider this method definition:
using System.Runtime.CompilerServices;

public class MyClass
{
    [MethodImpl(MethodImplOptions.Synchronized)]
    public void DoSomething()
    {
        /* Method code */
    }
    //Class members
}
This method is semantically identical to the following:
public class MyClass
{
    public void DoSomething()
    {
        lock(this)
        {
            /* Method code */
        }
    }
    //Class members
}
You can use the MethodImpl method attribute on static methods as well, and even on properties:
using System.Runtime.CompilerServices;

public class MyClass
{
    int m_Number = 0;
    int Number
    {
        [MethodImpl(MethodImplOptions.Synchronized)]
        get
        {
            return m_Number;
        }
        [MethodImpl(MethodImplOptions.Synchronized)]
        set
        {
            m_Number = value;
        }
    }
}
The difference between synchronized methods and synchronization domains is that synchronized methods can lead to deadlocks. Imagine two objects, each servicing a client on a different thread and each having a reference to the other. If they try to access each other, as in Figure 8-2, the result is a deadlock. Had the two objects been part of a synchronization domain, the first thread to access one of the objects would have locked the other object as well, thereby avoiding a deadlock.
In some cases, after acquiring the lock associated with an object, you may want to release the lock and wait until another thread has accessed the object. Once the other thread is done with the object it can then signal your thread, at which point you want to reacquire the lock and continue executing.
To allow for this, the Monitor class provides the Wait() method:
public static bool Wait(object obj);
Wait() releases the lock and blocks the calling thread until another thread signals it, using the Pulse() or PulseAll() methods of Monitor described next. You can call Wait() on an object only after you have locked it by calling Monitor.Enter() on it, and only within that synchronized code section:
public void LockAndWait(object obj)
{
    lock(obj)
    {
        //Do some work, then wait for another thread to do its work
        Monitor.Wait(obj);

        /* This code is executed only after
           the signaling thread releases the lock */
    }
}
While your thread is in the call to Wait(), other threads can call Enter() on the object and then call Wait() as well, so you can end up with multiple threads all waiting for a signal. All these pending threads are placed in a dedicated queue associated with the lock, called the wait queue. To unlock a waiting thread, a different thread must acquire the lock and then use the Pulse() or PulseAll() methods of Monitor:
public static void Pulse(object obj);
public static void PulseAll(object obj);
You can call Pulse() on an object only if you own its monitor:
public void LockAndPulse(object obj)
{
    lock(obj)
    {
        //Do some work, then pulse one other thread to continue its work
        Monitor.Pulse(obj);

        /* This code is still the owner of the lock */
    }
}
Pulse() removes the first thread from the wait queue and adds it to the end of the lock queue (the same queue that handles multiple concurrent calls to Enter()). Only when the pulsing thread calls Exit() does the next thread in the lock queue get to run, and it may not necessarily be the thread that was pulsed out of the wait queue. If you want to pulse all the waiting threads out of the wait queue, use the PulseAll() method. When PulseAll() is called, all waiting threads are moved to the lock queue, where they continue to execute in order.
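The Wait()/Pulse() handshake just described can be sketched as a minimal producer/consumer hand-off. This example is my own (not from the book); note the while loop, which rechecks the condition after every wakeup:

```csharp
using System.Collections.Generic;
using System.Threading;

public class HandoffDemo
{
    readonly object m_Lock = new object();
    readonly Queue<int> m_Items = new Queue<int>();

    public void Produce(int item)
    {
        lock(m_Lock)
        {
            m_Items.Enqueue(item);
            Monitor.Pulse(m_Lock); //Move one waiter to the lock queue
        }
    }
    public int Consume()
    {
        lock(m_Lock)
        {
            while(m_Items.Count == 0) //Recheck after every wakeup
            {
                Monitor.Wait(m_Lock);
            }
            return m_Items.Dequeue();
        }
    }
}
```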
The Wait() method shown above blocks the calling thread indefinitely. However, there are four overloaded versions of the Wait() method that accept a timeout and return a Boolean value:
public static bool Wait(object obj, int millisecondsTimeout);
public static bool Wait(object obj, int millisecondsTimeout, bool exitContext);
public static bool Wait(object obj, TimeSpan timeout);
public static bool Wait(object obj, TimeSpan timeout, bool exitContext);
The Boolean value lets you know whether Wait() returned because the specified timeout expired (false) or because the lock was reacquired (true). Note that the Wait() version that doesn't accept a timeout also returns a Boolean value, but that value will always be true (that version should have had a void return value).
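A small sketch of mine (not from the book) illustrates checking the return value: with no other thread around to pulse the object, a short timeout expires and Wait() returns false:

```csharp
using System;
using System.Threading;

public static class WaitTimeoutDemo
{
    public static bool WaitBriefly()
    {
        object obj = new object();
        lock(obj)
        {
            //Nobody will ever pulse obj, so this call times out
            return Monitor.Wait(obj, TimeSpan.FromMilliseconds(10));
        }
    }
}
```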
The interesting parameter of the overloaded Wait() methods is bool exitContext. If Wait() is called from inside a synchronization domain, and you pass in true for exitContext, .NET exits the synchronization domain before blocking and waiting. This allows other threads to enter the synchronization domain. Once the Monitor object is signaled, the thread has to enter the domain and own the Monitor lock in order to run. The default behavior of the Wait() versions that don't take this parameter is not to exit the synchronization domain.
Note that you can still block indefinitely using one of the overloaded methods that accepts a TimeSpan parameter, by passing the Infinite static constant of the Timeout class:
Monitor.Wait(obj,Timeout.Infinite);
As explained earlier in this chapter, calling Thread.Interrupt() will unblock a thread from a Monitor call to Enter(), TryEnter(), or Wait() and throw an exception of type ThreadInterruptedException in the unblocked thread (see the section "Interrupting a waiting thread").
In Windows, you can synchronize access to data and objects using a waitable handle to a system-provided lock. You pass the handle to a set of Win32 API calls to block your thread or to signal other threads waiting on the handle. The WaitHandle class provides a managed-code representation of a native Win32 waitable handle. The WaitHandle class is defined as:
public abstract class WaitHandle : MarshalByRefObject, IDisposable
{
    public WaitHandle();
    public const int WaitTimeout;
    public SafeWaitHandle SafeWaitHandle
    {get; set;}
    public virtual void Close();
    public static bool WaitAll(WaitHandle[] waitHandles);
    public static bool WaitAll(WaitHandle[] waitHandles,
                               int millisecondsTimeout, bool exitContext);
    public static bool WaitAll(WaitHandle[] waitHandles,
                               TimeSpan timeout, bool exitContext);
    public static int WaitAny(WaitHandle[] waitHandles);
    public static int WaitAny(WaitHandle[] waitHandles,
                              int millisecondsTimeout, bool exitContext);
    public static int WaitAny(WaitHandle[] waitHandles,
                              TimeSpan timeout, bool exitContext);
    public virtual bool WaitOne();
    public virtual bool WaitOne(int millisecondsTimeout, bool exitContext);
    public virtual bool WaitOne(TimeSpan timeout, bool exitContext);
    public static bool SignalAndWait(WaitHandle toSignal,
                                     WaitHandle toWaitOn);
    public static bool SignalAndWait(WaitHandle toSignal,
                                     WaitHandle toWaitOn,
                                     int millisecondsTimeout,
                                     bool exitContext);
    public static bool SignalAndWait(WaitHandle toSignal,
                                     WaitHandle toWaitOn,
                                     TimeSpan timeout, bool exitContext);
}
WaitHandle either signals an event between one or more threads or protects a resource from concurrent access. WaitHandle is an abstract class, so you can't instantiate WaitHandle objects. Instead, you create a specific subclass of WaitHandle, such as a Mutex. This design provides a common base class for the different locks, so you can wait on a lock or a set of locks in a polymorphic manner, without caring about the actual types.
A WaitHandle object has two states: signaled and non-signaled. In the non-signaled state, any thread that tries to wait on the handle is blocked until the state of the handle changes to signaled. The waiting methods are defined in the WaitHandle base class, while the signaling operations are defined in the subclasses.
Finally, note that WaitHandle is derived from the class MarshalByRefObject. As you will see in Chapter 10, this allows you to pass WaitHandle objects as method parameters between app domains or even machines.
WaitHandle is used frequently in the .NET Framework, and some .NET types provide a WaitHandle object for you to wait on. For example, recall the IAsyncResult interface described in Chapter 7. IAsyncResult provides the AsyncWaitHandle property of type WaitHandle, which you can use to wait for one or more asynchronous method calls in progress.
WaitHandle provides two types of pure waiting methods, single-handle wait and multiple-handles wait, as well as one combination of signaling and waiting. The single-handle wait allows you to wait on a single handle using one of the overloaded WaitOne() methods. You have to instantiate a particular subclass of WaitHandle and then call WaitOne() on it. The thread calling WaitOne() is blocked until some other thread uses a specific method on the subclass to signal it. You'll see some examples of this later. The parameterless WaitOne() method blocks indefinitely, but you can use two of the overloaded versions to specify a timeout. If you do specify a timeout, and the timeout expires without the handle being signaled, WaitOne() returns false. You can also specify whether or not to exit a synchronization domain, as with the overloaded versions of the Wait() method of the Monitor class.
The WaitAll() and WaitAny() methods allow you to wait on a collection of WaitHandle objects. Both are static methods, so you need to separately create an array of waitable handle objects and then wait for all or any one of them to be signaled (using WaitAll() or WaitAny(), respectively). Like WaitOne(), the WaitAll() and WaitAny() methods by default wait indefinitely, unless you specify a timeout; again, you can also specify whether or not you want to exit the synchronization domain. The WaitAll() versions that accept a timeout return true if the handles were signaled before the timeout expired and false if the timeout expired without the handles being signaled. WaitAny() returns an integer index referencing the handle that was signaled. If the timeout expires before any of the handles WaitAny() is waiting for is signaled, WaitAny() returns WaitHandle.WaitTimeout.
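The index returned by WaitAny() can be used as in the following sketch of mine (not from the book), which waits on two handles and reports which one was signaled. It uses ManualResetEvent, a WaitHandle subclass (events are discussed later):

```csharp
using System;
using System.Threading;

public static class WaitAnyDemo
{
    public static int WhichSignaled()
    {
        ManualResetEvent first  = new ManualResetEvent(false);
        ManualResetEvent second = new ManualResetEvent(false);

        second.Set(); //Signal the second handle only

        WaitHandle[] handles = { first, second };
        //Returns the index of the signaled handle,
        //or WaitHandle.WaitTimeout if the timeout expires
        int index = WaitHandle.WaitAny(handles, TimeSpan.FromSeconds(1), false);
        return index;
    }
}
```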
WaitHandle also offers signal-and-wait methods, such as this one:
public static bool SignalAndWait(WaitHandle toSignal,WaitHandle toWaitOn);
These methods address the need to signal one handle while waiting on another, and to do so in one atomic operation. Without these methods there would be no sure way of performing these two tasks at one point in time, because between the two calls there could be a context switch. SignalAndWait() is instrumental in programming two threads to rendezvous at a particular point in time, as demonstrated later.
Once you're done with the handle, you should call Close() on it to close the underlying Windows handle. If you fail to do so, the handle will be closed during garbage collection.
WaitHandle has one interesting property, called SafeWaitHandle, defined as:
public SafeWaitHandle SafeWaitHandle{get; set;}
You can use SafeWaitHandle to retrieve the underlying Windows handle associated with the lock object and to find out whether the handle is closed, and you can even force a handle value yourself. SafeWaitHandle is provided for advanced interoperability scenarios with legacy code, in which you need to use a specific handle value that's obtained by some proprietary interoperation mechanism.
There are two key differences between using a Monitor to wait for and signal an object and using a waitable handle. First, Monitor can be used only on a reference type, whereas a waitable handle can synchronize access to value types as well as reference types. The second difference is the ability WaitHandle provides to wait on multiple objects as one atomic operation. When you are trying to wait for multiple objects, it's important to dispatch the wait request to the operating system as one atomic operation; as mentioned earlier, if this is done on an individual-lock basis, there's a possibility of a deadlock occurring if a second thread tries to acquire the same set of locks, but in a different order.
The Mutex is a WaitHandle-derived class that ensures mutual exclusion of threads from a resource or code section. The Mutex class is defined as:
public sealed class Mutex : WaitHandle
{
    public Mutex();
    public Mutex(bool initiallyOwned);
    public Mutex(bool initiallyOwned, string name);
    public Mutex(bool initiallyOwned, string name,
                 out bool createdNew);
    public static Mutex OpenExisting(string name);
    public void ReleaseMutex();
    //Security members
}
An instance of Mutex assigns one of two logical meanings to the handle state: owned or unowned. A mutex can be owned by exactly one thread at a time. To own a mutex, a thread must call one of the wait methods of its base class, WaitHandle. If the mutex instance is unowned, the thread gets ownership. If another thread owns the mutex, the calling thread is blocked until the mutex is released. Once the thread that owns the mutex is done with the resource, it calls the ReleaseMutex() method to set the mutex state to unowned, thus allowing other threads access to the resource associated with and protected by the mutex. Only the current owner of the mutex can call ReleaseMutex(). If a different thread tries to release the mutex, .NET throws an exception of type ApplicationException (although the exception type SynchronizationLockException would probably have been a more consistent choice).
If more than one thread tries to acquire the mutex, the pending callers are placed in a queue and served one at a time, in order. If the thread that currently owns the mutex tries to acquire it by making additional waiting calls, it isn't blocked, but it should then make a matching number of calls to ReleaseMutex(). If the thread that owns the mutex terminates without releasing it, the mutex is considered abandoned, and .NET releases the mutex automatically. If another thread then tries to wait for an abandoned mutex, .NET throws an exception of type AbandonedMutexException.
To ensure that ReleaseMutex() is always called, place the call in a finally statement.
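A minimal sketch of mine of that guideline, mirroring the Monitor.Enter()/Exit() pattern of Example 8-5:

```csharp
using System.Threading;

public static class MutexFinallyDemo
{
    static readonly Mutex s_Mutex = new Mutex();

    public static int GuardedWork()
    {
        s_Mutex.WaitOne();
        try
        {
            return 42; //Protected work goes here
        }
        finally
        {
            s_Mutex.ReleaseMutex(); //Runs even if the work throws
        }
    }
}
```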
The default constructor of the Mutex class creates the mutex in the unowned state. If the creating thread wants to own the mutex, the thread must wait for the mutex first. The three other parameterized, overloaded versions of the constructor accept the bool initiallyOwned flag, letting you explicitly set the initial state of the mutex. Setting initiallyOwned to true grants the creating thread ownership of the mutex.
Example 8-8 demonstrates using a mutex to provide a thread-safe string property. The mutex is created in the class constructor. The set and get accessors acquire the mutex before setting or reading the actual member variable and then release the mutex. Note the use of a temporary variable in the get accessor, so that the mutex can be released before returning. Also note the call to Close() in the Dispose() method.
Example 8-8. Using a mutex
public class MutexDemo : IDisposable
{
    Mutex m_Mutex;
    string m_MyString;

    public MutexDemo()
    {
        m_Mutex = new Mutex();
    }
    public string MyString
    {
        set
        {
            m_Mutex.WaitOne();
            m_MyString = value;
            m_Mutex.ReleaseMutex();
        }
        get
        {
            m_Mutex.WaitOne();
            string temp = m_MyString;
            m_Mutex.ReleaseMutex();
            return temp;
        }
    }
    public void Dispose()
    {
        m_Mutex.Close();
    }
}
The Mutex parameterized constructors also allow you to specify a mutex name in the name parameter. The mutex name is any identifying string, such as "My Mutex". By default, a mutex has no name, and you can pass in a null value to create an explicitly nameless mutex. If you do provide a name, any thread on the machine, including threads in other app domains or processes, can try to access the named mutex. When you specify a name, the operating system checks whether somebody else has already created a mutex with that name; if so, it gives the creating thread a local object representing the global mutex. A named mutex therefore allows for cross-process and cross-app-domain communication. If you try to create a named mutex, you should pass false for initiallyOwned, because even if the mutex is already owned by another thread in a different process, .NET will not block your thread. The other option is to call the constructor version that accepts the out bool createdNew parameter, which will let you know whether you succeeded in owning the named mutex.
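The createdNew option can be used as in this sketch of mine (the mutex name is made up for illustration), which reports whether the call created the named mutex or bound to one that already existed:

```csharp
using System.Threading;

public static class NamedMutexDemo
{
    public static bool CreateOrBind(string name)
    {
        bool createdNew;
        //createdNew comes back true only for the first creator of this name
        Mutex mutex = new Mutex(false, name, out createdNew);
        mutex.Close();
        return createdNew;
    }
}
```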
If you want to bind to an existing named mutex, instead of trying to create a new mutex with the same name (which may simply create a new mutex for you if no such global named mutex is available), you can use the OpenExisting() static method of the Mutex class:
public static Mutex OpenExisting(string name);
If any process on the machine has already created a mutex with the specified name, OpenExisting() returns a Mutex object referencing that named mutex. If there is no existing mutex with that name, OpenExisting() throws an exception of type WaitHandleCannotBeOpenedException.
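Handling that exception might look like the following sketch of mine (the mutex name is made up), which binds to an existing named mutex and falls back gracefully when no such mutex exists:

```csharp
using System.Threading;

public static class OpenExistingDemo
{
    public static bool TryOpen(string name, out Mutex mutex)
    {
        try
        {
            mutex = Mutex.OpenExisting(name);
            return true;
        }
        catch(WaitHandleCannotBeOpenedException)
        {
            mutex = null; //No mutex with that name exists
            return false;
        }
    }
}
```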
A named mutex has a number of liabilities. First, it couples the applications using it, because they have to know the mutex name in advance. Second, in general it is a bad idea to have publicly exposed locks. If the mutex is supposed to protect some global resource, you should encapsulate the mutex in the resource. This decouples the client applications and is more robust, because you do not depend on the client applications being disciplined enough to try to acquire the mutex before accessing the resource. I consider named mutexes to be a thing of the past, a relic from Windows that you should use only when porting legacy code that took advantage of them to .NET. (This same reservation holds for named events and named semaphores, which are discussed later.)
There is, however, a surprisingly useful technique that relies on a named mutex, not necessarily to synchronize access to a system-wide resource, but rather as a crude yet simple and easy-to-use cross-process communication mechanism that facilitates a singleton application. A singleton application is an application that has only a single instance of it running at any moment in time (similar to Microsoft Outlook), regardless of how many times the user tries to launch the application. Singleton applications are often rich-client Windows Forms applications, although you could have a middle-tier singleton (as discussed in Chapter 10). Example 8-9 shows the SingletonApp static class, which is used exactly as a normal, multi-instance Windows Forms Application class, except it makes sure the application is a singleton:
static class Program
{
static void Main()
{
SingletonApp.Run(new MyForm());
}
}
SingletonApp
is defined in the class library WinFormsEx.dll and is available in the source code accompanying this book.
Example 8-9. The SingletonApp class
public static class SingletonApp
{
   static Mutex m_Mutex;

   public static void Run(Form mainForm)
   {
      bool first = IsFirstInstance();
      if(first)
      {
         Application.ApplicationExit += OnExit;
         Application.Run(mainForm);
      }
   }
   static bool IsFirstInstance()
   {
      Assembly assembly = Assembly.GetEntryAssembly();
      string name = assembly.FullName;
      m_Mutex = new Mutex(false,name);
      bool owned = false;
      owned = m_Mutex.WaitOne(TimeSpan.Zero,false);
      return owned;
   }
   static void OnExit(object sender,EventArgs args)
   {
      m_Mutex.ReleaseMutex();
      m_Mutex.Close();
   }
   //Other overloaded versions of Run()
}
If no other instance of the application is running, the form MyForm
is displayed. If there is another instance running, the application exits. The SingletonApp
class provides the same Run()
methods as the Application
class.
Implementing a singleton application requires some sort of cross-process communications mechanism, because if the user launches a new instance of the application it will be in a new process, yet it must detect if another process hosting the same application is already running. To this end, the SingletonApp
class uses the helper method IsFirstInstance()
:
static bool IsFirstInstance()
{
   Assembly assembly = Assembly.GetEntryAssembly();
   string name = assembly.FullName;
   m_Mutex = new Mutex(false,name);
   bool owned = false;
   owned = m_Mutex.WaitOne(TimeSpan.Zero,false);
   return owned;
}
IsFirstInstance()
creates a named mutex, and then proceeds to wait on it for a timeout of zero. The mutex must have a unique name, to avoid collisions with other named mutexes that might exist in the system. However, if SingletonApp
were to use a hardcoded unique name (such as a GUID), two different applications that used SingletonApp
on the same machine at the same time would collide with each other. The solution is to use the name of the EXE application assembly that loaded the class library containing SingletonApp
. The entry assembly name should be unique enough, and if the entry assembly is signed, it will also contain a token of the public key to guarantee uniqueness.
If this is the only instance of the application running, the mutex is unowned, and the wait operation returns immediately, indicating ownership. If another instance is running, the wait operation returns false
. Internally, SingletonApp
uses a normal Application
class. If it is the only instance running, it will delegate to that class the actual implementation of Run()
. Otherwise, it will simply ignore the request to run a new form. When the singleton application shuts down, it must release the mutex to allow a new instance to run. Fortunately, the Application
class provides the ApplicationExit
event, which signals that the application is shutting down. SingletonApp
subscribes to it, and in its handling of the application exit event it releases the named mutex and closes it.
The class EventWaitHandle
derives from WaitHandle
and is used to signal an event across threads. The EventWaitHandle
class is defined as:
public class EventWaitHandle : WaitHandle
{
public EventWaitHandle(bool initialState,EventResetMode mode);
public EventWaitHandle(bool initialState,EventResetMode mode,string name);
public EventWaitHandle(bool initialState,EventResetMode mode,string name,
ref bool createdNew);
public static EventWaitHandle OpenExisting(string name);
public bool Reset();
public bool Set();
//Security members
}
An EventWaitHandle
object can be in two states: signaled and non-signaled. The Set()
and Reset()
methods set the state of the handle to signaled or non-signaled, respectively. The term waitable event is used because if the state of the handle is non-signaled, any thread that calls one of the wait methods of the base class WaitHandle
is blocked until the handle becomes signaled. The constructors of EventWaitHandle
all accept the initialState
flag. Constructing an event with initialState
set to true
creates the event in a signaled state, while false
constructs it in a non-signaled state.
A waitable event can be named or unnamed, similar to a mutex. A named event can be shared across processes and can be used in some legacy porting or cross-process communication scenarios. To construct a named event, you need to use one of the EventWaitHandle
constructors that take the string name
parameter. The name
construction parameter functions in an identical manner to the corresponding parameter for specifying a mutex name. EventWaitHandle
also offers the OpenExisting()
method, which provides the same functionality discussed previously (see "Named mutexes").
EventWaitHandle
comes in two flavors: manual-reset and auto-reset. You control which type you require by providing its constructor with an enum of type EventResetMode
, defined as:
public enum EventResetMode
{
   AutoReset,
   ManualReset
}
To automate the selection, .NET provides two strongly typed subclasses of EventWaitHandle
, whose entire definitions and implementations are as follows:
public class ManualResetEvent : EventWaitHandle
{
   public ManualResetEvent(bool initialState) :
                              base(initialState,EventResetMode.ManualReset)
   {}
}
public class AutoResetEvent : EventWaitHandle
{
   public AutoResetEvent(bool initialState) :
                              base(initialState,EventResetMode.AutoReset)
   {}
}
As you can see, all the ManualResetEvent
and AutoResetEvent
classes do is provide EventWaitHandle
with the matching value of EventResetMode
. The main difference between working with the strongly typed subclasses or with EventWaitHandle
directly is that the subclasses do not offer a named version.
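For example, to obtain a named auto-reset event you must fall back on EventWaitHandle itself. Here is a minimal sketch, with a hypothetical event name (named events are a Windows kernel feature, so the sketch also allows for platforms that do not support them):

```csharp
using System;
using System.Threading;

public static class NamedEventDemo
{
   public static bool SignalAndCheck()
   {
      //The subclasses accept no name, so to share an auto-reset event
      //across processes you construct EventWaitHandle directly.
      //The name "MyAppEvent" is hypothetical.
      try
      {
         EventWaitHandle named = new EventWaitHandle(false,
                                    EventResetMode.AutoReset,"MyAppEvent");
         named.Set();
         bool signaled = named.WaitOne(0,false); //Consumes the signal
         named.Close();
         return signaled;
      }
      catch(PlatformNotSupportedException)
      {
         //Named events require the Windows kernel
         return true;
      }
   }
   static void Main()
   {
      Console.WriteLine(SignalAndCheck());
   }
}
```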
Using ManualResetEvent
(or EventWaitHandle
directly) is straightforward. First, you construct a new event object that multiple threads can access. When one thread needs to block until some event takes place, it calls one of ManualResetEvent
’s base class’s (i.e., WaitHandle
’s) waiting methods. When another thread wants to signal the event, it calls the Set()
method. Once the state of the ManualResetEvent
object is signaled, it stays signaled until some thread explicitly calls the Reset()
method. This is why it’s called a manual-reset event. Note that while the event state is set to signaled, all waiting threads are unblocked. Example 8-10 demonstrates using a ManualResetEvent
object.
Example 8-10. Using ManualResetEvent
public class EventDemo : IDisposable
{
   ManualResetEvent m_Event;

   public EventDemo()
   {
      m_Event = new ManualResetEvent(false); //Created unsignaled
      Thread thread = new Thread(DoWork);
      thread.Start();
   }
   void DoWork()
   {
      int counter = 0;
      while(true)
      {
         m_Event.WaitOne();
         counter++;
         Trace.WriteLine("Iteration # " + counter);
      }
   }
   public void GoThread()
   {
      m_Event.Set();
      Trace.WriteLine("Go Thread!");
   }
   public void StopThread()
   {
      m_Event.Reset();
      Trace.WriteLine("Stop Thread!");
   }
   public void Dispose()
   {
      m_Event.Close();
   }
}
In this example, the class EventDemo
creates a new thread whose thread method DoWork()
traces a counter value in a loop to the Output window. However, DoWork()
doesn’t start tracing right away—before every loop iteration, it waits for the m_Event
member of type ManualResetEvent
to be signaled. Signaling is done by calling the GoThread()
method, which simply calls the Set()
method of the event. Once the event is signaled, DoWork()
keeps tracing the counter until the StopThread()
method is called. StopThread()
blocks the thread by calling the Reset()
method of the event. You can start and stop the thread execution as many times as you like using the event. A possible output of Example 8-10 might be:
Go Thread!
Iteration # 1
Iteration # 2
Iteration # 3
Stop Thread!
Go Thread!
Iteration # 4
Iteration # 5
Stop Thread!
Note that the class EventDemo
closes the event object in its Dispose()
method. Dispose()
should also signal the thread to terminate, as demonstrated later.
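One possible way to combine disposal with thread termination (a sketch of my own, not the example this chapter presents later) is to dedicate a second event to shutdown and have the worker wait on both handles with WaitHandle.WaitAny():

```csharp
using System;
using System.Diagnostics;
using System.Threading;

public class EventDemo2 : IDisposable
{
   ManualResetEvent m_GoEvent   = new ManualResetEvent(false);
   ManualResetEvent m_ExitEvent = new ManualResetEvent(false);
   Thread m_Thread;

   public EventDemo2()
   {
      m_Thread = new Thread(DoWork);
      m_Thread.Start();
   }
   void DoWork()
   {
      WaitHandle[] handles = {m_ExitEvent,m_GoEvent};
      int counter = 0;
      while(true)
      {
         //Index 0 means the exit event fired: terminate the thread
         if(WaitHandle.WaitAny(handles) == 0)
         {
            return;
         }
         counter++;
         Trace.WriteLine("Iteration # " + counter);
      }
   }
   public void GoThread()
   {
      m_GoEvent.Set();
   }
   public void StopThread()
   {
      m_GoEvent.Reset();
   }
   public void Dispose()
   {
      m_ExitEvent.Set(); //Signal the worker to terminate...
      m_Thread.Join();   //...and wait for it to finish
      m_GoEvent.Close();
      m_ExitEvent.Close();
   }
}
```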
An auto-reset event is identical to a manual-reset event, with one important difference: once the state of the event is set to signaled, it remains so until a single thread is released from its waiting call, at which point the state reverts automatically to non-signaled. If multiple threads are waiting for the event, they are placed in a queue and are taken out of that queue in order. If no threads are waiting when Set()
is called, the state of the event remains signaled.
Note that if multiple threads are waiting for the handle to become signaled, there is no way to release just one of them using a manual-reset event. The reason for this is that there could be a context switch between the call to Set()
and Reset()
, and multiple threads could be unblocked. The auto-reset event combines the Set()
and Reset()
calls into a single atomic operation.
If the event in Example 8-10 were an auto-reset event instead of a manual-reset event, a possible output might be:
Go Thread!
Iteration # 1
Go Thread!
Iteration # 2
Go Thread!
Iteration # 3
Go Thread!
Iteration # 4
The worker thread traces a single iteration at a time and blocks between iterations until the next call to GoThread()
. There would be no need to call StopThread()
.
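A short sketch illustrates this single-release behavior: with three threads blocked on an auto-reset event, one call to Set() unblocks exactly one of them. (The Sleep() calls are crude scheduling aids for the sake of the illustration, not something to rely on in production code.)

```csharp
using System;
using System.Threading;

public static class AutoResetDemo
{
   static AutoResetEvent s_Event = new AutoResetEvent(false);
   public static int s_Released;

   static void Worker()
   {
      s_Event.WaitOne();
      Interlocked.Increment(ref s_Released);
   }
   public static void Run()
   {
      for(int i = 0; i < 3; i++)
      {
         Thread thread = new Thread(Worker);
         thread.IsBackground = true; //Do not keep the process alive
         thread.Start();
      }
      Thread.Sleep(100); //Let the workers reach WaitOne()
      s_Event.Set();     //Releases exactly one waiting thread
      Thread.Sleep(100);
   }
   static void Main()
   {
      Run();
      Console.WriteLine(s_Released); //The other two threads stay blocked
   }
}
```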
Both EventWaitHandle
and the Monitor
class allow one thread to wait for an event and another thread to signal (or pulse) the event. The main difference between the two mechanisms is that after a monitor is pulsed, it has no recollection of the action. Even if no thread is waiting for the monitor when it’s pulsed, the next thread to wait for the monitor will be blocked. Events, on the other hand, have “memory”; their state remains signaled. A manual-reset event remains signaled until it is reset explicitly, and an auto-reset event remains signaled until a thread waits on it or until it is reset explicitly.
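The difference is easy to demonstrate. In this sketch, a signal sent to an auto-reset event before any thread waits is stored, while a pulse sent to a monitor with no waiting thread is simply lost:

```csharp
using System;
using System.Threading;

public static class PulseVsEvent
{
   //An event remembers a signal sent when no thread is waiting
   public static bool EventRemembers()
   {
      AutoResetEvent ev = new AutoResetEvent(false);
      ev.Set();                   //Nobody is waiting yet
      return ev.WaitOne(0,false); //True: the signal was stored
   }
   //A monitor pulse sent when no thread is waiting is lost
   public static bool MonitorRemembers()
   {
      object sync = new object();
      lock(sync)
      {
         Monitor.Pulse(sync);          //No recollection afterward
         return Monitor.Wait(sync,50); //False: the wait times out
      }
   }
   static void Main()
   {
      Console.WriteLine(EventRemembers());   //True
      Console.WriteLine(MonitorRemembers()); //False
   }
}
```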
The rendezvous problem is a classic problem of multithreaded applications: how can you safely have two threads execute some code, wait for one another at some virtual meeting point, and then continue their execution simultaneously after the rendezvous?
Example 8-11 lists the Rendezvous
helper class.
Example 8-11. The Rendezvous helper class
public class Rendezvous
{
   AutoResetEvent m_First  = new AutoResetEvent(true);
   AutoResetEvent m_Event1 = new AutoResetEvent(false);
   AutoResetEvent m_Event2 = new AutoResetEvent(false);

   public void Wait()
   {
      bool first = m_First.WaitOne(TimeSpan.Zero,false);
      if(first)
      {
         WaitHandle.SignalAndWait(m_Event1,m_Event2);
      }
      else
      {
         WaitHandle.SignalAndWait(m_Event2,m_Event1);
      }
   }
   public void Reset()
   {
      m_First.Set();
   }
}
Example 8-12 demonstrates using the Rendezvous
class.
Example 8-12. Using the Rendezvous class
public class RendezvousDemo
{
   Rendezvous m_Rendezvous = new Rendezvous();

   public void ThreadMethod1()
   {
      //Do some work, then
      m_Rendezvous.Wait();
      //Continue executing
   }
   public void ThreadMethod2()
   {
      //Do some work, then
      m_Rendezvous.Wait();
      //Continue executing
   }
}
RendezvousDemo demo = new RendezvousDemo();

Thread thread1 = new Thread(demo.ThreadMethod1);
thread1.Start();

Thread thread2 = new Thread(demo.ThreadMethod2);
thread2.Start();
The RendezvousDemo
class in Example 8-12 has two methods, used as the thread methods for two different threads. When the two threads execute their respective thread methods, they do so concurrently. At some point, each thread calls the Wait()
method of the Rendezvous
class. Whichever thread calls Wait()
first is blocked until the second thread calls Wait()
. Once the second thread calls Wait()
, both threads automatically resume execution.
Implementing Rendezvous
relies on the SignalAndWait()
method of the WaitHandle
class. Internally, Rendezvous
defines two auto-reset events: m_Event1
and m_Event2
. When the Wait()
method is called on one thread, it signals m_Event1
and waits for m_Event2
to become signaled. When the Wait()
method is called on the other thread, it does the opposite: it signals m_Event2
and waits for m_Event1
to become signaled. Note that signaling and waiting is done as one atomic operation, thanks to SignalAndWait()
. This ensures adherence to the rendezvous requirement that the threads wait for one another and then continue execution simultaneously. The only remaining question is how Rendezvous
knows which thread is the first thread to call Wait()
. To address that issue Rendezvous
has another auto-reset event, called m_First
, which is initialized in the signaled state. The first thing Wait()
does is verify the status of m_First
, without blocking. It does this by waiting on it with a timeout of zero:
bool first = m_First.WaitOne(TimeSpan.Zero,false);
The zero timeout makes WaitOne()
return immediately, and its return value indicates whether the caller is the first. Since m_First
is an auto-reset event, you are guaranteed that only one thread can reach the conclusion that it is first—the WaitOne()
call will atomically set its state to non-signaled, and the second thread to call it will get back false
without blocking. Had m_First
been a manual-reset event, Rendezvous
would have been susceptible to a race condition of the first thread checking the status of m_First
but being switched out before acting upon it, and having the second thread think that it is the first thread too. Once Wait()
knows whether it is called on the first or second thread, it knows also the order in which to pass the events to SignalAndWait()
.
Rendezvous
also offers the Reset()
method—you must call Reset()
before the two threads try to rendezvous again, in order to reset the value of m_First
.
Semaphore
is another WaitHandle
-derived class, introduced by .NET 2.0. The definition of the Semaphore
class is:
public sealed class Semaphore : WaitHandle
{
   public Semaphore(int initialCount,int maximumCount);
   public Semaphore(int initialCount,int maximumCount,string name);
   public Semaphore(int initialCount,int maximumCount,string name,
                                                      ref bool createdNew);
   public static Semaphore OpenExisting(string name);
   public int Release();
   public int Release(int releaseCount);
   //Security members
}
The Semaphore
class assigns a signaled or non-signaled semantic to the state of the WaitHandle
base class. You could use a semaphore to synchronize access to a resource, similar to using a mutex, but semaphores are actually designed for cases where you need to control the execution of threads in some countable manner—that is, where instead of signaling a thread to just execute or stop, you want it to perform a number of operations, such as retrieving the next three messages from a message queue. Internally, the semaphore maintains a counter. As long as the counter’s value is a positive number, the handle is considered signaled, and therefore any thread that calls one of the wait methods of the Semaphore
base class will not be blocked. When the counter is zero, the handle is considered non-signaled, and all threads will be blocked until the counter is incremented.
The Semaphore
class’s constructors all take two values. The initialCount
argument indicates which value to initialize the counter to, and the maximumCount
argument indicates the highest allowed value for the internal counter. initialCount
can be zero or greater, and maximumCount
must be one or greater. Of course, initialCount
must be less than or equal to maximumCount
, or an ArgumentException
is raised. If the counter’s value is a positive number and a thread calls one of the waiting methods, the thread is not blocked and the counter is decremented by one. When the counter reaches zero, all waiting threads are blocked. You can increment the value of the counter by one by calling the Release()
method. You can also increment the counter by any value by calling the version of Release()
that accepts an integer. If calling one of the Release()
methods causes the counter’s value to exceed the maximum counter value set during construction, Release()
throws an exception of type SemaphoreFullException
.
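The following sketch provokes that exception deliberately, using an anonymous semaphore:

```csharp
using System;
using System.Threading;

public static class SemaphoreFullDemo
{
   public static bool TryOverRelease()
   {
      //Counter starts at 2, maximum allowed value is 3
      Semaphore semaphore = new Semaphore(2,3);
      semaphore.Release(); //Counter: 2 -> 3, now at the maximum
      try
      {
         semaphore.Release(); //Would exceed the maximum of 3
         return false;
      }
      catch(SemaphoreFullException)
      {
         return true;
      }
      finally
      {
         semaphore.Close();
      }
   }
   static void Main()
   {
      Console.WriteLine(TryOverRelease()); //True
   }
}
```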
Like Mutex
and EventWaitHandle
, Semaphore
offers constructor versions that accept a string name
parameter, and these constructors function in a similar manner to those of Mutex
and EventWaitHandle
. Semaphore
also supports the OpenExisting()
static method, which offers the same functionality discussed previously (under "Named mutexes").
A semaphore with a maximum counter value of one is equivalent to a mutex. Mutexes are sometimes referred to as binary semaphores.
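A sketch of using such a binary semaphore for mutual exclusion:

```csharp
using System;
using System.Threading;

public static class BinarySemaphoreDemo
{
   //A semaphore with a maximum count of one admits a single thread
   //at a time, just like a mutex
   static Semaphore s_Lock = new Semaphore(1,1);
   public static int s_Counter;

   static void Worker()
   {
      for(int i = 0; i < 1000; i++)
      {
         s_Lock.WaitOne(); //"Acquire the mutex"
         s_Counter++;      //Critical section
         s_Lock.Release(); //"Release the mutex"
      }
   }
   public static void Run()
   {
      Thread thread1 = new Thread(Worker);
      Thread thread2 = new Thread(Worker);
      thread1.Start();
      thread2.Start();
      thread1.Join();
      thread2.Join();
   }
   static void Main()
   {
      Run();
      Console.WriteLine(s_Counter); //2000
   }
}
```

Note, however, that unlike a mutex, a semaphore has no thread affinity: any thread may call Release(), so there is no notion of an owning thread.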
Example 8-13 demonstrates using a semaphore.
Example 8-13. Using a semaphore
public class SemaphoreDemo : IDisposable
{
   Semaphore m_Semaphore;

   public SemaphoreDemo()
   {
      //Create semaphore with initial counter of 0, max value of 5
      m_Semaphore = new Semaphore(0,5);
      Thread thread = new Thread(Run);
      thread.Start();
   }
   void Run()
   {
      int counter = 0;
      while(true)
      {
         m_Semaphore.WaitOne();
         counter++;
         Trace.WriteLine("Iteration # " + counter);
      }
   }
   public void DoIterations(int iterations)
   {
      m_Semaphore.Release(iterations);
   }
   public void Dispose()
   {
      m_Semaphore.Close();
   }
}
The class SemaphoreDemo
contains the m_Semaphore
member variable. In this example, m_Semaphore
is initialized with a value of zero and has a maximum allowed value of five. SemaphoreDemo
launches a thread with the Run()
method for a thread method. As long as the internal counter value of m_Semaphore
is a positive number, Run()
will trace to the Output window the number of iterations it performs. The DoIterations()
method increments m_Semaphore
by the specified number of iterations, which will enable Run()
to trace just as many iterations. As a result, the following client-side code:
SemaphoreDemo demo = new SemaphoreDemo();
demo.DoIterations(3);
will result in this output:
Iteration # 1
Iteration # 2
Iteration # 3
If all you want to do is increment or decrement a value, or exchange or compare values, there is a more efficient mechanism than using one or more mutexes: .NET provides the Interlocked
class to address this need. Interlocked
supports a set of static methods that access a variable in an atomic manner:
public static class Interlocked
{
   public static int Increment(ref int location);
   public static int Add(ref int location,int value);
   public static int Decrement(ref int location);
   public static int CompareExchange(ref int location,int value,int comparand);
   public static object CompareExchange(ref object location,object value,
                                                            object comparand);
   public static T CompareExchange<T>(ref T location1,T value,T comparand)
                                                            where T : class;
   public static object Exchange(ref object location,object value);
   public static int Exchange(ref int location,int value);
   public static T Exchange<T>(ref T location1,T value) where T : class;
   public static long Read(ref long location);
   //Additional methods for double, float, long, IntPtr
}
For example, here is how to use the Interlocked
class for incrementing an integer as a thread-safe operation, returning the new value:
int i = 8;
int newValue = Interlocked.Increment(ref i);
Debug.Assert(i == 9);
Debug.Assert(newValue == 9);
Interlocked
provides overloaded methods for more complex operations: Exchange()
assigns a value from one variable to another, and CompareExchange()
compares two variables and, if they are equal, assigns a new value to one of them. Like Increment()
and Decrement()
, these two complex operations have overloaded methods for common primitive types, but also for generic type parameters and object variables. Interlocked
is especially useful when both read and write operations are needed.
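Beyond the primitive operations, CompareExchange() lets you build arbitrary lock-free read-modify-write operations. This sketch implements a hypothetical "double the value" update that retries if another thread changed the variable between the read and the write:

```csharp
using System;
using System.Threading;

public static class CompareExchangeDemo
{
   public static int s_Value = 5;

   //A lock-free "multiply by two" built from CompareExchange(): retry
   //until no other thread changed the value between read and write
   public static void DoubleValue()
   {
      int current;
      int desired;
      do
      {
         current = s_Value;
         desired = current * 2;
      }
      while(Interlocked.CompareExchange(ref s_Value,desired,current) != current);
   }
   static void Main()
   {
      DoubleValue();
      Console.WriteLine(s_Value); //10
   }
}
```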
Consider a member variable or a property that is frequently read and written by multiple threads. Locking the whole object using a Monitor
object is inefficient, because a macro lock will lock all member variables. Using a mutex dedicated to that member variable (as in Example 8-8) is also inefficient, because if no thread is writing a new value, multiple threads can concurrently read the current value. This common pattern is called multiple readers/single writer, because you can have many threads reading a value but only one thread writing to it at a time. While a write operation is in progress, no thread should be allowed to read. .NET provides a lock designed to address the multiple readers/single writer situation, called ReaderWriterLock
. The ReaderWriterLock
class is defined as:
public sealed class ReaderWriterLock
{
   public ReaderWriterLock();
   public bool IsReaderLockHeld{get;}
   public bool IsWriterLockHeld{get;}
   public int WriterSeqNum{get;}
   public void AcquireReaderLock(int millisecondsTimeout);
   public void AcquireReaderLock(TimeSpan timeout);
   public void AcquireWriterLock(int millisecondsTimeout);
   public void AcquireWriterLock(TimeSpan timeout);
   public bool AnyWritersSince(int seqNum);
   public void DowngradeFromWriterLock(ref LockCookie lockCookie);
   public LockCookie ReleaseLock();
   public void ReleaseReaderLock();
   public void ReleaseWriterLock();
   public void RestoreLock(ref LockCookie lockCookie);
   public LockCookie UpgradeToWriterLock(int millisecondsTimeout);
   public LockCookie UpgradeToWriterLock(TimeSpan timeout);
}
You use the AcquireReaderLock()
method to acquire a lock for reading a value and AcquireWriterLock()
to acquire a lock for writing a value. Once you are done reading or writing, you need to call ReleaseReaderLock()
or ReleaseWriterLock()
, respectively. The ReaderWriterLock
keeps track of the threads owning it and applies the multiple readers/single writer semantics: if no thread calls AcquireWriterLock()
, every thread that calls AcquireReaderLock()
isn’t blocked and is allowed to access the resource. If a thread calls AcquireWriterLock()
, ReaderWriterLock
blocks the caller until all the currently reading threads call ReleaseReaderLock(). ReaderWriterLock
then blocks any further calls to AcquireReaderLock()
and AcquireWriterLock()
, and grants the writing thread access.
Pending writing threads are placed in a queue and are served in order, one by one. In effect, ReaderWriterLock
serializes all writing threads, allowing access one at a time, but it allows concurrent reading accesses. As you can see, ReaderWriterLock
is a throughput-oriented lock. ReaderWriterLock
provides overloaded methods that accept a timeout for acquiring the lock, the UpgradeToWriterLock()
method to upgrade a reader lock to a writer lock (e.g., for when, based on the information read, you may need to write something instead), and the DowngradeFromWriterLock()
method to convert a write request in progress to a read request. ReaderWriterLock
also automatically handles nested lock requests by readers and writers.
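A sketch of the upgrade pattern (the member values are hypothetical): based on what it reads, the accessor upgrades to a writer lock, refreshes the value, and downgrades again. Because UpgradeToWriterLock() may release the reader lock while waiting, the condition must be re-checked after the upgrade.

```csharp
using System.Threading;

public class Cache
{
   string m_Value = "stale";
   ReaderWriterLock m_RWLock = new ReaderWriterLock();

   //Read under a reader lock; upgrade to a writer lock only if a
   //refresh turns out to be necessary
   public string GetValue()
   {
      m_RWLock.AcquireReaderLock(Timeout.Infinite);
      try
      {
         if(m_Value == "stale")
         {
            LockCookie cookie = m_RWLock.UpgradeToWriterLock(Timeout.Infinite);
            try
            {
               //Re-check: another writer may have run during the upgrade
               if(m_Value == "stale")
               {
                  m_Value = "fresh";
               }
            }
            finally
            {
               m_RWLock.DowngradeFromWriterLock(ref cookie);
            }
         }
         return m_Value;
      }
      finally
      {
         m_RWLock.ReleaseReaderLock();
      }
   }
}
```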
Example 8-14 demonstrates a typical case that uses ReaderWriterLock
to provide multiple readers/single writer semantics to a property. The constructor creates a new instance of ReaderWriterLock
. The get
accessor calls AcquireReaderLock()
before reading the member variable and ReleaseReaderLock()
after reading it. The set
accessor calls AcquireWriterLock()
before assigning a new value to the member variable and ReleaseWriterLock()
after updating the property with the new value.
Example 8-14. Using ReaderWriterLock
public class MyClass
{
   string m_MyString;
   ReaderWriterLock m_RWLock;

   public MyClass()
   {
      m_RWLock = new ReaderWriterLock();
   }
   public string MyString
   {
      set
      {
         m_RWLock.AcquireWriterLock(Timeout.Infinite);
         m_MyString = value;
         m_RWLock.ReleaseWriterLock();
      }
      get
      {
         m_RWLock.AcquireReaderLock(Timeout.Infinite);
         string temp = m_MyString;
         m_RWLock.ReleaseReaderLock();
         return temp;
      }
   }
}