Chapter 5. COM+ Concurrency Model

Employing multiple threads of execution in your application opens the way for many benefits impossible to achieve using just a single thread. These benefits include:

Responsive user interface

Your application can process user requests (such as printing or connecting to a remote machine) on a different thread than that of the user interface. If it were done on the same thread, the user interface would appear to hang until the other requests were processed. Because the user interface is on a different thread, it can continue to respond to the user’s request.

Enhanced performance

If the machine your application runs on has multiple CPUs and the application is required to perform multiple calculation-intensive independent operations, the only way to use the extra processing power is to execute the operations on different threads.

Increased throughput

If your application is required to process incoming client requests as fast as it can, you often spin off a number of worker threads to handle requests in parallel.

Asynchronous method calls

Instead of blocking the client while the object processes the client request, the object can delegate the work to another thread and return control to the client immediately.

In general, whenever you have two or more operations that can take place in parallel and are different in nature, using multithreading can bring significant gains to your application.

The problem is that introducing multithreading to your application opens up a can of worms. You have to worry about threads deadlocking while contending for the same resources, synchronize access to objects shared by multiple concurrent threads, and be prepared to handle object method re-entrancy. Multithreading bugs and defects are notoriously hard to detect, reproduce, and eliminate. They often involve rare race conditions (in which multiple threads read and write shared data without appropriate access synchronization), and fixing one problem often introduces another.

Writing robust, high-performance multithreaded object-oriented code is no trivial matter. It requires a great deal of skill and discipline on the part of the developers.

Clearly there is a need to provide some concurrency management service to your components so you can focus on the business problem at hand, instead of on multithreading synchronization issues. The classic COM concurrency management model addresses the problems of developing multithreaded object-oriented applications. However, the classic COM solution has its own set of deficiencies.

COM+ concurrency management service addresses the problems with the classic COM solution. It also provides you with administrative support for the service via the Component Services Explorer.

This chapter first briefly examines the way classic COM solves concurrency and synchronization problems in classic object-oriented programming, and then introduces the COM+ concurrency management model, showing how it improves classic COM concurrency management. The chapter ends by describing a new Windows 2000 threading model, the neutral threaded apartment, and how it relates to COM+ components.

Object-Oriented Programming and Multiple Threads

The classic COM threading model was designed to address the set of problems inherent with objects executing in different threads. Consider, for example, the situation depicted in Figure 5-1. Under classic object-oriented programming, two objects on different threads that want to interact with each other have to worry about synchronization and concurrency.


Figure 5-1. Objects executing on two different threads

Object 1 resides in Thread A and Object 2 resides in Thread B. Suppose that Object 1 wants to invoke a method of Object 2, and that method, for whatever reason, must run in the context of Thread B. The problem is that, even if Object 1 has a pointer to Object 2, it is useless. If Object 1 uses such a pointer to invoke the call, the method executes in the context of Thread A.

This behavior is the direct result of the implementation language used to code the objects. Programming languages such as C++ are completely thread-oblivious—there is nothing in the language itself to denote a specific execution context, such as a thread. If you have a pointer to an object and you invoke a method of that object, the compiler places the method’s parameters and return address on the calling thread’s stack—in this case, Thread A’s stack. That does not have the intended effect of executing the call in the context of Thread B. With a direct call, knowledge that the method should have executed on another thread remains in the design document, on the whiteboard, or in the mind of the programmer.

The classic object-oriented programming (OOP) solution is to post or send a message to Thread B. Thread B processes the message, invokes the method on Object 2, and signals Thread A when it finishes. Meanwhile, Object 1 blocks itself and waits for a signal or event from Object 2 signifying that the method has completed execution.

This solution has several disadvantages: you have to handcraft the mechanism, the likelihood of mistakes (resulting in a deadlock) is high, and you are forced to do it over and over again every time you have objects on multiple threads.

The more acute problem is that the OOP solution introduces tight coupling between the two objects and the synchronization mechanism. The code in the two objects has to be aware of their execution contexts, of the way to post messages between objects, of how to signal events, and so on. One of the core principles of OOP, encapsulation or information hiding, is violated; as a result, maintenance of classic multithreaded object-oriented programs is hard, expensive, and error-prone.

That is not all. When developers started developing components (packaging objects in binary units, such as DLLs), a classic problem in distributed computing reared its head. The idea behind component-oriented development is building systems out of well-encapsulated binary entities, which you can plug or unplug at will like Lego bricks. With component-oriented development, you gain modularity, extensibility, maintainability, and reusability. Developers and system designers wanted to get away from monolithic object-oriented applications to a collection of interacting binary components. Figure 5-2 shows a product that consists of components.

The application is constructed from a set of components that interact with one another. Each component was implemented by an independent vendor or team. However, what should be done about the synchronization requirements of the components? What happens if Components 3 and 1 try to access Component 2 at the same time? Could Component 2 handle it? Will it crash? Will Component 1 or Component 3 be blocked? What effect would that have on Component 4 or 5? Because Component 2 was developed as a standalone component, its developer could not possibly know what the specific runtime environment for the component would be. With that lack of knowledge, many questions arise. Should the component be defensive and protect itself from multiple threads accessing it? How can it participate in an application-wide synchronization mechanism that may be in place? Perhaps Component 2 will never be accessed simultaneously by two threads in this application; however, Component 2's developer cannot know this in advance, so the developer may choose to always protect the component, taking an unnecessary performance hit in many cases for the sake of avoiding data corruption.


Figure 5-2.  Objects packaged in binary units have no way of knowing about the synchronization needs of other objects in other units
