This chapter will continue walking through the framework implementation by focusing on how CSLA .NET provides support for mobile objects. Chapter 1 introduced the concept of mobile objects, including the idea that in an ideal world, business logic would be available both on the client workstation (or web server) and on the application server. In this chapter, the implementation of data access is designed specifically to leverage the concept of mobile objects by enabling objects to move between client and server. When on the client, all the data binding and UI support from Chapters 7 to 13 is available to a UI developer; and when on the server, the objects can be persisted to a database or other data store.
Chapter 2 discussed the idea of a data portal. The data portal combines the channel adapter and message router design patterns to provide a simple, clearly defined point of entry to the server for all data access operations. In fact, the data portal entirely hides whether an application server is involved, allowing an application to switch between 2-tier and 3-tier physical deployments without changing any code.
A UI developer is entirely unaware of the use of a data portal. Instead, the UI developer will interact only with the business objects created by the business developer.
The business developer will make use of the `DataPortal` class from the `Csla` namespace to create, retrieve, update, and delete all business object data. This `DataPortal` class is the single entry point to the entire data portal infrastructure, which enables mobile objects and provides access to server-side resources such as distributed transaction support. The key features enabled by the data portal infrastructure include the following:
Enabling mobile objects
Providing a consistent coding model for root and child objects
Hiding the network transport (channel adapter)
Exposing a single point of entry to the server (message router)
Exposing server-side resources (database engine, distributed transactions, etc.)
Allowing objects to persist themselves or to use an external persistence model
Unifying context (passing context data to/from client and server)
Using Windows integrated (AD) security
Using CSLA .NET custom authentication (including impersonation)
Meeting all those needs means that the data portal is a complex entity. While to a business developer it appears to consist only of the simple `DataPortal` class, there's actually a lot going on behind that class.
I've already discussed much of the authentication and authorization support in Chapter 12, but I will discuss some aspects of authentication in this chapter as well. Because the data portal is so complex, I'll spend some time discussing its design before walking through the implementation details.
One of the primary goals of object-oriented programming is to encapsulate all the functionality (data and implementation) for a domain object into a single class. This means, for instance, that all the business logic responsible for editing customer information should be in a `CustomerEdit` class.
In many cases, the business logic in an object directly supports a rich, interactive user experience. This is especially true for WPF or Windows Forms applications, in which the business object implements validation and calculation logic that should be run as the user enters values into each field on a form. To achieve this, the objects should be running on the client workstation or web server to be as close to the user as possible.
At the same time, most applications have back-end processing that is not interactive. In an n-tier deployment, this non-interactive business logic should run on an application server. Yet good object-oriented design dictates that all business logic should be encapsulated within objects rather than spread across the application. This can be challenging when an object needs to both interact with the user and perform back-end processing. Effectively, the object needs to be on the client sometimes and on the application server other times.
The idea of mobile objects solves this problem by allowing an object to physically move from one machine to another. This means it really is possible to have an object run on the client to interact with the user, then move to the application server to do back-end work like interacting with the database.
A key goal of the data portal is to enable the concept of mobile objects. In the end, not only will objects be able to go to the application server to persist their data to a database, but they will also be able to handle any other non-interactive back-end business behaviors that should run on the application server.
At the same time, as discussed in Chapter 1, it is important to maintain clear logical layers in the application architecture. This means that the business logic and data access should be logically separated.
The term logically separated can mean many things. You might consider logical separation to include putting all your data access code into a limited set of predefined methods, and I think that is perfectly valid. Or you might consider logical separation to mean putting the data access code into a separate class or a separate assembly. To me, those are valid as well.
The data portal enables several techniques for logical separation:
Data access code goes into a limited set of predefined methods.
Data access code goes into a separate data access class (and optionally separate assembly), invoked by the business object.
Data access code goes into a separate object factory class (and optionally separate assembly), invoked by the data portal.
All of these techniques provide logical separation of layers, though the latter two offer the psychological benefit of putting the code into a separate class for clarity.
In Chapters 4 and 5, I walked through the coding model for the various stereotypes directly supported by CSLA .NET. As you can see from the code templates in Chapter 5, the data portal is used to create, retrieve, update, and delete both root and child objects, using a relatively consistent coding pattern for both.
A root object typically implements a set of `DataPortal_XYZ` methods such as `DataPortal_Create()` and `DataPortal_Fetch()`. These methods are invoked by the data portal in response to the business object's factory method calling `DataPortal.Create()` and `DataPortal.Fetch()`.
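As a sketch, the root object pattern looks like this (the `CustomerEdit` class and its table are hypothetical examples, not code from the framework):

```csharp
[Serializable]
public class CustomerEdit : BusinessBase<CustomerEdit>
{
  // factory method called by the UI
  public static CustomerEdit GetCustomer(int id)
  {
    return DataPortal.Fetch<CustomerEdit>(
      new SingleCriteria<CustomerEdit, int>(id));
  }

  // invoked by the data portal, possibly on the server
  private void DataPortal_Fetch(SingleCriteria<CustomerEdit, int> criteria)
  {
    // load the object's fields from the database using criteria.Value
  }
}
```

The UI never sees the data portal; it simply calls `CustomerEdit.GetCustomer(123)` and receives a populated object.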
Similarly, a child object typically implements a set of `Child_XYZ` methods such as `Child_Create()` and `Child_Fetch()`. These methods are invoked by the data portal in response to the child object's factory method calling `DataPortal.CreateChild()` and `DataPortal.FetchChild()`.
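The child pattern mirrors the root pattern. A minimal sketch (the `ChildItem` class is a hypothetical example):

```csharp
[Serializable]
public class ChildItem : BusinessBase<ChildItem>
{
  // factory method, typically called by the parent object or collection
  internal static ChildItem NewChildItem()
  {
    return DataPortal.CreateChild<ChildItem>();
  }

  // invoked by the data portal to initialize a new child
  private void Child_Create()
  {
    // load default values
  }

  // invoked by the data portal to load existing data
  private void Child_Fetch(object data)
  {
    // copy values from the supplied data into the object's fields
  }
}
```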
Additionally, the field manager discussed in Chapter 7 plays a role in the process. In a parent business object's `DataPortal_Insert()` or `DataPortal_Update()` method, the object must update its child objects as well. This can be done in a single line of code:

```csharp
FieldManager.UpdateChildren();
```

The field manager loops through all child references and has the data portal update each one by calling the appropriate `Child_XYZ` method based on the state of the child object.
`BusinessListBase` also participates by providing a prebuilt `Child_Update()` implementation that updates the collection's lists of deleted and active items. In fact, this method is useful even for a root collection, because the collection's `DataPortal_Update()` can look like this:

```csharp
protected override void DataPortal_Update()
{
  // open database connection
  Child_Update();
  // close database connection
}
```

What's even nicer is that for a child collection, the business developer typically has to write no code at all in the collection. The data portal handles updating a child collection automatically, as long as the child objects implement `Child_XYZ` methods.
The data portal combines two common design patterns: channel adapter and message router.
The channel adapter pattern provides a great deal of flexibility for n-tier applications by allowing the application to switch between 2-tier and 3-tier models, as well as between various network protocols.
The message router pattern helps to decouple the client and server by providing a clearly defined, single point of entry for all server interaction. Each call to the server is routed to an appropriate server-side object.
Channel Adapter
Chapter 1 discussed the costs and benefits of physical n-tier deployments. Ideally, an application will use as few physical tiers as possible. At the same time, it is good to have the flexibility to switch from a 2-tier to a 3-tier model, if needed, to meet future scalability or security requirements.
Switching to a 3-tier model means that there's now a network connection between the client (or web server) and the application server. The primary technology provided by .NET for such communication is WCF. However, this is just one (though the recommended one) of several options:
WCF
Remoting
ASP.NET Web Services (ASMX)
Enterprise Services (DCOM)
To avoid being locked into a single network communication technology, the data portal applies the channel adapter design pattern.
The channel adapter pattern allows the specific network technology to be changed through configuration rather than through code. A side effect of the implementation shown in this chapter is that no network is also an option. Thus, the data portal provides support for 2-tier or 3-tier deployment. In the 3-tier case, it supports various network technologies, all of which are configurable without changing any code.
Figure 15-1 illustrates how a client call flows through the data portal.
Switching from one channel to another is done by changing a configuration file, not by changing code. Notice that the `LocalProxy` channel communicates directly with the `DataPortal` object (from the `Csla.Server` namespace) on the right. This is because it bypasses the network entirely, interacting with the object in memory on the client. All the other channel proxies use network communication to interact with the server-side object.
The data portal also allows you to create your own proxy/host combination so you can support network channels other than those implemented in this chapter.
Table 15-1 lists the types required to implement the channel adapter portion of the data portal.
Table 15.1. Types Required for the Channel Adapter Pattern
Type | Namespace | Description
---|---|---
`MethodCaller` | `Csla.Reflection` | Utility class that encapsulates the use of reflection and dynamic method invocation to find method information and invoke methods
`DataPortalException` | `Csla.Server` | Exception thrown by the data portal when an exception occurs while calling a data access method
`RunLocal` | `Csla` | Attribute applied to a business object's data access methods to force the data portal to always run that method on the client, bypassing the configuration settings
`DataPortal` | `Csla` | Primary entry point to the data portal infrastructure; used by business developers
`DataPortal<T>` | `Csla` | Primary entry point to the data portal for asynchronous behaviors; used by business developers
`DataPortal` | `Csla.Server` | Portal to the message router functionality on the server; acts as a single point of entry for all server communication
`IDataPortalServer` | `Csla.Server` | Interface defining the methods required for data portal host objects
`IDataPortalProxy` | `Csla.DataPortalClient` | Interface defining the methods required for client-side data portal proxy objects
`WcfProxy` | `Csla.DataPortalClient` | Uses WCF to communicate with a WCF server running in IIS, WAS, or a custom host (typically a Windows service)
`WcfPortal` | `Csla.Server.Hosts` | Exposed on the server by IIS, WAS, or a custom host; called by `WcfProxy`
`LocalProxy` | `Csla.DataPortalClient` | Loads the server-side data portal components directly into memory on the client and runs all "server-side" operations in the client process
`RemotingProxy` | `Csla.DataPortalClient` | Uses .NET Remoting to communicate with a remoting server running in IIS or within a custom host (typically a Windows service)
`RemotingPortal` | `Csla.Server.Hosts` | Exposed on the server by IIS or a custom host; called by `RemotingProxy`
`EnterpriseServicesProxy` | `Csla.DataPortalClient` | Uses Enterprise Services (DCOM) to communicate with a server running in COM+
`EnterpriseServicesPortal` | `Csla.Server.Hosts` | Exposed on the server by Enterprise Services; called by `EnterpriseServicesProxy`
`WebServicesProxy` | `Csla.DataPortalClient` | Uses Web Services to communicate with a service hosted in IIS
`WebServicePortal` | `Csla.Server.Hosts` | Exposed on the server as a web service by IIS; called by `WebServicesProxy`
The .NET Remoting, Web Services, and Enterprise Services technologies are supported primarily for backward compatibility with older versions of CSLA .NET. I recommend using the WCF technology, and that is the technology I'll focus on in this chapter.
The point of the channel adapter is to allow a client to use the data portal without having to worry about how the call will be relayed to the `Csla.Server.DataPortal` object. Once the call makes it to the server-side `DataPortal` object, the message router pattern becomes important.
Message Router
One important lesson to be learned from the days of COM and MTS/COM+ is that it isn't wise to expose large numbers of classes and methods from a server. When a server exposes dozens or even hundreds of objects, the client must be aware of all of them in order to function.
Sadly, this lesson doesn't seem to have informed the designs of many service-oriented systems. Fortunately, the representational state transfer (REST) movement has picked up on the idea of limiting the entry points to a server and is helping to shape the industry in this direction.
If the client is aware of every server-side object, we get tight coupling and fragility. Any change to the server objects typically changes the server's public API, thus breaking all of the clients, often including those clients who aren't even using the object that was changed.
One way to avoid this fragility is to add a layer of abstraction. Specifically, you can implement the server to have a single point of entry that exposes a limited number of methods. This keeps the server's API clear and concise, minimizing the need for a server API change. The data portal will expose only the five methods listed in Table 15-2.
Table 15.2. Methods Exposed by the Data Portal
Method | Purpose
---|---
`Create()` | Creates a new object, loading it with default values from the database
`Fetch()` | Retrieves an existing object, first loading it with data from the database
`Update()` | Inserts, updates, or deletes data in the database corresponding to an existing object
`Delete()` | Deletes data in the database corresponding to an existing object
`Execute()` | Executes a command stereotype object on the server
Of course, the next question is, with a single point of entry, how do your clients get at the dozens or hundreds of objects on the server? It isn't like they aren't needed! That is the purpose of the message router.
The single point of entry to the server routes all client calls to the appropriate server-side object. If you think of each client call as a message, then this component routes messages to your server-side objects. In CSLA .NET, the message router is `Csla.Server.DataPortal`. Notice that it is also the endpoint for the channel adapter pattern discussed earlier; the data portal knits the two patterns together into a useful whole.
For `Csla.Server.DataPortal` to do its work, all server-side objects must conform to a standard design so the message router knows how to invoke them. Remember, the message router merely routes messages to objects; it is the object that actually does useful work in response to the message.
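Conceptually, this routing is convention-based dispatch. The following self-contained sketch (a simplified illustration, not CSLA's actual implementation) shows the core idea: a message names an operation, and the router uses reflection to find and invoke the matching method on the target object:

```csharp
using System;
using System.Reflection;

public static class MiniRouter
{
  // route a message to a conventionally named method on the target object
  public static void Route(object target, string operation, object criteria)
  {
    MethodInfo method = target.GetType().GetMethod(
      "DataPortal_" + operation,
      BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
    if (method == null)
      throw new MissingMethodException(target.GetType().Name, operation);
    method.Invoke(target, new[] { criteria });
  }
}

public class Customer
{
  public string Name { get; private set; }

  // found by the router via naming convention
  private void DataPortal_Fetch(object criteria)
  {
    Name = "Customer " + criteria; // stand-in for a database load
  }
}

// usage:
//   var c = new Customer();
//   MiniRouter.Route(c, "Fetch", 123);
```

The real `SimpleDataPortal` does considerably more (criteria resolution, overload matching, event hooks), but the routing principle is the same.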
Figure 15-2 illustrates the flow of a call through the message router implementation. The `DataPortal` class (on the left of Figure 15-2) represents `Csla.Server.DataPortal`, which was the rightmost entity in Figure 15-1. It relies on a `SimpleDataPortal` object to do the actual message routing, a fact that will become important shortly for support of distributed transactions.

The `SimpleDataPortal` object routes each client call (message) to the actual business object that can handle the message. These are the same business classes and objects that make up the application's business logic layer.
In other words, the same exact objects used by the UI on the client are also called by the data portal on the server. This allows the objects to run on the client to interact with the user, and to run on the server to do back-end processing as needed.
The `FactoryDataPortal` object routes each client call (message) to an object factory that can handle the message. If a business developer chooses to use this object factory approach, he'll need to create an object factory for each root business object, so that factory object can create, retrieve, update, and delete the business object and its data.
Table 15-3 lists the classes needed, in addition to `Csla.DataPortal` and `Csla.Server.DataPortal`, to implement the message router behavior.
Table 15.3. Types Required for the Message Router
Type | Namespace | Description
---|---|---
`TransactionalDataPortal` | `Csla.Server` | Creates a `TransactionScope` and then delegates the call to the appropriate server-side data portal object
`ServicedDataPortal` | `Csla.Server` | Creates a COM+ distributed transaction and then delegates the call to the appropriate server-side data portal object
`DataPortalSelector` | `Csla.Server` | Determines whether to invoke the managed data portal (`SimpleDataPortal`) or the object factory data portal (`FactoryDataPortal`)
`SimpleDataPortal` | `Csla.Server` | Entry point to the server, implementing the message router behavior and routing client calls to the appropriate business object on the server
`FactoryDataPortal` | `Csla.Server` | Entry point to the server, implementing the message router behavior and routing client calls to the appropriate object factory on the server
`ICriteria` | `Csla` | Interface that defines the required behavior for a non-nested criteria class
`CriteriaBase` | `Csla` | Optional base class for use when building criteria objects; criteria objects contain the criteria or key data needed to create, retrieve, or delete an object's data
`SingleCriteria` | `Csla` | Optional prebuilt criteria class that passes a single identifying value through the data portal for use in identifying the object to create, retrieve, or delete
`MethodCaller` | `Csla.Reflection` | Utility class that encapsulates the use of reflection and dynamic method invocation to find method information and to invoke methods
Notice that neither the channel adapter nor the message router explicitly deals with moving objects between the client and server. This is because the .NET runtime typically handles object movement automatically, as long as the objects are marked as `Serializable` or decorated with `DataContract`.
The `ICriteria` interface listed in Table 15-3 can be implemented by any criteria class, and must be implemented by any criteria class that is not nested inside its business class. For example, `CriteriaBase` and `SingleCriteria` implement this interface. During the implementation of `SimpleDataPortal` later in the chapter, you'll see how this interface is used.
Several different technologies support database transactions, including transactions in the database itself, ADO.NET, Enterprise Services, and `System.Transactions`. When updating a single database (even multiple tables), any of them will work fine, and your decision will often be based on which is fastest or easiest to implement.

If your application needs to update multiple databases, however, the options are a bit more restrictive. Transactions that protect updates across multiple databases are referred to as distributed transactions. In SQL Server, you can implement distributed transactions within stored procedures. Outside the database, you can use Enterprise Services or `System.Transactions`.
Distributed transaction technologies use the Microsoft DTC to manage the transaction across multiple databases. There is a substantial performance cost to enlisting the DTC in a transaction. Your application, the DTC, and the database engine(s) all need to interact throughout the transactional process to ensure that a consistent commit or rollback occurs, and this interaction takes time.
Historically, you had to pick one transactional approach for your application. This often meant using distributed transactions even when they weren't required—and paying that performance cost.
The `System.Transactions` namespace offers a compromise through the `TransactionScope` object. It starts out using nondistributed transactions (like those used in ADO.NET), and thus offers high performance for most applications. However, as soon as your code uses a second database within a transaction, `TransactionScope` automatically enlists the DTC to protect the transaction. This means you get the benefits of distributed transactions when you need them, but you don't pay the price for them when they aren't needed.
The `TransactionScope` object is a little tricky, because it will enlist the DTC if more than one database connection is opened, even to the exact same database. Later in the chapter, I'll discuss the `ConnectionManager`, `ObjectContextManager`, and `ContextManager` classes, which are designed to help you avoid this issue.
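A minimal sketch of the promotion behavior, assuming a valid `connectionString` variable (illustrative, not production code):

```csharp
using System.Data.SqlClient;
using System.Transactions;

using (var scope = new TransactionScope())
{
  using (var cn = new SqlConnection(connectionString))
  {
    cn.Open(); // enlists in a lightweight, nondistributed transaction
    // ... execute commands against cn ...
  }
  // opening a second connection within the scope (even to the
  // same database) promotes the transaction to the DTC

  scope.Complete(); // vote to commit; disposal without Complete() rolls back
}
```

Reusing one open connection for all work inside the scope is what keeps the transaction local, which is exactly the behavior the `ConnectionManager` class helps enforce.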
The data portal allows the developer to specify which transactional technology to use for each of a business object's data access methods. The `TransactionalDataPortal` and `ServicedDataPortal` classes in Figure 15-2 wrap the client's call in a `TransactionScope` or COM+ transactional context as requested.
The `Csla.Server.DataPortal` object uses the `Transactional` attribute to determine what type of transactional approach should be used for each call by the client. Ultimately, all calls end up being handled by `SimpleDataPortal` or `FactoryDataPortal`, which route the call to an appropriate object. The real question is whether that object will run within a preexisting transactional context or not.
The `Transactional` attribute is applied to the data access methods on the business object or factory object. The code in `Csla.Server.DataPortal` looks at the object's data access method that will ultimately be invoked by `SimpleDataPortal` or `FactoryDataPortal`, and it finds the value of the `Transactional` attribute (if any). Table 15-4 lists the options for this attribute.
Table 15.4. Transactional Options Supported by the Data Portal
Attribute | Result
---|---
(none) | The business object does not run within a preexisting transactional context and so must implement its own transactions using stored procedures or ADO.NET.
`[Transactional(TransactionalTypes.Manual)]` | Same as "none" in the previous entry.
`[Transactional(TransactionalTypes.EnterpriseServices)]` | The business object runs within a COM+ distributed transactional context.
`[Transactional(TransactionalTypes.TransactionScope)]` | The business object runs within a `TransactionScope`.
By extending the message router concept to add transactional support, the data portal makes it easy for a business developer to leverage either Enterprise Services or `System.Transactions` as needed. At the same time, the complexity of both technologies is reduced by abstracting them within the data portal. If neither technology is appropriate for your needs, you can always choose not to use the `Transactional` attribute, and then manage the transactions yourself in your data access code.
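For example, a data access method can opt into `TransactionScope` protection declaratively. A sketch (the method body and `connectionString` are illustrative assumptions):

```csharp
[Transactional(TransactionalTypes.TransactionScope)]
protected override void DataPortal_Update()
{
  // this method runs inside a TransactionScope created by the data
  // portal; an unhandled exception here causes an automatic rollback
  using (var cn = new SqlConnection(connectionString))
  {
    cn.Open();
    // update the object's data, then update children
    FieldManager.UpdateChildren();
  }
}
```

No transaction code appears in the method itself; the data portal wraps the call before it is routed to the object.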
A key goal for the data portal is to provide a consistent environment for the business objects. At a minimum, this means that both client and server should run under the same user identity (impersonation) and the same culture (localization). The business developer should be able to pass other arbitrary information between client and server as well.
In addition to context information, exception data from the server should flow back to the client with full fidelity. This is important both for debugging and at runtime. The UI often needs to know the specifics about any server-side exceptions in order to properly notify the user about what happened and then to take appropriate steps.
Figure 15-3 shows the objects used to flow data from the client to the server and back again to the client.
The arrows pointing off the left side of the diagram indicate communication with the calling code, typically the business object's factory methods. A business object calls `Csla.DataPortal` to invoke one of the data portal operations. `Csla.DataPortal` calls `Csla.Server.DataPortal` (using the channel adapter classes not shown here), passing a `DataPortalContext` object along with the actual client request.
The `DataPortalContext` object contains several types of context data, as listed in Table 15-5.
Table 15.5. Context Data Contained Within DataPortalContext
Context Data | Description
---|---
`GlobalContext` | Collection of context data that flows from client to server and then from server back to client; changes on either end are carried across the network
`ClientContext` | Collection of context data that flows from client to server; changes on the server are not carried back to the client
`Principal` | Client's `IPrincipal` object, which flows to the server when using custom authentication
`IsRemotePortal` | A flag indicating whether the server-side data portal is running remotely or locally
`ClientCulture` | Client thread's culture, which flows from the client to the server
`ClientUICulture` | Client thread's UI culture, which flows from the client to the server
The `GlobalContext` and `ClientContext` collections are exposed to both client and server code through `static` properties on the `Csla.ApplicationContext` class. All business object and UI code will use properties on the `ApplicationContext` class to access any context data. The `LocalContext` property of `ApplicationContext` is not transported through the data portal, because it is local to each individual machine.
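A brief sketch of how this looks in practice (the key names are hypothetical examples):

```csharp
// client code, before a data portal call
Csla.ApplicationContext.ClientContext["Department"] = "Sales";

// server code, inside a DataPortal_XYZ method: the value set on
// the client is available here
var dept = (string)Csla.ApplicationContext.ClientContext["Department"];

// server code can add to GlobalContext; because GlobalContext is
// bidirectional, this change flows back to the client
Csla.ApplicationContext.GlobalContext["ServerLog"] = "fetch completed";
```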
When a call is made from the client to the server, the client's context data must flow to the server; the data portal does this by using the `DataPortalContext` object.
The `Csla.Server.DataPortal` object accepts the `DataPortalContext` object and uses its data to ensure that the server's context is set up properly before invoking the actual business object code. This means that by the time the business developer's code is running on the server, the server's `IPrincipal`, culture, and `ApplicationContext` are set to match those on the client.
The exception to this is when using Windows integrated (AD) security. In that case, you must configure the server technology (such as IIS) to use Windows impersonation, or the server will not impersonate the user identity from the client.
There are two possible outcomes of the server-side processing: either it succeeds or it throws an exception.
If the call to the business object succeeds, `Csla.Server.DataPortal` returns a `DataPortalResult` object back to `Csla.DataPortal` on the client. The `DataPortalResult` object contains the information listed in Table 15-6.
`Csla.DataPortal` puts the `GlobalContext` data from `DataPortalResult` into the client's `Csla.ApplicationContext`, thus ensuring that any changes to that collection on the server are reflected on the client. It then returns the `ReturnObject` value as the result of the call itself.
Table 15.6. Context Data Contained Within DataPortalResult
Context Data | Description
---|---
`GlobalContext` | Collection of context data that flows from client to server and then from server back to client; changes on either end are carried across the network
`ReturnObject` | The business object being returned from the server to the client as a result of the data portal operation
You may use the bidirectional transfer of `GlobalContext` data to generate a consolidated list of debugging or logging information from the client, to the server, and back again to the client.

On the other hand, if an exception occurs on the server, either within the data portal itself or, more likely, within the business object's code, that exception must be returned to the client. Either the business object or the UI on the client can use the exception information to deal with the exception in an appropriate manner.
In some cases, it can be useful to know the exact state of the business object graph on the server when the exception occurred. To this end, the object graph is also returned in the case of an exception. Keep in mind that it is returned as it was at the time of the exception, so the objects are often in an indeterminate state.
If an exception occurs on the server, `Csla.Server.DataPortal` catches the exception and wraps it as an `InnerException` within a `Csla.Server.DataPortalException` object. This `DataPortalException` object contains the information listed in Table 15-7.
Table 15.7. Context Data Contained Within Csla.Server.DataPortalException
Context Data | Description
---|---
`InnerException` | The actual server-side exception (which may itself contain an `InnerException`)
`StackTrace` | The stack trace information for the server-side exception
`Result` | A `DataPortalResult` object containing the business object as it was on the server when the exception occurred
Again, `Csla.DataPortal` uses the information in the exception object to restore the `ApplicationContext` object's `GlobalContext`. Then it throws a `Csla.DataPortalException`, which is initialized with the data from the server.
The `Csla.DataPortalException` object is designed for use by business object or UI code. It provides access to the business object as it was on the server at the time of the exception. It also overrides the `StackTrace` property to append the server-side stack trace to the client-side stack trace, so the result shows the entire stack trace from where the exception occurred on the server all the way back to the client code.
`Csla.DataPortal` always throws a `Csla.DataPortalException` in case of failure. You must use either its `InnerException` or `BusinessException` properties, or the `GetBaseException()` method, to retrieve the original exception that occurred.
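In client code, that typically looks like this sketch (the `CustomerEdit.GetCustomer()` factory method is a hypothetical example):

```csharp
try
{
  // the factory method invokes DataPortal.Fetch() internally
  var cust = CustomerEdit.GetCustomer(123);
}
catch (Csla.DataPortalException ex)
{
  // unwrap the original server-side exception before reporting it
  Exception original = ex.BusinessException ?? ex.GetBaseException();
  Console.WriteLine(original.Message);
}
```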
In addition to `Csla.DataPortal` and `Csla.Server.DataPortal`, the types in Table 15-8 are required to implement the context behaviors discussed previously.
Table 15.8. Types Required to Implement Context Passing and Location Transparency
Type | Namespace | Description
---|---|---
`ApplicationContext` | `Csla` | Provides access to the `ClientContext`, `GlobalContext`, and `LocalContext` collections for both client and server code
`DataPortalException` | `Csla` | Exception thrown by the data portal in case of any server-side exceptions; the server-side exception is available as an `InnerException`
`DataPortalContext` | `Csla.Server` | Transfers context data from the client to the server on every data portal operation
`DataPortalResult` | `Csla.Server` | Transfers context and result data from the server to the client on every successful data portal operation
`DataPortalException` | `Csla.Server` | Transfers context and exception data from the server to the client on every unsuccessful data portal operation
This infrastructure ensures that business code running on the server will share the same key context data as the client. It also ensures that the client's `IPrincipal` object is transferred to the server when the application is using custom authentication. This is important information, not only for basic impersonation, but also for enabling authorization on the server.
The data portal ensures that the application server will use the same principal object as the client when using custom authentication. And when using Windows AD authentication, you can configure your application server to use impersonation, so it runs under the same Windows identity as the client code.
Either way, because the client principal is available on the server, all the authorization features described in Chapter 12 are available to the business developer on both the client and the server. This means the per-property and object-level authorization rules associated with business objects are enforced whether the code is running on the client or server.
You should be aware that Windows integrated security has limits on how far it can impersonate a user. Normally, impersonation can only occur across one network hop, which would be from a client workstation to the application server. Using advanced Windows network configuration options, it may be possible to extend impersonation beyond one hop. Advanced Windows network configuration is outside the scope of this book.
However, some applications may require a higher-level authorization check to decide whether to allow a client request to be processed on the server at all. This check would occur before any attempt is made to invoke the data access methods in the business or factory object.
The data portal supports this concept by allowing a business developer to create an object that implements the `IAuthorizeDataPortal` interface from the `Csla.Server` namespace. If you want to use this feature, the application server's config file needs to include an entry in the `appSettings` element, specifying the assembly-qualified name of this class, for example:

```xml
<add key="CslaAuthorizationProvider"
     value="NamespaceName.TypeName, AssemblyName" />
```
The `IAuthorizeDataPortal` interface requires that the class implement a single method: `Authorize()`. If the client call should not be allowed, this method should throw an exception; otherwise, the client call will be processed normally.
The data portal creates exactly one instance of the specified type. Because most application servers are multithreaded, you must ensure that the code you write in the `Authorize()` method is thread-safe.
The `Authorize()` method is invoked after the data portal has restored the client's principal (if using custom authentication), `ClientContext`, and `GlobalContext` onto the server's thread. The method is passed a request object containing the values listed in Table 15-9.
Table 15.9. Values Provided to the Authorize Method

Property | Description
---|---
ObjectType | Type of business object to be affected by the client request
RequestObject | Criteria object or business object provided by the client
Operation | Data portal operation requested by the client; member of the DataPortalOperations enum
Your authorization class would look like this:
public class CustomAuthorizer : Csla.Server.IAuthorizeDataPortal
{
  public void Authorize(AuthorizationRequest clientRequest)
  {
    // perform authorization here
    // throw exception to stop processing
  }
}
This technique allows high-level control over client requests. If a request is allowed to continue processing, all the normal authorization behaviors described in Chapter 12 continue to apply. This is an optional feature, and by default, all data portal requests are allowed.
Thus far, I've been discussing the data portal in terms of synchronous behaviors. Each call to a static method of the Csla.DataPortal class is a synchronous operation.
However, the DataPortal<T> class provides asynchronous versions of the static methods on the non-generic DataPortal class. When performing asynchronous operations, it is necessary to have a consistent object through which the completion callback can arrive. This means that you must create an instance of DataPortal<T> to make asynchronous calls, and each instance can have only one outstanding asynchronous call running at a time.

Using DataPortal<T> is relatively straightforward.
var dp = new DataPortal<CustomerEdit>();
dp.BeginFetch(new SingleCriteria<CustomerEdit, int>(123),
  (s, e) =>
  {
    // process result here
  });
You can use a lambda (as shown), an anonymous delegate, or a delegate reference to another method to implement the callback. Or you can set up an event handler for the FetchCompleted event on the DataPortal<T> object before starting the asynchronous call. The important thing is that the developer remembers that the callback will occur when the asynchronous operation is complete, and that BeginFetch() is a nonblocking call.
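As a sketch, the event-handler form of the same call might look like this. The FetchCompleted event is described above; the Object and Error property names on its event arguments are assumptions here, not confirmed by the text.

```csharp
// Sketch only: subscribe to FetchCompleted before starting the call.
// Property names on the event args (Object, Error) are assumed.
var dp = new DataPortal<CustomerEdit>();
dp.FetchCompleted += (s, e) =>
  {
    if (e.Error != null)
    {
      // handle the exception here
    }
    else
    {
      _customer = e.Object; // use the fetched object
    }
  };
dp.BeginFetch(new SingleCriteria<CustomerEdit, int>(123));
```

Either way, the code after BeginFetch() keeps running; only the callback sees the result.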
The DataPortal<T> class is really just a wrapper around the Csla.DataPortal class, designed to make normal synchronous data portal calls but from a background thread. To create the background threads, the DataPortal<T> methods use the standard .NET BackgroundWorker component.

By using the BackgroundWorker, the data portal gains some important benefits. All the background operations are executed on threads from the .NET thread pool, and all that work is abstracted by the BackgroundWorker component. More importantly, the BackgroundWorker component automatically marshals its asynchronous callback events onto the UI thread when running in WPF or Windows Forms.
This means that the event indicating that the background task is complete is running automatically on the UI thread in those environments, which dramatically simplifies related UI code. Even better, this reduces bugs that are commonly encountered where the UI developer forgets to marshal to the UI thread before interacting with visual controls.
Prior to CSLA .NET 3.6, all data portal calls were routed to methods implemented in the business class itself. These methods are called the DataPortal_XYZ methods—for example, DataPortal_Create().

CSLA .NET 3.6 introduced the concept of an object factory as an option a business developer can use instead of the traditional data portal behavior. In this case, the developer puts an ObjectFactory attribute on the business class, which instructs the data portal to use the specified object factory to handle all persistence operations related to the business object type—for instance:
[ObjectFactory("MyProject.CustomerFactory, MyProject")]
public class CustomerEdit : BusinessBase<CustomerEdit>
This instructs the data portal to create an instance of a CustomerFactory class, using the string parameter as an assembly-qualified type name for the factory. The CustomerFactory object must implement methods to create, fetch, update, and delete CustomerEdit business objects. By default, these methods are named Create(), Fetch(), Update(), and Delete().
The factory object may choose to inherit from ObjectFactory in the Csla.Server namespace. The ObjectFactory class includes protected methods that make it easier to implement typical object factory behaviors. Here's the shell of an object factory:
public class CustomerFactory : ObjectFactory
{
  public object Create()
  {
    // create object and load with defaults
    MarkNew(result);
    return result;
  }

  public object Fetch(SingleCriteria<CustomerEdit, int> criteria)
  {
    // create object and load with data
    MarkOld(result);
    return result;
  }

  public object Update(object obj)
  {
    // insert/update/delete object and its child objects
    MarkOld(obj); // make sure to mark all child objects as old too
    return obj;
  }

  public void Delete(SingleCriteria<CustomerEdit, int> criteria)
  {
    // delete object data based on criteria
  }
}
The MarkNew() and MarkOld() methods manipulate the status of the business object, as discussed in Chapter 8. When using the object factory model, the object factory assumes all responsibility for creating and managing the business object and its state. This includes not only the root object, but all child objects as well.
While factory objects require more work to implement than the traditional data portal technique, they enable the use of some external data access technologies that can't be invoked from inside an already existing business object instance. For example, you may use a data access tool that creates business object instances directly. Obviously, you can't use such a technology from inside an already existing business object, if that technology will be creating the business object instance!
At this point, you should have a good understanding of the various areas of functionality provided by the data portal, and the various classes and types used to implement that functionality. The rest of the chapter will walk through those classes. As with the previous chapters, not all code is shown in this chapter, so you'll want to get the code download for the book to follow along. You can download the code from the Source Code/Download area of the Apress website (www.apress.com/book/view/1430210192) or from www.lhotka.net/cslanet/download.aspx.
In order to support persistence—the ability to save and restore from the database—objects need to implement methods that the UI can call. They also need to implement methods that can be called by the data portal on the server.
Figure 15-4 shows the basic process flow when the UI code wants to get a new business object or load a business object from the database.
Following the class-in-charge model from Chapter 2, you can see that the UI code calls a factory method on the business class. The factory method then calls the appropriate method on the Csla.DataPortal class to create or retrieve the business object. The Csla.Server.DataPortal object then creates the object and invokes the appropriate data access method (DataPortal_Create() or DataPortal_Fetch()). The populated business object is returned to the UI, and the application can then use it as needed.
Immediate deletion follows the same basic process, with the exception that no business object is returned to the UI as a result.
The BusinessBase and BusinessListBase classes implement a Save() method to make the update process work, as illustrated by Figure 15-5. The process is almost identical to creating or loading an object, except that the UI starts off by calling the Save() method on the object to be saved, rather than invoking a factory method on the business class.
Chapter 2 discussed the class-in-charge model and factory methods. When the UI needs to create or retrieve a business object, it will call a factory method that abstracts that behavior. You can implement factory methods in any class you choose, as either instance or static methods. I prefer to implement them as static methods in the business class for the object they create or retrieve, as I think it makes them easier to find. Some people prefer to create a separate factory class with instance methods.
This means a CustomerEdit class will include static factory methods such as GetCustomer() and NewCustomer(), both of which return a CustomerEdit object as a result. It may also implement a DeleteCustomer() method, which would have no return value. The implementation of these methods would typically look like this:
public static CustomerEdit NewCustomer()
{
  return DataPortal.Create<CustomerEdit>();
}

public static CustomerEdit GetCustomer(int id)
{
  return DataPortal.Fetch<CustomerEdit>(
    new SingleCriteria<CustomerEdit, int>(id));
}

public static void DeleteCustomer(int id)
{
  DataPortal.Delete(new SingleCriteria<CustomerEdit, int>(id));
}
These are typical examples of factory methods implemented for most root objects.
Although I won't use the following technique in the rest of the book, you can create a factory class with instance methods if you prefer:
public class CustomerFactory
{
  public virtual CustomerEdit NewCustomer()
  {
    return DataPortal.Create<CustomerEdit>();
  }

  public virtual CustomerEdit GetCustomer(int id)
  {
    return DataPortal.Fetch<CustomerEdit>(
      new SingleCriteria<CustomerEdit, int>(id));
  }

  public virtual void DeleteCustomer(int id)
  {
    DataPortal.Delete(new SingleCriteria<CustomerEdit, int>(id));
  }
}
The methods are virtual, because the primary motivation for using a factory class like this is to allow subclassing of the factory to customize its behavior.

I'll be using static factory methods throughout the rest of the book.
The factory methods cover creating, retrieving, and deleting objects. This leaves inserting and updating (and deferred deletion). In both of these cases, the object already exists in memory, so the Save() and BeginSave() methods are instance methods on any editable object.

The Save() method allows synchronous save operations and is the simplest to use.
_customer = _customer.Save();
The BeginSave() method allows asynchronous save operations and is harder to use, because your code must work in an asynchronous manner, including providing a callback handler that is invoked when the operation is complete.
// disable UI elements that can't be used during save
_customer.BeginSave(SaveComplete);

// ...

private void SaveComplete(object sender, SavedEventArgs e)
{
  if (e.Error != null)
  {
    // handle exception here
  }
  else
  {
    _customer = (CustomerEdit)e.NewObject;
    // update the UI or other code to use the result
    // re-enable any disabled UI elements
  }
}
Notice that the address of SaveComplete() is provided as a parameter to the BeginSave() method. This means SaveComplete() is invoked when the asynchronous operation completes. In WPF and Windows Forms, this callback occurs on the UI thread.
Both Save() and BeginSave() ultimately do the same thing, in that they insert, update, or delete the editable root business object.

One Save() method can be used to support both inserting and updating an object's data because all editable objects have an IsNew property. Recall that the definition of a "new" object is that the object's primary key value doesn't exist in the database. This means that if IsNew is true, then Save() causes an insert operation; otherwise, Save() causes an update operation.
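On the server, this distinction typically surfaces as separate insert and update data access methods in the business class. A minimal sketch follows; the method bodies are placeholders, and whether you write these as overrides or private methods depends on the base class version you're using.

```csharp
// Sketch only: the data portal calls DataPortal_Insert() when IsNew is
// true, and DataPortal_Update() otherwise. Bodies are illustrative.
protected override void DataPortal_Insert()
{
  // insert the object's data into the database
  // (hypothetical data access code goes here)
}

protected override void DataPortal_Update()
{
  // update the object's existing data in the database
  // (hypothetical data access code goes here)
}
```

The UI never sees this branching; it simply calls Save() and the data portal routes to the right method.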
BusinessBase and BusinessListBase are the base classes for all editable business objects, and both of these base classes implement Save() and BeginSave() methods.
Synchronous Save Methods
Here are the two overloads for Save() in BusinessBase:
public virtual T Save()
{
T result;
if (this.IsChild)
throw new NotSupportedException(
Resources.NoSaveChildException);
if (EditLevel > 0)
throw new Validation.ValidationException(
Resources.NoSaveEditingException);
if (!IsValid && !IsDeleted)
throw new Validation.ValidationException(
Resources.NoSaveInvalidException);
if (IsDirty)
result = (T)DataPortal.Update(this);
else
result = (T)this;
OnSaved(result, null);
return result;
}
public T Save(bool forceUpdate)
{
if (forceUpdate && IsNew)
{
// mark the object as old - which makes it
// not dirty
MarkOld();
// now mark the object as dirty so it can save
MarkDirty(true);
}
return this.Save();
}
The first Save() method is the primary one that does the real work. It implements a set of common rules that make sense for most objects. Specifically, it does the following:
Ensures that the object is not a child (since child objects must be saved as part of their parent)
Makes sure that the object isn't currently being edited (a check primarily intended to assist with debugging)
Checks to see if the object is valid; invalid objects can't be saved
Checks to make sure the object is dirty; there's no sense saving unchanged data into the database
Notice that the method is virtual, so if a business developer needs a different set of rules for an object, it is possible to override this method and implement something else.
The second Save() method exists to support stateless web applications and scenarios where business objects are used to implement XML services. It allows a service author to create a new instance of the object, load it with data, and then force the object to do an update (rather than an insert) operation. The reason for this is that when creating a stateless web page or service that updates data, the caller typically passes all the data needed to update the database; there's no need to retrieve the existing data just to overwrite it. This optional overload of Save() enables those scenarios.
This is done by first calling MarkOld() to set IsNew to false, and then calling MarkDirty() to set IsDirty to true.
In either case, it is the DataPortal.Update() call that ultimately triggers the data portal infrastructure to move the object to the application server so it can interact with the database.

It is important to notice that the Save() method returns an instance of the business object. Recall that .NET doesn't actually move objects across the network; rather, it makes copies of the objects. The DataPortal.Update() call causes .NET to copy this object to the server so the copy can update itself into the database. That process could change the state of the object (especially if you are using primary keys assigned by the database or timestamps for concurrency). The resulting object is then copied back to the client and returned as a result of the Save() method.
It is critical that the UI updates all its references to use the new object returned by Save(). Failure to do this means that the UI will be displaying and editing old data from the old version of the object. Do not call Save() like this:

_customer.Save();

Do call Save() like this:

_customer = _customer.Save();
The same basic code can be found in BusinessListBase as well.
Asynchronous BeginSave Methods
There are four overloads of BeginSave(), because it can optionally accept the forceUpdate parameter like Save(), and it can also optionally accept a delegate reference to a method that will be invoked when the asynchronous operation is complete. All the overloads invoke the following method:
public virtual void BeginSave(EventHandler<SavedEventArgs> handler)
{
if (this.IsChild)
{
NotSupportedException error = new NotSupportedException(
Resources.NoSaveChildException);
OnSaved(null, error);
if (handler != null)
handler(this, new SavedEventArgs(null, error));
}
else if (EditLevel > 0)
{
Validation.ValidationException error = new Validation.ValidationException(
Resources.NoSaveEditingException);
OnSaved(null, error);
if (handler != null)
handler(this, new SavedEventArgs(null, error));
}
else if (!IsValid && !IsDeleted)
{
Validation.ValidationException error = new Validation.ValidationException(
Resources.NoSaveInvalidException);
OnSaved(null, error);
if (handler != null)
handler(this, new SavedEventArgs(null, error));
}
else
{
if (IsDirty)
{
DataPortal.BeginUpdate<T>(this, (o, e) =>
{
T result = e.Object;
OnSaved(result, e.Error);
if (handler != null)
handler(result, new SavedEventArgs(result, e.Error));
});
}
else
{
OnSaved((T)this, null);
if (handler != null)
handler(this, new SavedEventArgs(this, null));
}
}
}
Like the Save() method, BeginSave() performs a series of checks to see if the object should really be saved. However, rather than throwing exceptions immediately, the exceptions are routed to the callback handler and to any code that handles the business object's Saved event.
I chose to do this because it allows the business developer to put all her exception-handling code into the callback handler. If BeginSave() actually threw exceptions, the business or UI developer would need to handle exceptions both when calling BeginSave() and in the callback handler, because any asynchronous exceptions would always surface in the callback handler. The approach taken by this implementation allows the business or UI developer to do all the work in one location.
The overall structure and process of BeginSave() is the same as for Save(), except that the BeginUpdate() method is called on the data portal instead of Update(). A lambda expression is used to handle the asynchronous callback from the data portal method.
DataPortal.BeginUpdate<T>(this, (o, e) =>
{
  T result = e.Object;
  OnSaved(result, e.Error);
  if (handler != null)
    handler(result, new SavedEventArgs(result, e.Error));
});
When the asynchronous data portal method completes, the OnSaved() method is called to raise the Saved event on the business object, and the callback handler is also invoked if it isn't null.
Editable objects may contain child objects—either editable child objects or editable child collections (which in turn contain editable child objects). If managed backing fields are used (see Chapter 7) to maintain the child object references, then the field manager can be used to simplify the process of saving the child objects.
The FieldDataManager class implements an UpdateChildren() method that can be called by an editable object's insert or update data access method. The UpdateChildren() method loops through all the managed backing fields and uses the data portal to update all child objects. Here's the code:
public void UpdateChildren(params object[] parameters)
{
foreach (var item in _fieldData)
{
if (item != null)
{
object obj = item.Value;
if (obj is IEditableBusinessObject || obj is IEditableCollection)
Csla.DataPortal.UpdateChild(obj, parameters);
}
}
}
The data portal's UpdateChild() method is invoked on each child object, which causes the data portal to invoke the appropriate Child_XYZ method on each child object. I'll discuss this feature of the data portal later in this chapter.
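To make the flow concrete, a root object's update method might delegate child persistence like this. This is a sketch; the data access details are placeholders.

```csharp
// Sketch: after saving its own data, the root object asks the field
// manager to cascade the save to all managed child objects.
protected override void DataPortal_Update()
{
  // update this object's own data here
  // (hypothetical data access code goes here)

  // then update all child objects held in managed backing fields
  FieldManager.UpdateChildren();
}
```

Each child object (or child collection) then receives the appropriate Child_XYZ call from the data portal.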
Editable collections contain editable child objects. Updating child objects in an editable collection must be done in a specific manner, because the collection contains a list of deleted items as well as a list of active (nondeleted) items.
The deleted item list must be updated first, so those items are deleted before any active items can be updated. This is because it is quite possible that an active item will replace one of the deleted items, so updating the active list first could result in primary key collisions in the database.
Rather than having the business developer write the code to update all the child objects in every collection, BusinessListBase implements a Child_Update() method that does the work. Here's that method:
[EditorBrowsable(EditorBrowsableState.Advanced)]
protected virtual void Child_Update(params object[] parameters)
{
var oldRLCE = this.RaiseListChangedEvents;
this.RaiseListChangedEvents = false;
try
{
foreach (var child in DeletedList)
DataPortal.UpdateChild(child, parameters);
DeletedList.Clear();
foreach (var child in this)
DataPortal.UpdateChild(child, parameters);
}
finally
{
this.RaiseListChangedEvents = oldRLCE;
}
}
The RaiseListChangedEvents property is set to false to prevent the collection from raising events during the update process. This helps avoid both performance issues and UI "flickering" as the update occurs (otherwise, these events may cause data binding to refresh the UI as each item is updated).
Then the items in DeletedList are updated, which really means they are all deleted. This is handled by the data portal's UpdateChild() method, which invokes each child object's Child_DeleteSelf() method. Because they've been deleted, DeletedList is then cleared so it reflects the state of the underlying data store.
Now the active items in the collection are updated, which means they are inserted or updated based on the state of each child object. The data portal's UpdateChild() method handles those details, calling the appropriate Child_Insert() or Child_Update() method.
Finally, the RaiseListChangedEvents property is restored to its previous value, so the collection can continue to be used normally.
A business developer may explicitly call Child_Update() in an editable root collection, as I discussed earlier in this chapter. Normally, a business developer needs to write no code for an editable child collection, because the data portal will automatically call the Child_Update() method when the parent object calls FieldManager.UpdateChildren() to update its children (including the editable child collection).
This completes the features provided by the business object base classes that are required for the data portal to function. Before covering the data portal implementation in detail, I need to discuss how the data portal uses (and avoids) reflection.
The data portal dynamically invokes methods on business and factory objects. Invoking methods dynamically requires the use of reflection, and that can cause performance issues when used frequently. Since the data portal can be used to create, retrieve, update, and delete child objects, it is quite possible for hundreds of methods to be invoked to save just one object graph.
The .NET Framework supports the concept of dynamic method invocation, where reflection is used just once to create a dynamic delegate reference to a method. That delegate is used to actually invoke the method, resulting in performance nearly as good as a strongly typed call to the method.
Of course, that dynamic delegate must be stored somewhere, so we need to cache the delegates and retrieve them from the cache. The end result is that using dynamic delegates is still slower than strongly typed method calls, but it's faster than using reflection on each call.
The Csla.Reflection namespace includes classes used by the data portal (and other parts of CSLA .NET) to dynamically invoke methods using dynamic delegates. Table 15-10 lists the types in that namespace.
Table 15.10. Types in the Csla.Reflection Namespace

Type | Description
---|---
CallMethodException | Thrown when a method can't be invoked dynamically
DynamicMethodHandle | Maintains a reference to a dynamic method delegate, along with related metadata
DynamicMethodHandlerFactory | Creates a dynamic method delegate
LateBoundObject | Provides a wrapper around any object to simplify the invocation of dynamic methods on that object
MethodCacheKey | Defines the key information for a dynamic method, so the DynamicMethodHandle can be stored and retrieved as necessary
MethodCaller | Provides an abstract API for use when dynamically calling methods on objects
Creating and using dynamic method delegates is complex and is outside the scope of this book. You should realize, however, that the MethodCaller and LateBoundObject classes are designed as the public entry points to this functionality, and they are used by the data portal implementation.

The heart of the subsystem is the MethodCaller class. This class exposes the methods listed in Table 15-11; these methods enable the dynamic invocation of methods.
Table 15.11. Public Methods on MethodCaller

Method | Description
---|---
CreateInstance() | Creates an instance of an object
CallMethodIfImplemented() | Invokes a method by name, if that method exists on the target object
CallMethod() | Invokes a method by name, throwing an exception if that method doesn't exist
GetMethod() | Gets a MethodInfo for a named method
GetProperty() | Gets a PropertyInfo for a named property
GetParameterTypes() | Gets a list of type values describing a method's parameters
GetObjectType() | Returns a business object type based on a supplied criteria object, taking into account the ICriteria interface and nested criteria classes
The MethodCaller class also includes the code to cache dynamic method delegates, and that technique is used automatically when CallMethod() and CallMethodIfImplemented() are used. In other words, MethodCaller uses reflection only to create dynamic method delegates, and it uses those delegates to make all method calls to the target objects. The method delegates are cached for the lifetime of the AppDomain, so they're typically created only once each time an application is run.
The GetObjectType Method
I do want to explore one method in a little more detail. The GetObjectType() method is designed specifically to support the data portal, and it's important to understand how it uses criteria objects to identify the business object type. Both Csla.DataPortal and Csla.Server.DataPortal use this method to determine the type of business object involved in a data portal request. This method uses the criteria object supplied by the factory method in the business class to find the type of the business object itself.
This method supports the two options discussed in Chapter 5: where the criteria class is nested within the business class, and where the criteria object inherits from Csla.CriteriaBase (and thus implements ICriteria).
public static Type GetObjectType(object criteria)
{
var strong = criteria as ICriteria;
if (strong != null)
{
// get the type of the actual business object
// from the ICriteria
return strong.ObjectType;
}
else if (criteria != null)
{
// get the type of the actual business object
// based on the nested class scheme in the book
return criteria.GetType().DeclaringType;
}
else return null;
}
If the criteria object implements ICriteria, then the code simply casts the object to type ICriteria and retrieves the business object type from the ObjectType property.
With a nested criteria class, the code gets the type of the criteria object and then returns the DeclaringType value from the Type object. The DeclaringType property returns the type of the class within which the criteria class is nested.
The LateBoundObject class is designed to act as a wrapper around any .NET object, making it easy to dynamically invoke methods on that object. It is used like this:

var lateBound = new LateBoundObject<CustomerEdit>(_customer);
lateBound.CallMethod("SomeMethod", 123, "abc");
Behind the scenes, the wrapper object simply delegates all calls to MethodCaller, and you could choose to use MethodCaller directly. The reason for using LateBoundObject is to write code that is easier to read.
Now let's move on and implement the data portal itself, feature by feature. The data portal is designed to provide a set of core features, including
Implementing a channel adapter
Supporting distributed transactional technologies
Implementing a message router
Transferring context and providing location transparency
The remainder of the chapter will walk through each functional area in turn, discussing the implementation of the classes supporting the concept.
The data portal is exposed to the business developer through the Csla.DataPortal class. This class implements a set of static methods to make it as easy as possible for the business developer to create, retrieve, update, or delete objects. All the channel adapter behaviors are hidden behind the Csla.DataPortal class.
The data portal routes client calls to the server based on the client application's configuration settings in its config file. If the configuration is set to use an actual application server, the client call is sent across the network using the channel adapter pattern. However, there are cases in which the business developer knows that there's no need to send the call across the network—even if the application is configured that way.
The most common example of this is the creation of new business objects. The DataPortal.Create() method is called to create a new object, and it in turn triggers a call to the business object's DataPortal_Create() method or a factory object's Create() method. Either way, the target method loads the business object with default values from the database.
But what if an object doesn't need to load defaults from the database? In that case, there would be no reason to go across the network at all, and it would be nice to short-circuit the call so that particular object's create method would run on the client.
This is the purpose behind the RunLocal attribute. A business developer can mark a data access method with this attribute to tell the data portal to force the call to run on the client, regardless of how the application is configured in general. Such a business method would look like this:
[RunLocal]
private void DataPortal_Create(Criteria criteria)
{
  // set default values here
}
The data portal always invokes this DataPortal_Create() method without first crossing a network boundary. So if DataPortal.Create() were called on the client, this method would run on the client; and if DataPortal.Create() were called on the application server, the DataPortal_Create() method would run on the server.
When using a factory object, the attribute is applied to the factory method.
public class CustomerFactory : ObjectFactory
{
  [RunLocal]
  public void Create()
  {
    // create object and
    // set default values here
  }
}
If the assembly containing the factory has been deployed to the client, the data portal will find this attribute and invoke the Create() method on the client. If you don't deploy your factory assembly to the client, the code obviously must run on the server, so the data portal will ignore the RunLocal attribute.
The primary entry point for the entire data portal infrastructure is the Csla.DataPortal class. Business developers use the methods on this class to trigger all the data portal behaviors. This class is involved in both the channel adapter implementation and in handling context information. This section will focus on the channel adapter code in the class; I'll discuss the context-handling code later in the chapter.
The Csla.DataPortal class exposes five primary methods, described in Table 15-12, that can be called by business logic to create, retrieve, update, or delete root objects, or to execute command objects.
Table 15.12. Methods Exposed by the Data Portal for Root Objects

Method | Description
---|---
Create() | Calls DataPortal_Create() to create a new root object
Fetch() | Calls DataPortal_Fetch() to retrieve an existing root object
Update() | Calls DataPortal_Insert(), DataPortal_Update(), or DataPortal_DeleteSelf() to save a root object
Delete() | Calls DataPortal_Delete() to immediately delete a root object
Execute() | Calls DataPortal_Execute() to execute a command object
The data portal also includes methods used to create, retrieve, update, or delete child objects. Table 15-13 lists these methods.
Table 15.13. Methods Exposed by the Data Portal for Child Objects

Method | Description
---|---
CreateChild() | Calls Child_Create() to create a new child object
FetchChild() | Calls Child_Fetch() to retrieve a child object's data
UpdateChild() | Calls ChildDataPortal, which then invokes the appropriate Child_Insert(), Child_Update(), or Child_DeleteSelf() method
The class also raises two static events that the business developer or UI developer can handle. The DataPortalInvoke event is raised before the server is called, and the DataPortalInvokeComplete event is raised after the server call has returned.
Behind the scenes, each DataPortal method determines the network protocol to be used when contacting the server in order to delegate the call to Csla.Server.DataPortal. Of course, Csla.Server.DataPortal ultimately delegates the call to Csla.Server.SimpleDataPortal and then to the business object on the server.
The Csla.DataPortal class is designed to expose static methods. As such, it is a static class.
public static class DataPortal
{
}
This ensures that instances of the class won't be created.
Each of the five data portal methods works in a similar manner. I'm not going to walk through all five; instead, I'll discuss the Fetch() method in some detail, and I'll briefly cover the Update() method (because it is somewhat unique). First, though, you should be aware of the events raised by the DataPortal class on each call, and of how the data portal connects with the server.
DataPortal Events
The DataPortal class defines two events: DataPortalInvoke and DataPortalInvokeComplete.
public static event Action<DataPortalEventArgs> DataPortalInvoke;
public static event Action<DataPortalEventArgs> DataPortalInvokeComplete;
private static void OnDataPortalInvoke(DataPortalEventArgs e)
{
Action<DataPortalEventArgs> action = DataPortalInvoke;
if (action != null)
action(e);
}
private static void OnDataPortalInvokeComplete(DataPortalEventArgs e)
{
Action<DataPortalEventArgs> action = DataPortalInvokeComplete;
if (action != null)
action(e);
}
These follow the standard approach of providing helper methods to raise the events.
Also notice the use of the Action<T> generic delegate. The .NET Framework provides this as a helper when declaring events that have a custom EventArgs subclass as their single parameter. A corresponding EventHandler<T> delegate helps when declaring events that follow the standard sender and EventArgs pattern.
A DataPortalEventArgs object is provided as a parameter to these events. This object includes information of value when handling the event, as described in Table 15-14.
Table 15-14. Properties of the DataPortalEventArgs Class
Property | Description |
---|---|
DataPortalContext | The data portal context passed to the server |
Operation | The data portal operation requested by the caller |
Exception | Any exception that occurred during processing |
ObjectType | The type of business object |
This information can be used by code handling the event to better understand all the information being passed to the server as part of the client message.
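For example, client-side code could log every data portal call by handling these static events. Here's a sketch, assuming the property names listed in Table 15-14; because the events are declared as Action<DataPortalEventArgs>, the handlers receive only the event args, not a sender parameter:

DataPortal.DataPortalInvoke += (DataPortalEventArgs e) =>
  Console.WriteLine("Calling {0} for {1}",
    e.Operation, e.ObjectType.Name);
DataPortal.DataPortalInvokeComplete += (DataPortalEventArgs e) =>
  Console.WriteLine("Completed {0} for {1}",
    e.Operation, e.ObjectType.Name);

A real application would typically write to a logging framework rather than the console.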
Creating the Proxy Object
One of the most important functions of Csla.DataPortal is to determine the appropriate network protocol (if any) to be used when interacting with Csla.Server.DataPortal. Each protocol, or data portal channel, is managed by a proxy object that implements the IDataPortalProxy interface from the Csla.DataPortalClient namespace. This interface ensures that all proxy classes implement the methods required by Csla.DataPortal.
The proxy object to be used is defined in the application's configuration file. That's the web.config file for ASP.NET applications, and myprogram.exe.config for Windows applications (where myprogram is the name of your program). Within Visual Studio, a Windows configuration file is named app.config, so I'll refer to them as app.config files from here forward.
Config files can include an <appSettings> section to store application settings, and it is in this section that the CSLA .NET configuration settings are located. The following shows how this section would look for an application set to use WCF:
<appSettings>
  <add key="CslaDataPortalProxy"
       value="Csla.DataPortalClient.WcfProxy, Csla"/>
</appSettings>
The CslaDataPortalProxy key defines the proxy class that should be used by the data portal. Different proxy objects may require or support other configuration data. In this example, you must also configure WCF itself by including a top-level system.serviceModel element in your app.config file—for example:
<system.serviceModel>
<client>
<endpoint name="WcfDataPortal"
address="http://serverName/virtualRoot/WcfPortal.svc"
binding="wsHttpBinding"
contract="Csla.Server.Hosts.IWcfPortal" />
</client>
</system.serviceModel>
Normally, only the address line needs to be changed to properly specify the URL of the application server.
The GetDataPortalProxy() method uses this information to create an instance of the correct proxy object.
private static Type _proxyType;
private static DataPortalClient.IDataPortalProxy
GetDataPortalProxy(bool forceLocal)
{
if (forceLocal)
{
return new DataPortalClient.LocalProxy();
}
else
{
Csla.DataPortalClient.IDataPortalProxy portal;
string proxyTypeName = ApplicationContext.DataPortalProxy;
if (proxyTypeName == "Local")
portal = new DataPortalClient.LocalProxy();
else
{
if (_proxyType == null)
{
_proxyType = Type.GetType(proxyTypeName, true, true);
}
portal = (DataPortalClient.IDataPortalProxy)
Activator.CreateInstance(_proxyType);
}
return portal;
}
}
The proxy object is created on each call to avoid possible threading issues. Since the data portal can be used asynchronously, it is important to avoid the use of static fields, because they'd be shared across multiple threads. The alternative is to use instance fields, but then locking code is required, and locking can lead to performance issues.
Notice that the _proxyType field is static, so it's shared across all threads. The data portal configuration comes from the config file and is the same for all threads in the application, so this value can be safely shared.
If the forceLocal parameter is true, then only a local proxy is returned. The LocalProxy object is a special proxy that doesn't use any network protocols at all, but rather runs the "server-side" data portal components directly within the client process. I'll cover this class later in the chapter.
When forceLocal is false, the real work begins. First, the proxy string is retrieved from the CslaDataPortalProxy key in the config file by calling the ApplicationContext.DataPortalProxy property, which reads the config file and returns the value associated with that key.
If that value is "Local", then again an instance of the LocalProxy class is created and returned. The ApplicationContext.DataPortalProxy property also returns "Local" if the key is not found in the config file. This makes LocalProxy the default proxy.
If some other config value is returned, then it is parsed and used to create an instance of the appropriate proxy class.
if (_proxyType == null)
{
  string typeName =
    proxyTypeName.Substring(0, proxyTypeName.IndexOf(",")).Trim();
  string assemblyName =
    proxyTypeName.Substring(proxyTypeName.IndexOf(",") + 1).Trim();
  _proxyType = Type.GetType(typeName + "," + assemblyName, true, true);
}
portal = (DataPortalClient.IDataPortalProxy)
  Activator.CreateInstance(_proxyType);
In the preceding <appSettings> example, notice that the value is a comma-separated string with the full class name on the left and the assembly name on the right. This follows the .NET standard for describing classes that are to be dynamically loaded.
The config value is parsed to pull out the full type name and assembly name. Then Activator.CreateInstance() is called to create an instance of the object. The .NET runtime automatically loads the assembly if needed.
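The same pattern can be seen in isolation with any assembly-qualified type name. A minimal sketch using a standard .NET type:

string typeName = "System.Text.StringBuilder, mscorlib";
Type t = Type.GetType(typeName, true, true);
object instance = Activator.CreateInstance(t);
// instance now references a StringBuilder created dynamically

The two true arguments tell Type.GetType() to throw an exception if the type can't be found and to ignore case when matching the name.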
The result is that the appropriate proxy object is returned for use by the data portal in communicating with the server-side components.
Root Object Data Access Methods
The five data portal methods listed in Table 15-12 are all relatively similar in that they follow the same basic process. I'll walk through the Fetch() method in some detail so you can see how it works. All five methods follow this basic flow:
Ensure the user is authorized to perform the action.
Get metadata for the business method to be ultimately invoked.
Get the data portal proxy object.
Create a DataPortalContext object.
Raise the DataPortalInvoke event.
Delegate the call to the proxy object (and thus to the server).
Handle and throw any exceptions.
Restore the GlobalContext returned from the server.
Raise the DataPortalInvokeComplete event.
Return the resulting business object (if appropriate).
Let's look at the Fetch() method in detail.
The Fetch Method
There are several Fetch() method overloads, all of which ultimately delegate to the actual implementation.
public static object Fetch(Type objectType, object criteria)
{
Server.DataPortalResult result = null;
Server.DataPortalContext dpContext = null;
try
{
OnDataPortalInitInvoke(null);
if (!Csla.Security.AuthorizationRules.CanGetObject(objectType))
throw new System.Security.SecurityException(
string.Format(Resources.UserNotAuthorizedException
,"get"
,objectType.Name));
var method =
Server.DataPortalMethodCache.GetFetchMethod(objectType, criteria);
DataPortalClient.IDataPortalProxy proxy;
proxy = GetDataPortalProxy(method.RunLocal);
dpContext =
new Server.DataPortalContext(GetPrincipal()
,proxy.IsServerRemote);
OnDataPortalInvoke(new DataPortalEventArgs(
dpContext, objectType, DataPortalOperations.Fetch));
try
{
result = proxy.Fetch(objectType, criteria, dpContext);
}
catch (Server.DataPortalException ex)
{
result = ex.Result;
if (proxy.IsServerRemote)
ApplicationContext.SetGlobalContext(result.GlobalContext);
string innerMessage = string.Empty;
if (ex.InnerException is Csla.Reflection.CallMethodException)
{
if (ex.InnerException.InnerException != null)
innerMessage = ex.InnerException.InnerException.Message;
}
else
{
innerMessage = ex.InnerException.Message;
}
throw new DataPortalException(
String
.Format("DataPortal.Fetch {0} ({1})", Resources.Failed, innerMessage)
,ex.InnerException, result.ReturnObject);
}
if (proxy.IsServerRemote)
ApplicationContext.SetGlobalContext(result.GlobalContext);
OnDataPortalInvokeComplete(new DataPortalEventArgs(
dpContext, objectType, DataPortalOperations.Fetch));
}
catch (Exception ex)
{
OnDataPortalInvokeComplete(new DataPortalEventArgs(
dpContext, objectType, DataPortalOperations.Fetch, ex));
throw;
}
return result.ReturnObject;
}
The generic overloads simply cast the result so the calling code doesn't have to.
public static T Fetch<T>()
{
return (T)Fetch(typeof(T), EmptyCriteria);
}
Remember that the data portal can return virtually any type of object, so the actual Fetch() method implementation must deal with results of type object.
Looking at the code, you should see all the steps listed earlier. The first is to ensure the user is authorized.
if (!Csla.Security.AuthorizationRules.CanGetObject(objectType))
  throw new System.Security.SecurityException(
    string.Format(Resources.UserNotAuthorizedException,
    "get", objectType.Name));
Then the DataPortalMethodCache is used to retrieve (or create) the metadata for the business method that will ultimately be invoked on the server.
var method = Server.DataPortalMethodCache.GetFetchMethod(objectType, criteria);
The result is a DataPortalMethodInfo object that contains metadata for the business method. At this point, the most important bit of information is whether the RunLocal attribute has been applied to the method on the business class. This Boolean value is used as a parameter to the GetDataPortalProxy() method, which returns the appropriate proxy object for server communication.
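The RunLocal attribute is applied by the business developer to DataPortal_XYZ methods that don't need server-side resources. A sketch, assuming a hypothetical editable root class:

[RunLocal]
protected override void DataPortal_Create()
{
  // Load default values locally; no application server involved
  ValidationRules.CheckRules();
}

When the data portal finds this attribute in the method metadata, GetDataPortalProxy() is called with forceLocal set to true, so the call runs in-process even if a remote channel is configured.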
DataPortalClient.IDataPortalProxy proxy;
proxy = GetDataPortalProxy(method.RunLocal);
Next, a DataPortalContext object is created and initialized. The details of this object and the means of dealing with context information are discussed later in the chapter.
dpContext = new Server.DataPortalContext(GetPrincipal(), proxy.IsServerRemote);
Then the DataPortalInvoke event is raised, notifying client-side business or UI logic that a data portal call is about to take place.
OnDataPortalInvoke(new DataPortalEventArgs(
  dpContext, objectType, DataPortalOperations.Fetch));
Finally, the Fetch() call itself is delegated to the proxy object.
result = proxy.Fetch(objectType, criteria, dpContext);
All a proxy object does is relay the method call across the network to Csla.Server.DataPortal, so you can almost think of this as delegating the call directly to Csla.Server.DataPortal, which in turn delegates to either FactoryDataPortal or SimpleDataPortal. The ultimate result is that a factory object's Fetch() method or the business object's DataPortal_Fetch() method is invoked on the server.
Remember that by default the "server-side" code actually runs in the client process on the client workstation (or web server). Even so, the full sequence of events described here occurs, just much faster than if network communication were involved.
An exception could occur while calling the server. Most likely, the exception would occur in the business logic running on the server, though exceptions can also occur due to network issues or similar problems. When an exception does occur in business code on the server, it is reflected here as a Csla.Server.DataPortalException, which is caught and handled.
result = ex.Result;
if (proxy.IsServerRemote)
  ApplicationContext.SetGlobalContext(result.GlobalContext);
string innerMessage = string.Empty;
if (ex.InnerException is Csla.Reflection.CallMethodException)
{
  if (ex.InnerException.InnerException != null)
    innerMessage = ex.InnerException.InnerException.Message;
}
else
{
  innerMessage = ex.InnerException.Message;
}
throw new DataPortalException(
  String.Format("DataPortal.Fetch {0} ({1})", Resources.Failed, innerMessage),
  ex.InnerException, result.ReturnObject);
The Csla.Server.DataPortalException returns the business object from the server—exactly as it was when the exception occurred. It also returns the GlobalContext information from the server so that it can be used to update the client's context data. Ultimately, the data from the server is used to create a Csla.DataPortalException that is thrown back to the business object. It can be handled by the business object or the UI code as appropriate.
Notice that the Csla.DataPortalException object contains not only all the exception details from the server, but also the business object from the server. This object can be useful when debugging server-side exceptions.
More commonly, an exception won't occur. In that case, the result returned from the server includes the GlobalContext data from the server, which is used to update the context on the client.
if (proxy.IsServerRemote)
  ApplicationContext.SetGlobalContext(result.GlobalContext);
The details around context are discussed later in the chapter. With the server call complete, the DataPortalInvokeComplete event is raised.
OnDataPortalInvokeComplete(new DataPortalEventArgs(
  dpContext, objectType, DataPortalOperations.Fetch));
Finally, the business object created and loaded with data on the server is returned to the factory method that called DataPortal.Fetch() in the first place.
Remember that in a physical n-tier scenario, this is a copy of the object that was created on the server. .NET serialized the object on the server, transferred its data to the client, and deserialized it on the client. The object being returned as a result of the Fetch() method exists on the client workstation, so other client-side objects and UI components can use it in an efficient manner.
The Create() and Delete() methods are virtually identical. The only meaningful difference is that the Delete() method has no return value.
The Update Method
The Update() method is similar, but it doesn't get a criteria object as a parameter. Instead, it gets passed the business object itself.
public static object Update(object obj)
This way, it can pass the business object to Csla.Server.DataPortal, which ultimately calls the factory or business object's insert, update, or delete method, causing the object's data to be used to update the database. It also checks to see whether the business object inherits from Csla.CommandBase, and if so, it invokes the object's execute method instead (or the factory object's update method).
The reason the Update() method is so different is that it has to use the business object's state to determine which method will be invoked. This information is necessary so the method can be checked for a RunLocal attribute. To do this, the method must first determine whether the ObjectFactory attribute has been applied to the business class.
var factoryInfo =
ObjectFactoryAttribute.GetObjectFactoryAttribute(objectType);
if (factoryInfo != null)
When using a factory object, either an update or a delete method will be invoked. The actual method names come from the ObjectFactory attribute as well.
var factoryType = FactoryDataPortal.FactoryLoader.GetFactoryType(
factoryInfo.FactoryTypeName);
if (obj is Core.BusinessBase && ((Core.BusinessBase)obj).IsDeleted)
{
if (!Csla.Security.AuthorizationRules.CanDeleteObject(objectType))
throw new System.Security.SecurityException(
string.Format(Resources.UserNotAuthorizedException
,"delete"
,objectType.Name));
method = Server.DataPortalMethodCache.GetMethodInfo(
factoryType, factoryInfo.DeleteMethodName, new object[] { obj });
}
else
{
if (!Csla.Security.AuthorizationRules.CanEditObject(objectType))
throw new System.Security.SecurityException(
string.Format(Resources.UserNotAuthorizedException
,"save"
,objectType.Name));
method = Server.DataPortalMethodCache.GetMethodInfo(
factoryType, factoryInfo.UpdateMethodName, new object[] { obj });
}
Notice that the factory object type is retrieved using the FactoryLoader property from the FactoryDataPortal class. I'll discuss this later in the chapter. For now, it is enough to realize that this property returns an object that can provide the type of the factory that will be invoked.
If the business object is a subclass of BusinessBase, then its IsDeleted property is used to determine whether the delete or update method will be invoked. The appropriate authorization check is also made. The end result is that the method field contains the metadata for the method to be invoked, so the presence of the RunLocal attribute can be determined.
If there's no ObjectFactory attribute, then the traditional DataPortal_XYZ methods will be invoked on the business object itself. In this case, the business object's state is again used to determine whether DataPortal_Insert(), DataPortal_Update(), DataPortal_DeleteSelf(), or DataPortal_Execute() will be invoked, and the method field is set to contain the metadata for that method.
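The state-based selection for the non-factory case can be sketched as follows; this is a simplified illustration of the routing logic, not the actual implementation, which works with method metadata rather than name strings:

string methodName;
if (obj is CommandBase)
  methodName = "DataPortal_Execute";
else if (obj is Core.BusinessBase)
{
  Core.BusinessBase busObj = (Core.BusinessBase)obj;
  if (busObj.IsDeleted)
    methodName = "DataPortal_DeleteSelf";
  else
    methodName = busObj.IsNew ?
      "DataPortal_Insert" : "DataPortal_Update";
}
else
  methodName = "DataPortal_Update";

The key idea is that a new object is inserted, an existing object is updated, and an object marked for deletion has its data removed.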
The rest of the process is fundamentally the same as Fetch(): the proxy is created, the DataPortalInvoke event is raised, the call is delegated to the proxy, and the result is processed.
Child Object Data Access Methods
The methods listed in Table 15-13 are used to create, retrieve, and update child objects. These methods are quite different from the methods I just discussed that deal with root objects.
When dealing with root objects, the data portal uses all the features I've discussed in this chapter, including the channel adapter, message router, distributed transaction support, and so forth. However, when dealing with child objects, the data portal assumes it is already in the process of working with a root object. So the assumption is that the code is already on the right physical computer and in the right transactional context. The data portal doesn't need to worry about any of those details when dealing with child objects.
This means that the data portal's only responsibility when dealing with child objects is to refer the call to the ChildDataPortal class in the Csla.Server namespace. I'll discuss this class later in the chapter, but for now it is enough to know that ChildDataPortal will invoke the child object's Child_XYZ methods.
Since all three methods listed in Table 15-13 are virtually identical, I'll show the code for just one here:
public static T FetchChild<T>(params object[] parameters)
{
Server.ChildDataPortal portal = new Server.ChildDataPortal();
return (T)(portal.Fetch(typeof(T), parameters));
}
The method simply creates an instance of ChildDataPortal and delegates the call to that object.
What is interesting about this code is that the FetchChild() method accepts a params parameter. This means the calling code can pass in virtually any value or list of parameter values, and those values are passed along to ChildDataPortal and ultimately to the Child_XYZ method.
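From the perspective of a parent root object, a child fetch is simply a method call made during the parent's own data access. A sketch, assuming a hypothetical Order root with a LineItems child collection loaded from a data reader named dr:

protected override void DataPortal_Fetch(SingleCriteria<Order, int> criteria)
{
  // ... open the connection and read the order's own fields ...
  // Load the child collection through the child data portal;
  // the dr parameter flows through to the Child_Fetch(dr) method
  _lineItems = DataPortal.FetchChild<LineItems>(dr);
}

Because of the params parameter, the parent can pass whatever values the child's Child_Fetch() method needs.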
The data portal supports asynchronous operations through the DataPortal<T> class in the Csla namespace. This class has the five instance methods shown in Table 15-15, which are similar to the static methods of Csla.DataPortal.
Table 15-15. Methods on the DataPortal<T> Class
Method | Description |
---|---|
BeginCreate() | Starts an asynchronous create operation |
BeginFetch() | Starts an asynchronous fetch operation |
BeginUpdate() | Starts an asynchronous update operation |
BeginDelete() | Starts an asynchronous delete operation |
BeginExecute() | Starts an asynchronous execute operation |
Each of these methods delegates to Csla.DataPortal to do the real work, so you've already seen how the complex parts work. These asynchronous methods use the .NET BackgroundWorker component to take care of the threading details. This means the asynchronous work runs on a thread from the .NET thread pool, and in WPF and Windows Forms applications, the asynchronous callbacks occur on the UI thread automatically.
Of course, the code calling DataPortal<T> needs to be notified when the asynchronous operation is complete. For each of the methods in Table 15-15, there is a corresponding event. For example, a FetchCompleted event corresponds to BeginFetch().
public event EventHandler<DataPortalResult<T>> FetchCompleted;
Calling code can subscribe to this event and then call the data portal method.
var dp = new DataPortal<CustomerEdit>();
dp.FetchCompleted += GotCustomerEdit;
dp.BeginFetch();
In this case, when the async operation completes, the GotCustomerEdit() method is invoked automatically.
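The completed handler receives a DataPortalResult<T>, which carries both the business object and any exception. Here's a sketch of a handler matching the preceding example, assuming the result exposes the retrieved object and any exception through Object and Error properties:

private void GotCustomerEdit(
  object sender, DataPortalResult<CustomerEdit> e)
{
  if (e.Error != null)
  {
    // the async fetch failed; handle or display the exception
  }
  else
  {
    // the object was retrieved on the background thread
    var customer = e.Object;
  }
}

Checking the error before using the result is important, because exceptions from the background thread are delivered through the event rather than thrown to the caller.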
The BeginFetch() method has two overloads: one that accepts a criteria parameter and one that does not. They both work the same way.
public void BeginFetch(object criteria)
{
var bw = new System.ComponentModel.BackgroundWorker();
bw.RunWorkerCompleted += Fetch_RunWorkerCompleted;
bw.DoWork += Fetch_DoWork;
bw.RunWorkerAsync(new DataPortalAsyncRequest(criteria));
}
The method creates a BackgroundWorker object, sets up handlers for the RunWorkerCompleted and DoWork events, and then starts the background task. It is important to realize that this method does not block; instead, it returns immediately, while the data portal request is processed on a background thread.
The DoWork handler executes the background task.
private void Fetch_DoWork(
object sender, System.ComponentModel.DoWorkEventArgs e)
{
var request = e.Argument as DataPortalAsyncRequest;
SetThreadContext(request);
object state = request.Argument;
T result = default(T);
if (state is int)
result = (T)Csla.DataPortal.Fetch<T>();
else
result = (T)Csla.DataPortal.Fetch<T>(state);
e.Result = new DataPortalAsyncResult(
result, Csla.ApplicationContext.GlobalContext);
}
This method runs on the background thread and delegates the call to Csla.DataPortal to do the real work. However, it must deal with context information, because ApplicationContext manages its data on a per-thread basis. In other words, the ClientContext and GlobalContext values must be set on the background thread before calling Csla.DataPortal, so the data portal can send them to the application server as normal.
Also notice that GlobalContext is included as part of the result returned through the DataPortalAsyncResult object. Again, this value must be taken from the background thread and made available to the calling thread.
The BackgroundWorker component's RunWorkerCompleted event handler is invoked when DoWork is complete, either successfully or due to an exception.
private void Fetch_RunWorkerCompleted(
object sender, System.ComponentModel.RunWorkerCompletedEventArgs e)
{
var result = e.Result as DataPortalAsyncResult;
if (result != null)
{
_globalContext = result.GlobalContext;
if (result.Result != null)
OnFetchCompleted(new DataPortalResult<T>((T)result.Result, e.Error));
else
OnFetchCompleted(new DataPortalResult<T>(default(T), e.Error));
}
else
OnFetchCompleted(new DataPortalResult<T>(default(T), e.Error));
}
In a WPF or Windows Forms application, this method will run on the UI thread. In other environments, such as ASP.NET, it will run on an indeterminate thread, so developers must not assume the callback occurs on any particular thread.
This method raises the FetchCompleted event from the DataPortal<T> object by calling OnFetchCompleted(). That event notifies the calling code that the async operation is complete.
Notice that the GlobalContext returned from the data portal call is placed into a _globalContext field, and is thus exposed through a GlobalContext property on the DataPortal<T> object. The value is not used to replace the GlobalContext on the calling thread, because numerous async operations could be running at once, and each would overwrite the others' GlobalContext values. By providing the resulting context through a property, it is up to the calling code to decide what to do with any returned context values.
The other four methods work the same way, essentially wrapping Csla.DataPortal calls so they occur on a background thread, with the results returned through an event.
At this point, the role of Csla.DataPortal and DataPortal<T> as gateways to the overall data portal should be clear. The other end of the channel adapter is the Csla.Server.DataPortal class, which is also the entry point to the message router pattern. I'll discuss the details of Csla.Server.DataPortal later in the chapter, as part of the message router section.
First, though, I want to walk through the LocalProxy and WcfProxy classes, and the corresponding host classes, used to implement the primary data portal channels supported by CSLA .NET.
Each channel comes in two parts: a proxy on the client and a host on the server. Csla.DataPortal calls the proxy, which in turn transfers the call to the host over its channel. The host then delegates the call to a Csla.Server.DataPortal object. To ensure that all the parts of this chain can reliably interact, there are two interfaces: Csla.Server.IDataPortalServer and Csla.DataPortalClient.IDataPortalProxy.
The IDataPortalServer interface defines the methods common across the entire process.
public interface IDataPortalServer
{
DataPortalResult Create(
Type objectType, object criteria, DataPortalContext context);
DataPortalResult Fetch(
Type objectType, object criteria, DataPortalContext context);
DataPortalResult Update(object obj, DataPortalContext context);
DataPortalResult Delete(
Type objectType, object criteria, DataPortalContext context);
}
Notice that these are the same method signatures as those implemented by the static methods on Csla.DataPortal, making it easy for that class to delegate its calls through a proxy and host all the way to Csla.Server.DataPortal.
All the proxy classes implement a common Csla.DataPortalClient.IDataPortalProxy interface so they can be used by Csla.DataPortal. This interface inherits from Csla.Server.IDataPortalServer, ensuring that all proxy classes will have the same methods as all server-side host classes.
public interface IDataPortalProxy : Server.IDataPortalServer
{
bool IsServerRemote { get;}
}
In addition to the four data methods, proxy classes need to report whether they interact with an actual server-side host. As you'll see, the LocalProxy interacts with a client-side host, while WcfProxy interacts with a remote host. Recall that in Csla.DataPortal, the IsServerRemote property was used to control whether the context data was set and restored. If the "server-side" code is running inside the client process, then much of that work can be bypassed, improving performance.
The default option for a "network" channel is not to use the network at all, but rather to run the "server-side" code inside the client process. This option offers the best performance, though possibly at the cost of security or scalability. The various trade-offs of n-tier deployments were discussed in Chapter 1.
Even when running the "server-side" code in-process on the client, the data portal uses a proxy for the local "channel": Csla.DataPortalClient.LocalProxy. As with all proxy classes, this one implements the Csla.DataPortalClient.IDataPortalProxy interface, exposing a standard set of methods and properties for use by Csla.DataPortal.
Because this proxy doesn't actually use a network protocol, it is the simplest of all the proxies.
public class LocalProxy : DataPortalClient.IDataPortalProxy
{
private Server.IDataPortalServer _portal =
new Server.DataPortal();
public DataPortalResult Create(
Type objectType, object criteria, DataPortalContext context)
{
return _portal.Create(objectType, criteria, context);
}
public DataPortalResult Fetch(
Type objectType, object criteria, DataPortalContext context)
{
return _portal.Fetch(objectType, criteria, context);
}
public DataPortalResult Update(object obj, DataPortalContext context)
{
return _portal.Update(obj, context);
}
public DataPortalResult Delete(
Type objectType, object criteria, DataPortalContext context)
{
return _portal.Delete(objectType, criteria, context);
}
public bool IsServerRemote
{
get { return false; }
}
}
All this proxy does is directly create an instance of Csla.Server.DataPortal.
private Server.IDataPortalServer _portal = new Server.DataPortal();
Each of the data methods (Create(), Fetch(), etc.) simply delegates to this object. The result is that the client call is handled by a Csla.Server.DataPortal object running within the client AppDomain and on the client's thread. Because of this, the IsServerRemote property returns false.
More interesting is the WCF channel. This is implemented on the client by the WcfProxy class, and on the server by the WcfPortal class. When Csla.DataPortal delegates a call into WcfProxy, it uses WCF to pass that call to a WcfPortal object on the server. That object then delegates the call to a Csla.Server.DataPortal object.
Because WCF automatically serializes objects across the network, the WcfProxy class is not much more complex than LocalProxy. It relies on standard WCF configuration to determine how to communicate with the application server. WCF configuration is provided through a system.serviceModel element in the app.config file, in which the developer provides a client endpoint. That endpoint specifies the address, binding, and contract for the server component.
The client endpoint has a name, which WCF uses to locate the right endpoint in the config file. That name defaults to WcfDataPortal, but it can be overridden by setting the static EndPoint property on the WcfProxy class.
Csla.DataPortalClient.WcfProxy.EndPoint = "CustomDataPortalName";
The data portal methods use this name to retrieve the correct WCF configuration. All the methods work the same way; here's the Fetch() method:
public DataPortalResult Fetch(
Type objectType, object criteria, DataPortalContext context)
{
ChannelFactory<IWcfPortal> cf = new ChannelFactory<IWcfPortal>(_endPoint);
IWcfPortal svr = cf.CreateChannel();
WcfResponse response =
svr.Fetch(new FetchRequest(objectType, criteria, context));
cf.Close();
object result = response.Result;
if (result is Exception)
throw (Exception)result;
return (DataPortalResult)result;
}
Each method gets a ChannelFactory corresponding to the specified endpoint and uses that ChannelFactory to create a proxy reference to the server.
ChannelFactory<IWcfPortal> cf = new ChannelFactory<IWcfPortal>(_endPoint);
IWcfPortal svr = cf.CreateChannel();
The server is then called.
WcfResponse response =
  svr.Fetch(new FetchRequest(objectType, criteria, context));
Finally, the result (exception or not) is handled.
object result = response.Result;
if (result is Exception)
  throw (Exception)result;
return (DataPortalResult)result;
The reason this is so simple is that WCF handles virtually all the details. The only limitation on the use of WCF is that only synchronous bindings are supported. This means the most common bindings—HTTP and TCP—work fine, as do named pipes or any other synchronous network protocol.
You've seen the client proxy for the WCF channel. It requires that a WcfPortal object, implementing the IWcfPortal interface, be hosted by the application server.
The WcfPortal object's job is simple: it accepts a call from the client and delegates it to an instance of Csla.Server.DataPortal. Of course, it is a WCF service, so more important than the WcfPortal class is the IWcfPortal interface that defines the service contract.
[ServiceContract(Namespace="http://ws.lhotka.net/WcfDataPortal")]
public interface IWcfPortal
{
[OperationContract]
[UseNetDataContract]
WcfResponse Create(CreateRequest request);
[OperationContract]
[UseNetDataContract]
WcfResponse Fetch(FetchRequest request);
[OperationContract]
[UseNetDataContract]
WcfResponse Update(UpdateRequest request);
[OperationContract]
[UseNetDataContract]
WcfResponse Delete(DeleteRequest request);
}
This interface defines the four methods supported by the IDataPortalServer
interface, but it doesn't actually conform to that interface. The reason is that this is designed as a service-oriented interface, where each method accepts a request message and returns a response message.
Each message type, such as FetchRequest
, is a data contract, described with the DataContract
and DataMember
attributes. You can look at the code in the download to see what property values are passed to and from the data portal through these request and response messages.
Notice the UseNetDataContract
attribute on each operation in the interface. By default, WCF uses a serializer called the DataContractSerializer
. Unfortunately, this serializer is not capable of serializing an object graph such that you can deserialize the byte stream to get an exact clone of the original graph. Luckily, WCF includes the NetDataContractSerializer
that does provide the required functionality, working with objects marked as Serializable
or DataContract
.
The UseNetDataContract
attribute is a custom attribute provided by CSLA .NET that tells WCF to use the NetDataContractSerializer
instead of the DataContractSerializer
when passing data to and from the server.
The WcfPortal
class itself simply implements the IWcfPortal
interface. It has a couple of WCF attributes on the class itself.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
[AspNetCompatibilityRequirements(RequirementsMode =
AspNetCompatibilityRequirementsMode.Allowed)]
public class WcfPortal : IWcfPortal
The InstanceContextMode
value is used to specify that each WCF call should be handled by an independent instance of the WcfPortal
object. This helps ensure isolation between data portal calls.
The AspNetCompatibilityRequirements
attribute is used to indicate that the service can run in ASP.NET compatibility mode if requested. This is necessary to allow some Windows identity impersonation scenarios.
The WcfPortal
class implements the four data portal methods. Each one works the same way. Here's the Fetch()
method:
public WcfResponse Fetch(FetchRequest request)
{
Csla.Server.DataPortal portal = new Csla.Server.DataPortal();
object result;
try
{
result =
portal.Fetch(request.ObjectType, request.Criteria, request.Context);
}
catch (Exception ex)
{
result = ex;
}
return new WcfResponse(result);
}
The method simply accepts the client's call, creates an instance of Csla.Server.DataPortal
, and delegates the call.
It catches all exceptions and returns any exception as part of the response message. This is important, because this technique allows the entire server-side stack trace and other exception data to flow back to the calling code on the client. That makes debugging much easier than if all that was returned were the exception message text.
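In isolation, this exception-as-payload pattern is easy to see. The following is a minimal sketch using invented names (ServerFetch, ClientFetch, and a simple tuple in place of the real request/response types), not the framework's actual code:

```csharp
using System;

// Exception-as-payload sketch: the "server" returns any exception inside the
// response, and the "client" inspects the payload and rethrows. Because the
// exception object itself travels as data, the server-side stack trace and
// custom exception members survive the trip.
(object Result, bool IsError) ServerFetch(Func<object> dataAccess)
{
    try { return (dataAccess(), false); }
    catch (Exception ex) { return (ex, true); }  // the exception travels as data
}

object ClientFetch(Func<object> dataAccess)
{
    var response = ServerFetch(dataAccess);
    if (response.IsError)
        throw (Exception)response.Result;        // rethrown on the "client"
    return response.Result;
}

Console.WriteLine(ClientFetch(() => "customer 123")); // prints "customer 123"
```

A failing data access call would surface on the client as the original exception type, complete with its server-side details.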
The proxy and host classes for .NET Remoting, Web Services, and Enterprise Services all work in a similar manner. As these technologies are all effectively replaced by WCF, I won't discuss them in detail in this book.
At this point, you've seen the code that implements the channel adapter, including the Csla.DataPortal
class used by business developers and the LocalProxy
and WcfProxy
implementations, along with the WCF host. Let's move on now to discuss the server-side portions of the data portal, starting with distributed transaction support, and then move on to the message router pattern.
Though it may use different network channels to do its work, the primary job of Csla.DataPortal
is to delegate the client's call to an object on the server. This object is of type Csla.Server.DataPortal
, and its primary responsibility is to route the client's call to Csla.Server.SimpleDataPortal
, which actually implements the message router behavior.
Csla.Server.DataPortal
is involved in this process so it can establish a distributed transactional context if requested by the business object. The CSLA .NET framework allows a business developer to choose between handling transactions manually, using Enterprise Services (COM+) transactions, or using System.Transactions
.
The business developer indicates his preference through the use of the custom Csla.TransactionalAttribute
. Earlier in the chapter, Table 15-4 listed all the possible options when using this attribute on a DataPortal_XYZ
or factory object method.
The TransactionalTypes
enumerated list contains all the options that can be specified with the Transactional
attribute when it is applied to a data access method.
public enum TransactionalTypes
{
EnterpriseServices,
TransactionScope,
Manual
}
This type is used to define the parameter value for the constructor in the TransactionalAttribute
class.
Ultimately, all client calls go through the channel adapter and are handled on the server by an instance of Csla.Server.DataPortal
. This object uses the value of the Transactional
attribute (if any) on the data access method to determine how to route the call to the DataPortalSelector
. The call will go via one of the following three routes:
The Manual
option routes directly to DataPortalSelector
.
The EnterpriseServices
option routes through ServicedDataPortal
.
The TransactionScope
option routes through TransactionalDataPortal
.
The Csla.Server.DataPortal
object also takes care of establishing the correct context on the server based on the context provided by the client. I discuss the details of this process later in the chapter.
Csla.Server.DataPortal
implements IDataPortalServer
, and thus the four data methods. Each of these methods follows the same basic flow:
Set up the server's context.
Execute any attached IAuthorizeDataPortal
object.
Get the metadata for the data access method to be invoked.
Check the Transactional
attribute on that method metadata.
Route the call based on the Transactional
attribute.
Clear the server's context.
Return the result provided by DataPortalSelector
.
As with most of the data portal classes, all the methods operate in much the same way. I'll use Fetch()
and Update()
as examples.
The Fetch Method
The Fetch()
method implements the steps listed previously.
public DataPortalResult Fetch(
Type objectType, object criteria, DataPortalContext context)
{
try
{
SetContext(context);
Authorize(new AuthorizeRequest(
objectType, criteria, DataPortalOperations.Fetch));
DataPortalResult result;
DataPortalMethodInfo method =
DataPortalMethodCache.GetFetchMethod(objectType, criteria);
IDataPortalServer portal;
switch (method.TransactionalType)
{
case TransactionalTypes.EnterpriseServices:
portal = new ServicedDataPortal();
try
{
result = portal.Fetch(objectType, criteria, context);
}
finally
{
((ServicedDataPortal)portal).Dispose();
}
break;
case TransactionalTypes.TransactionScope:
portal = new TransactionalDataPortal();
result = portal.Fetch(objectType, criteria, context);
break;
default:
portal = new DataPortalSelector();
result = portal.Fetch(objectType, criteria, context);
break;
}
return result;
}
catch (Csla.Server.DataPortalException ex)
{
// already a DataPortalException; rethrow as-is to avoid double-wrapping
Exception tmp = ex;
throw;
}
catch (Exception ex)
{
throw new DataPortalException(
"DataPortal.Fetch " + Resources.FailedOnServer,
ex, new DataPortalResult());
}
finally
{
ClearContext(context);
}
}
After setting the server's context (a topic I discuss later in the chapter), the Authorize()
method is called. This method uses the value of CslaAuthorizationProvider
from the config file to create and call the Authorize()
method on an IAuthorizeDataPortal
implementation as discussed earlier in this chapter. If no value is provided in the config file, a default Authorize()
implementation is invoked that allows all calls to execute.
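The shape of that config-driven selection can be sketched as follows. Here IAuthorizer, AllowAllAuthorizer, and DenyDeleteAuthorizer are hypothetical stand-ins for IAuthorizeDataPortal and its implementations, and the type name would really come from the config file rather than a parameter:

```csharp
using System;

// Config-driven authorizer sketch (hypothetical names): if a provider type
// name is configured, instantiate it via reflection; otherwise fall back to
// a default that allows every call.
IAuthorizer GetAuthorizer(string configuredTypeName)
{
    if (string.IsNullOrEmpty(configuredTypeName))
        return new AllowAllAuthorizer();  // no config entry: allow everything
    var type = Type.GetType(configuredTypeName, throwOnError: true);
    return (IAuthorizer)Activator.CreateInstance(type);
}

Console.WriteLine(GetAuthorizer(null).Authorize("Fetch"));                    // prints "True"
Console.WriteLine(GetAuthorizer("DenyDeleteAuthorizer").Authorize("Delete")); // prints "False"

// Hypothetical stand-ins for IAuthorizeDataPortal and its implementations.
interface IAuthorizer { bool Authorize(string operation); }

class AllowAllAuthorizer : IAuthorizer
{
    public bool Authorize(string operation) => true;
}

class DenyDeleteAuthorizer : IAuthorizer
{
    public bool Authorize(string operation) => operation != "Delete";
}
```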
The metadata for the data access method is then retrieved from the method cache (or loaded into the cache if this is the first time the method has been called).
DataPortalMethodInfo method =
DataPortalMethodCache.GetFetchMethod(objectType, criteria);
This uses the same technique you saw in Csla.DataPortal
earlier in the chapter. If the data portal is running in 2-tier or local mode, then the "client" and "server" code share the same cache. If the data portal is running in 3-tier or remote mode, then the client and server each maintain their own cache of method metadata.
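A minimal sketch of such a cache (illustrative, not the actual DataPortalMethodCache implementation) shows the idea: reflection is paid once per (type, method name) pair, and later lookups are simple dictionary hits.

```csharp
using System;
using System.Collections.Concurrent;
using System.Reflection;

// Method-metadata cache sketch: GetOrAdd runs the reflection lookup only on
// the first request for a given (type, method name) key; subsequent calls
// return the cached MethodInfo instance.
var cache = new ConcurrentDictionary<(Type, string), MethodInfo>();

MethodInfo GetCachedMethod(Type objectType, string methodName) =>
    cache.GetOrAdd((objectType, methodName), key =>
        key.Item1.GetMethod(key.Item2,
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic));

var m1 = GetCachedMethod(typeof(string), "ToUpperInvariant");
var m2 = GetCachedMethod(typeof(string), "ToUpperInvariant");
Console.WriteLine(ReferenceEquals(m1, m2)); // prints "True"
```

In a 2-tier deployment, one process holds one such cache; in a 3-tier deployment, the client and server processes each populate their own.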
The DataPortalMethodInfo
object includes a property that returns the Transactional
attribute value for the data access method. If there is no Transactional
attribute on the method, then the Manual
type is returned as a default. The resulting value is used in a switch
statement to properly route the call. If EnterpriseServices
was specified, then an instance of Csla.Server.ServicedDataPortal
is created and the call is delegated to that object.
case TransactionalTypes.EnterpriseServices:
portal = new ServicedDataPortal();
try
{
result = portal.Fetch(objectType, criteria, context);
}
finally
{
((ServicedDataPortal)portal).Dispose();
}
break;
As with all Enterprise Services objects, a try...finally
block is used to ensure that the object is properly disposed when the call is complete. I'll cover the details of the ServicedDataPortal
class shortly.
If TransactionScope
was specified, then an instance of Csla.Server.TransactionalDataPortal
is created and the call is delegated to that object.
case TransactionalTypes.TransactionScope:
portal = new TransactionalDataPortal();
result = portal.Fetch(objectType, criteria, context);
break;
I'll cover details of the TransactionalDataPortal
class shortly.
Finally, the default is to allow the business developer to handle any transactions manually. In that case, an instance of Csla.Server.DataPortalSelector
is created directly, and the call is delegated to that object.
default:
portal = new DataPortalSelector();
result = portal.Fetch(objectType, criteria, context);
break;
Both ServicedDataPortal
and TransactionalDataPortal
delegate their calls to DataPortalSelector
too—so in the end, DataPortalSelector
handles all client calls. By calling it directly, without involving any transactional technologies, this default approach allows the business developer to handle any transactions as she sees fit.
Once the Fetch()
call is complete, the server's context is cleared (details discussed later), and the result is returned to the client.
return result;
If an exception occurs during the processing, it is caught, the server's context is cleared, and the exception is rethrown so it can be handled by Csla.DataPortal
, as discussed earlier in the chapter.
The Create()
and Delete()
methods are virtually identical to Fetch()
. However, the Update()
method is a little different.
The Update Method
The Update()
method is more complex. This is because Update()
handles BusinessBase
and CommandBase
subclasses differently from other objects. The specific DataPortal_XYZ
method to be invoked varies based on the base class of the business object. This complicates the process of retrieving the MethodInfo
object.
DataPortalMethodInfo method;
// ...
string methodName;
if (obj is CommandBase)
methodName = "DataPortal_Execute";
else if (obj is Core.BusinessBase)
{
Core.BusinessBase tmp = (Core.BusinessBase)obj;
if (tmp.IsDeleted)
methodName = "DataPortal_DeleteSelf";
else
if (tmp.IsNew)
methodName = "DataPortal_Insert";
else
methodName = "DataPortal_Update";
}
else
methodName = "DataPortal_Update";
method = DataPortalMethodCache.GetMethodInfo(
obj.GetType(), methodName);
The same GetMethodInfo()
call is used as in Fetch()
, but the name of the method is determined based on the type and state of the business object itself. If the business object is a subclass of CommandBase
, then the method name is DataPortal_Execute
. For any other objects that don't inherit from BusinessBase
, the method name is DataPortal_Update
.
If the business object is a subclass of BusinessBase
, however, the object's state becomes important. If the object is marked for deletion, then the method name is DataPortal_DeleteSelf
. If the object is new, the name is DataPortal_Insert
; otherwise, it is DataPortal_Update
.
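To make the routing rules concrete, the same decision can be expressed as a small pure function, with boolean parameters standing in for the actual type checks and state properties:

```csharp
using System;

// Method-name selection sketch for Update(): command objects route to
// DataPortal_Execute; BusinessBase objects route by state; everything else
// routes to DataPortal_Update.
string SelectUpdateMethod(bool isCommand, bool isBusinessBase, bool isDeleted, bool isNew)
{
    if (isCommand) return "DataPortal_Execute";
    if (!isBusinessBase) return "DataPortal_Update";
    if (isDeleted) return "DataPortal_DeleteSelf";
    return isNew ? "DataPortal_Insert" : "DataPortal_Update";
}

Console.WriteLine(SelectUpdateMethod(false, true, false, true)); // prints "DataPortal_Insert"
```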
Once the MethodInfo
object has been retrieved, the rest of the code is essentially the same as in the other three methods.
Now let's discuss the two remaining classes that set up an appropriate transaction context.
The ServicedDataPortal
class has one job: to create a distributed COM+ transactional context within which DataPortalSelector
(and thus the business object) will run. When a call is routed through ServicedDataPortal
, a distributed transactional context is created, ensuring that the business object's DataPortal_XYZ
methods run within that context.
Normally, to run within a COM+ distributed transaction, an object must inherit from System.EnterpriseServices.ServicedComponent
. This is a problem for typical business objects, since you don't usually want them to run within COM+, and no one likes all the deployment complexity that comes with a ServicedComponent
.
ServicedDataPortal
allows business objects to avoid this complexity. It does inherit from ServicedComponent
, and it includes the appropriate Enterprise Services attributes to trigger the use of a distributed transaction. But it turns out that when a ServicedComponent
running in a transactional context calls a normal .NET object, that object also runs in the transaction. This is true even when the normal .NET object doesn't inherit from ServicedComponent
.
Figure 15-6 illustrates the use of this concept.
Once the transactional context is established by ServicedDataPortal
, all normal .NET objects invoked from that point forward run within the same context.
ServicedDataPortal
itself inherits from ServicedComponent
in the System.EnterpriseServices
namespace and includes some key attributes.
[Transaction(TransactionOption.Required)]
[EventTrackingEnabled(true)]
[ComVisible(true)]
public class ServicedDataPortal : ServicedComponent, IDataPortalServer
{
}
The Transaction
attribute specifies that this object must run within a COM+ transactional context. If it is called by another object that already established such a context, this object will run within that context; otherwise, it will create a new context.
The EventTrackingEnabled
attribute indicates that this object will interact with COM+ to enable the "spinning balls" in the component services management console. This is only important (or even visible) if the data portal is running within COM+ on the server—meaning that the EnterpriseServicesProxy
is used by the client to interact with the server.
The ComVisible
attribute makes this class visible to COM, which is a requirement for any class that is to be hosted in COM+.
Because ServicedDataPortal
inherits from ServicedComponent
, the Csla.dll
assembly itself must be configured so it can be hosted in COM+. Specifically, the assembly must be signed with a key (CslaKey.snk
in the project), and the project must have a reference to the System.EnterpriseServices
assembly.
The class also implements the IDataPortalServer
interface, ensuring that it implements the four data methods. Each of these methods has another Enterprise Services attribute: AutoComplete
.
[AutoComplete(true)]
public DataPortalResult Update(
object obj, DataPortalContext context)
{
var portal = new DataPortalSelector();
return portal.Update(obj, context);
}
The AutoComplete
attribute is used to tell COM+ that this method will vote to commit the transaction unless it throws an exception. In other words, if an exception is thrown, the method will vote to roll back the transaction; otherwise, it will vote to commit the transaction.
This fits with the overall model of the data portal, which relies on the business object to throw exceptions in case of failure. The data portal uses the exception to return important information about the failure back to the client. ServicedDataPortal
also relies on the exception to tell COM+ to roll back the transaction.
Notice how the Update()
method simply creates an instance of DataPortalSelector
and delegates the call to that object. This is the same thing that Csla.Server.DataPortal
did for manual transactions, except in this case, DataPortalSelector
and the business object are wrapped in a distributed transactional context.
The other three data methods are implemented in the same manner.
The TransactionalDataPortal
is designed in a manner similar to ServicedDataPortal
. Rather than using Enterprise Services, however, this object uses the transactional capabilities provided by the System.Transactions
namespace, and in particular the TransactionScope
object.
This class simply implements IDataPortalServer
.
public class TransactionalDataPortal : IDataPortalServer
{
}
This ensures that it implements the four data methods. Each of these methods follows the same structure: create a TransactionScope
object and delegate the call to an instance of DataPortalSelector
. For instance, here's the Update()
method:
public DataPortalResult Update(
object obj, DataPortalContext context)
{
DataPortalResult result;
using (TransactionScope tr = new TransactionScope())
{
var portal = new DataPortalSelector();
result = portal.Update(obj, context);
tr.Complete();
}
return result;
}
The first thing this method does is create a TransactionScope
object from the System.Transactions
namespace. Just the act of instantiating such an object creates a transactional context. It is not a distributed transactional context, but rather a lighter-weight context. If the business object interacts with more than one database, however, it will automatically become a distributed transaction.
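The ambient nature of this context is easy to demonstrate in isolation. The following sketch (no database involved) shows Transaction.Current appearing and disappearing with the scope:

```csharp
using System;
using System.Transactions;

// Instantiating a TransactionScope creates an ambient transactional context:
// any code running on this thread can see it via Transaction.Current, with
// no transaction object being passed around explicitly.
bool insideScope, outsideScope;
using (var tr = new TransactionScope())
{
    insideScope = Transaction.Current != null;  // ambient transaction exists here
    tr.Complete();                              // vote to commit
}
outsideScope = Transaction.Current != null;     // the context ends with the scope

Console.WriteLine($"{insideScope} {outsideScope}"); // prints "True False"
```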
The using
block here ensures both that the TransactionScope
object will be properly disposed, and perhaps more importantly, that the transaction will be committed or rolled back as appropriate. If the object is disposed before the Complete()
method is called, then the transaction is rolled back. Again, this model relies on the underlying assumption that the business code will throw an exception to indicate failure. This is the same model that is used by ServicedDataPortal
, and really by the data portal infrastructure overall.
Within the using
block, the code creates an instance of DataPortalSelector
and delegates the call to that object, which in turn calls the business object. Assuming no exception is thrown by the business object, the Complete()
method is called to indicate that the transaction should be committed.
The other three methods are implemented in the same manner. Regardless of which transactional model is used, all calls end up being handled by a DataPortalSelector
object, which implements the message router concept.
The message router functionality picks up where the channel adapter leaves off. The channel adapter carries the client's call to the server, ultimately invoking Csla.Server.DataPortal
. Recall that every host or proxy class (LocalProxy, WcfPortal
, etc.) ends up delegating every method call to an instance of Csla.Server.DataPortal
. That object routes the call to a DataPortalSelector
object, possibly first setting up a transactional context.
The focus in this section of the chapter will be on the DataPortalSelector, FactoryDataPortal
, and SimpleDataPortal
classes, which route the client request to the appropriate object and method supplied by the business developer. Figure 15-7 shows the relationship between these three types.
The purpose of DataPortalSelector
is to route the call to the appropriate object, so SimpleDataPortal
or FactoryDataPortal
can call the data access method implemented by the business developer.
The DataPortalSelector
class determines if the business object has an ObjectFactory
attribute and uses that information to route the call. It calls a GetObjectFactoryAttribute()
helper method from the ObjectFactoryAttribute
class.
internal static ObjectFactoryAttribute GetObjectFactoryAttribute(
Type objectType)
{
var result = objectType.GetCustomAttributes(
typeof(ObjectFactoryAttribute), true);
if (result != null && result.Length > 0)
return result[0] as ObjectFactoryAttribute;
else
return null;
}
This is standard reflection code for retrieving an attribute from a type.
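The same lookup pattern works with any custom attribute. As a usage illustration, here SampleFactoryAttribute, Decorated, and Plain are invented for the example; they stand in for ObjectFactoryAttribute and a decorated business class:

```csharp
using System;

// Attribute-lookup sketch: GetCustomAttributes returns an array, so a
// zero-length result means the type is not decorated and null is returned.
SampleFactoryAttribute GetFactoryAttribute(Type objectType)
{
    var result = objectType.GetCustomAttributes(typeof(SampleFactoryAttribute), true);
    return result.Length > 0 ? (SampleFactoryAttribute)result[0] : null;
}

Console.WriteLine(GetFactoryAttribute(typeof(Decorated))?.FactoryTypeName); // prints "MyFactory"
Console.WriteLine(GetFactoryAttribute(typeof(Plain)) == null);              // prints "True"

// Hypothetical attribute standing in for Csla.Server.ObjectFactoryAttribute.
[AttributeUsage(AttributeTargets.Class)]
class SampleFactoryAttribute : Attribute
{
    public string FactoryTypeName { get; }
    public SampleFactoryAttribute(string factoryTypeName) => FactoryTypeName = factoryTypeName;
}

[SampleFactory("MyFactory")]
class Decorated { }

class Plain { }
```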
The result is used in DataPortalSelector
to route the call to the correct object. For example, here's the Fetch()
method:
public DataPortalResult Fetch(
Type objectType, object criteria, DataPortalContext context)
{
try
{
context.FactoryInfo =
ObjectFactoryAttribute.GetObjectFactoryAttribute(objectType);
if (context.FactoryInfo == null)
{
var dp = new SimpleDataPortal();
return dp.Fetch(objectType, criteria, context);
}
else
{
var dp = new FactoryDataPortal();
return dp.Fetch(objectType, criteria, context);
}
}
catch (DataPortalException)
{
throw;
}
catch (Exception ex)
{
throw new DataPortalException(
"DataPortal.Fetch " + Resources.FailedOnServer
,ex, new DataPortalResult());
}
}
If an ObjectFactory
attribute is present, the call is routed to a FactoryDataPortal
; otherwise, it goes to a SimpleDataPortal
.
The exception handling is perhaps the most interesting part of the code. It is possible for code right here in DataPortalSelector
to fail, so that must be handled by wrapping such an exception in a DataPortalException
. However, it is also possible that FactoryDataPortal
or SimpleDataPortal
could have already caught and wrapped an exception into a DataPortalException
. To avoid double-wrapping of such exceptions, any DataPortalException
is simply rethrown.
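The wrap-once pattern stands on its own and is worth seeing in miniature. In this sketch, ApplicationException stands in for DataPortalException; the point is the pair of catch blocks:

```csharp
using System;

// Wrap-once sketch: an exception that is already wrapped is rethrown
// untouched, while anything else is wrapped exactly once.
object GuardedFetch(Func<object> inner)
{
    try
    {
        return inner();
    }
    catch (ApplicationException)
    {
        throw;  // already wrapped by a lower layer: do not wrap again
    }
    catch (Exception ex)
    {
        throw new ApplicationException("DataPortal.Fetch failed on server", ex);
    }
}

// Nesting two guarded layers still yields a single level of wrapping.
try
{
    GuardedFetch(() => GuardedFetch(() => throw new InvalidOperationException("db down")));
}
catch (ApplicationException ex)
{
    Console.WriteLine(ex.InnerException.Message); // prints "db down"
}
```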
If no ObjectFactory
attribute is used, all client calls end up being handled on the server by an instance of SimpleDataPortal
. This class is the counterpart to the client-side Csla.DataPortal
, since it is this class that interacts directly with the business objects designed by the business developer.
SimpleDataPortal
implements the four data methods defined by IDataPortalServer
: Create(), Fetch(), Update()
, and Delete()
. Each of these methods follows the same basic processing flow:
Create or get an instance of the business object.
Call the object's DataPortal_OnDataPortalInvoke()
method (if implemented).
Set the object's status (new, dirty, etc.) as appropriate.
Call the appropriate DataPortal_XYZ
method on the object.
Call the object's DataPortal_OnDataPortalInvokeComplete()
method (if implemented).
In case of exception, call the object's DataPortal_OnDataPortalException()
method (if implemented) and throw a Csla.Server.DataPortalException
.
Return the resulting business object (if appropriate).
As I've done previously in the chapter, I'll walk through the Fetch()
method and also discuss the Update()
method, as they are representative of the overall process.
The Fetch Method
The Fetch()
method illustrates every step in the preceding list.
public DataPortalResult Fetch(
Type objectType, object criteria, DataPortalContext context)
{
LateBoundObject obj = null;
IDataPortalTarget target = null;
var eventArgs = new DataPortalEventArgs(
context, objectType, DataPortalOperations.Fetch);
try
{
// create an instance of the business object
obj = new LateBoundObject(objectType);
target = obj.Instance as IDataPortalTarget;
if (target != null)
{
target.DataPortal_OnDataPortalInvoke(eventArgs);
target.MarkOld();
}
else
{
obj.CallMethodIfImplemented("DataPortal_OnDataPortalInvoke", eventArgs);
obj.CallMethodIfImplemented("MarkOld");
}
// tell the business object to fetch its data
if (criteria is int)
obj.CallMethod("DataPortal_Fetch");
else
obj.CallMethod("DataPortal_Fetch", criteria);
if (target != null)
target.DataPortal_OnDataPortalInvokeComplete(eventArgs);
else
obj.CallMethodIfImplemented(
"DataPortal_OnDataPortalInvokeComplete"
,eventArgs);
// return the populated business object as a result
return new DataPortalResult(obj.Instance);
}
catch (Exception ex)
{
try
{
if (target != null)
target.DataPortal_OnDataPortalException(eventArgs, ex);
else
obj.CallMethodIfImplemented(
"DataPortal_OnDataPortalException"
,eventArgs, ex);
}
catch
{
// ignore exceptions from the exception handler
}
object outval = null;
if (obj != null) outval = obj.Instance;
throw new DataPortalException(
"DataPortal.Fetch " + Resources.FailedOnServer
,ex, new DataPortalResult(outval));
}
}
The first step is to create an instance of the business object. This is done using a feature of the LateBoundObject
type from the Csla.Reflection
namespace. LateBoundObject
uses dynamic method invocation to create an instance of the business object, even if the business class has a non-public
default constructor.
obj = new LateBoundObject(objectType);
The constructors on business classes are not normally public. They are either private
or protected
, thus forcing the UI developer to use the factory methods to create or retrieve business objects.
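The underlying mechanism can be sketched with Activator.CreateInstance, which is one way (not necessarily the one LateBoundObject uses internally) to reach a non-public default constructor. CustomerEdit here is a made-up sample class:

```csharp
using System;

// Late-bound creation sketch: passing nonPublic: true lets the server-side
// portal instantiate a class whose constructor is hidden behind factory
// methods.
object instance = Activator.CreateInstance(typeof(CustomerEdit), nonPublic: true);
Console.WriteLine(instance.GetType().Name); // prints "CustomerEdit"

class CustomerEdit
{
    private CustomerEdit() { }  // not reachable with "new" from calling code
}
```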
If it is not already loaded into memory, the .NET runtime will automatically load the assembly containing the business object class.
To ensure .NET can find your business assembly, the assembly must be in the same directory as the application's .exe
file, or in the Bin
directory for a web application. Alternatively, you may install the assembly into the .NET global assembly cache (GAC). A more advanced solution is to handle the current AppDomain
object's AssemblyResolve
event, where you would load the desired assembly and provide it to the .NET runtime.
The objectType
parameter is passed from the client. Recall that in Csla.DataPortal
, the type of the object to be created was determined and passed as a parameter to the Fetch()
method.
The next step in the process is to tell the business object that it is about to be invoked by the data portal. To minimize the use of reflection, an attempt is made to cast the object to the IDataPortalTarget
interface. Technically, the data portal can work with nearly any Serializable
.NET object, even if the object isn't based on a CSLA .NET base class. In such a case, the object might not implement IDataPortalTarget
, so the data portal doesn't assume that all objects will implement the interface.
If the cast succeeds, the interface is used to invoke the method; otherwise, MethodCaller
is used to invoke the method by name.
target = obj.Instance as IDataPortalTarget;
if (target != null)
{
target.DataPortal_OnDataPortalInvoke(eventArgs);
target.MarkOld();
}
else
{
obj.CallMethodIfImplemented("DataPortal_OnDataPortalInvoke", eventArgs);
obj.CallMethodIfImplemented("MarkOld");
}
The IDataPortalTarget
interface is scoped as internal to avoid exposing these methods to code outside the data portal. This interface is implemented by the CSLA .NET base classes, so reflection is not used when interacting with most business objects.
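The cast-first, reflect-second dispatch is a generally useful pattern. The following sketch uses invented names (IPortalTarget, NotifyInvoke) rather than the framework's internal types: objects implementing the interface are called directly, others are probed by name via reflection, and objects without the method at all are silently skipped.

```csharp
using System;
using System.Reflection;

// Cast-first dispatch sketch: the interface path avoids reflection entirely;
// the reflection path handles objects not derived from a framework base class.
void NotifyInvoke(object obj)
{
    if (obj is IPortalTarget target)
        target.OnInvoke();  // strongly typed: no reflection cost
    else
        obj.GetType()
           .GetMethod("OnInvoke",
               BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic)
           ?.Invoke(obj, null);  // reflect only when the interface is absent
}

var plain = new PlainObject();
NotifyInvoke(plain);
Console.WriteLine(plain.Invoked); // prints "True"

interface IPortalTarget { void OnInvoke(); }

class FrameworkObject : IPortalTarget
{
    public bool Invoked;
    public void OnInvoke() => Invoked = true;
}

class PlainObject
{
    public bool Invoked;
    private void OnInvoke() => Invoked = true;  // found by name via reflection
}

class SilentObject { }  // no OnInvoke at all: NotifyInvoke does nothing
```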
A business developer can override the DataPortal_OnDataPortalInvoke()
method to do any preprocessing prior to an actual DataPortal_XYZ
method being called.
Also notice that the object's status is updated with a call to MarkOld()
. Because Fetch()
is retrieving already existing data from the database, the object will meet the definition of being "old" when this method completes. Object status values were discussed in Chapter 8.
The next step is to call the actual DataPortal_XYZ
method.
if (criteria is int)
obj.CallMethod("DataPortal_Fetch");
else
obj.CallMethod("DataPortal_Fetch", criteria);
Remember that the obj
field is a LateBoundObject
that wraps the business object. This means when CallMethod()
is used, the business object's method is invoked using a dynamic delegate.
A criteria object must be some actual object, not a primitive type. But Csla.DataPortal
has overloads for Fetch()
that don't require any criteria parameter. No parameter at all is not the same as null
, so the data portal uses a "magic value" to indicate whether no parameter was passed. If an int
value is provided as the criteria, that indicates that no value was supplied, not even null
.
Consider the following code:
result = DataPortal.Fetch();
result = DataPortal.Fetch(null);
result = DataPortal.Fetch(new SingleCriteria<CustomerEdit, int>(123));
The first call should route to
private void DataPortal_Fetch()
The second two expect a method that accepts one parameter. Perhaps this:
private void DataPortal_Fetch(SingleCriteria<CustomerEdit, int> criteria)
Behind the scenes, the data portal uses an int
value to specify that no parameter was provided, so the parameterless call to Fetch()
is routed correctly.
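The sentinel idea can be sketched as follows. SampleEdit and the exact handling are illustrative; CSLA's internal MethodCaller code differs in detail, but the routing decision is the same: an int criteria means "bind to the parameterless overload."

```csharp
using System;
using System.Reflection;

// Sentinel-routing sketch: an int criteria is treated as the internal
// "no parameter was supplied" marker, distinct from an explicit null.
object CallFetch(object instance, object criteria)
{
    const BindingFlags flags =
        BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic;
    var type = instance.GetType();
    if (criteria is int)  // marker: the caller passed no criteria at all
        return type.GetMethod("DataPortal_Fetch", flags, null, Type.EmptyTypes, null)
                   .Invoke(instance, null);
    return type.GetMethod("DataPortal_Fetch", flags, null, new[] { typeof(object) }, null)
               .Invoke(instance, new[] { criteria });
}

Console.WriteLine(CallFetch(new SampleEdit(), 0));     // prints "no criteria"
Console.WriteLine(CallFetch(new SampleEdit(), "abc")); // prints "criteria: abc"

class SampleEdit
{
    private object DataPortal_Fetch() => "no criteria";
    private object DataPortal_Fetch(object criteria) => "criteria: " + criteria;
}
```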
Now that the DataPortal_Fetch()
method has been invoked, the object is notified that the data portal processing is complete.
if (target != null)
target.DataPortal_OnDataPortalInvokeComplete(eventArgs);
else
obj.CallMethodIfImplemented(
"DataPortal_OnDataPortalInvokeComplete", eventArgs);
Finally, the newly created object is wrapped in a Csla.Server.DataPortalResult object and returned.
return new DataPortalResult(obj.Instance);
Again, remember that obj
is a LateBoundObject
, so its Instance
property is used to retrieve the actual business object it contains.
That concludes the normal sequence of events in the method. Of course, it is possible that an exception occurred during the processing. In that case, the exception is caught and the object is notified that an exception occurred.
try
{
if (target != null)
target.DataPortal_OnDataPortalException(eventArgs, ex);
else
obj.CallMethodIfImplemented(
"DataPortal_OnDataPortalException", eventArgs, ex);
}
catch
{
// ignore exceptions from the exception handler
}
This optional call to DataPortal_OnDataPortalException()
is wrapped in its own try...catch
statement. Even if an exception occurs while calling this method, the code needs to continue. There's little that can be done if the exception-handling code has an exception, so such an exception is simply ignored.
In any case, the exception is wrapped in a Csla.Server.DataPortalException
, which is thrown back to Csla.DataPortal
.
object outval = null;
if (obj != null) outval = obj.Instance;
throw new DataPortalException(
"DataPortal.Fetch " + Resources.FailedOnServer,
ex, new DataPortalResult(outval));
Remember that DataPortalException
contains the original exception as an InnerException
, and also traps the stack trace from the server exception so it is available on the client. Also keep in mind that all the proxy/host channel implementations ensure that the exception is returned to the client with full fidelity, so Csla.DataPortal
gets the full exception detail regardless of the network channel used.
At this point, you should understand how the flow of the data methods is implemented. The remaining methods follow the same flow with minor variations, with the Update()
method being the most different.
The Update Method
The Update()
method is more complex. Remember that the Update()
process adapts itself to the type of business object being updated, so it checks to see if the object is a subclass of BusinessBase
or CommandBase
and behaves appropriately. Also recall that the actual business object is passed as a parameter to Update()
, so this method doesn't need to create an instance of the business object at all.
Processing a BusinessBase Object
It starts right out by attempting to cast the business object as BusinessBase
. If the cast succeeds, the resulting BusinessBase
field is used to check the object's status. Also notice the use of the lb
field. This field is a LateBoundObject
instance that wraps the business object, simplifying dynamic calls to properties and methods.
var busObj = obj as Core.BusinessBase;
if (busObj != null)
{
if (busObj.IsDeleted)
{
if (!busObj.IsNew)
{
// tell the object to delete itself
lb.CallMethod("DataPortal_DeleteSelf");
}
if (target != null)
target.MarkNew();
else
lb.CallMethodIfImplemented("MarkNew");
}
else
{
if (busObj.IsNew)
{
// tell the object to insert itself
lb.CallMethod("DataPortal_Insert");
}
else
{
// tell the object to update itself
lb.CallMethod("DataPortal_Update");
}
if (target != null)
target.MarkOld();
else
lb.CallMethodIfImplemented("MarkOld");
}
}
If the object's IsDeleted
property returns true
, then the object should be deleted. It is possible that the object is also new, in which case there's actually nothing to delete; otherwise, the DataPortal_DeleteSelf()
method is invoked.
In either case, the MarkNew()
method is invoked to reset the object's state to new and dirty. From Chapter 8, the definition of a "new" object is that its primary key value isn't in the database, and since that data was just deleted, the object certainly meets that criteria. The definition of a "dirty" object is that its data values don't match values in the database, and again, the object now certainly meets that criteria as well.
If the object wasn't marked for deletion, then it will need to be either inserted or updated. If IsNew
is true, then DataPortal_Insert()
is invoked. Similarly, if the object isn't new, then DataPortal_Update()
is invoked. In either case, the object's primary key and data values now reflect values in the database, so the object is clearly not new or dirty. The MarkOld()
method is called to set the object's state accordingly.
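Taken together, the branching logic can be summarized as a small decision function. This is a standalone sketch (not CSLA code) that returns the name of the DataPortal_XYZ method the data portal would invoke and the state method applied afterward:

```csharp
using System;

// Standalone sketch (not CSLA code) of SimpleDataPortal's update routing
// for a BusinessBase-derived object. Returns the DataPortal_XYZ method to
// invoke (null when a new object is deleted -- nothing to do) and the
// state method applied afterward.
public static class UpdateRouting
{
    public static (string DataMethod, string StateMethod) Route(
        bool isDeleted, bool isNew)
    {
        if (isDeleted)
        {
            // deleting an object that was never saved requires no call
            return (isNew ? null : "DataPortal_DeleteSelf", "MarkNew");
        }
        // insert when new, update otherwise; the object is then "old"
        return (isNew ? "DataPortal_Insert" : "DataPortal_Update", "MarkOld");
    }
}
```

For example, Route(isDeleted: false, isNew: true) yields ("DataPortal_Insert", "MarkOld"), matching the insert branch in the code above.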
Processing a CommandBase Object
If the business object inherits from Csla.CommandBase, things are simpler. In this case, only the object's DataPortal_Execute()
method is invoked:
else if (obj is CommandBase)
{
operation = DataPortalOperations.Execute;
// tell the object to update itself
lb.CallMethod("DataPortal_Execute");
}
A command object should implement all server-side code in its DataPortal_Execute()
method.
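To make the shape concrete, here is a standalone mock of such a command object; the base class and member names are illustrative stand-ins (the real base class is Csla.CommandBase), not CSLA code:

```csharp
using System;

// Standalone mock of the command pattern; the real CSLA base class is
// Csla.CommandBase, and the data portal invokes DataPortal_Execute().
public abstract class MockCommandBase
{
    // all server-side work belongs in this method
    public abstract void DataPortal_Execute();
}

// Hypothetical command: the names and logic are illustrative only
public class CheckStockCommand : MockCommandBase
{
    public int ProductId { get; set; }
    public bool InStock { get; private set; }

    public override void DataPortal_Execute()
    {
        // a real command would query the database here; this stands in
        InStock = ProductId > 0;
    }
}
```

The client never calls DataPortal_Execute() directly; it calls the data portal, which routes to this method on the server.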
Processing All Other Objects
For any other objects (most commonly subclasses of BusinessListBase
), the DataPortal_Update()
method is invoked, followed by an optional call to MarkOld()
.
lb.CallMethod("DataPortal_Update");
if (target != null)
target.MarkOld();
else
lb.CallMethodIfImplemented("MarkOld");
As in Fetch()
, the DataPortal_OnDataPortalInvoke()
method is called before any of this other processing, and DataPortal_OnDataPortalInvokeComplete()
is called once it is all done. The business object is returned as a result, wrapped in a DataPortalResult
object. Any exceptions are handled in the same way as in Fetch()
.
That completes the SimpleDataPortal
class. Notice how all client calls are automatically routed to a dynamically created business object based on the type of business object required. SimpleDataPortal
is entirely unaware of the particulars of any business application; it blindly routes client calls to the appropriate destinations.
The FactoryDataPortal
is invoked when the business object has an ObjectFactory
attribute. In some ways, this is the simpler of the two classes, because it pushes more of the work onto the author of the factory object. Though this means the business developer does more work, it also gives the business developer more flexibility.
Earlier in the chapter, the DataPortalSelector
code called a static property named FactoryLoader
. The FactoryLoader
property uses the CslaObjectFactoryLoader
config setting from the appSettings
element to create an object that can load factory objects.
public static IObjectFactoryLoader FactoryLoader
{
get
{
if (_factoryLoader == null)
{
string setting =
ConfigurationManager.AppSettings["CslaObjectFactoryLoader"];
if (!string.IsNullOrEmpty(setting))
_factoryLoader =
(IObjectFactoryLoader)Activator.CreateInstance(
Type.GetType(setting, true, true));
else
_factoryLoader = new ObjectFactoryLoader();
}
return _factoryLoader;
}
set
{
_factoryLoader = value;
}
}
This approach provides a level of indirection that can be useful for swapping out data access layer implementations. CSLA .NET provides a default factory loader, ObjectFactoryLoader
, which interprets the ObjectFactory
attribute's factory name parameter as an assembly-qualified type name—for example:
[ObjectFactory("Namespace.TypeName, Assembly")]
The default ObjectFactoryLoader
uses this text value to create an instance of the specified type. However, a business developer could create her own factory loader by implementing IObjectFactoryLoader
from the Csla.Server
namespace and telling CSLA .NET to use her factory loader by setting the CslaObjectFactoryLoader
config value. Her custom factory loader could interpret the ObjectFactory
parameter value in any way, such as looking up the type name from an XML file.
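The default behavior is easy to sketch in isolation. This simplified loader (not the CSLA implementation, and omitting the IObjectFactoryLoader interface) resolves an assembly-qualified type name and instantiates it:

```csharp
using System;

// Simplified sketch of the default loader behavior: the factory name is
// treated as an assembly-qualified type name. The real implementation is
// Csla.Server.ObjectFactoryLoader; this version omits the interface.
public class SimpleFactoryLoader
{
    // resolve the type, throwing if it can't be found
    public Type GetFactoryType(string factoryName) =>
        Type.GetType(factoryName, throwOnError: true, ignoreCase: true);

    // create an instance of the resolved factory type
    public object GetFactory(string factoryName) =>
        Activator.CreateInstance(GetFactoryType(factoryName));
}
```

For example, GetFactory("System.Text.StringBuilder") returns a new StringBuilder, because types in the core library resolve without an explicit assembly name.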
In any case, the FactoryLoader
property returns an IObjectFactoryLoader
, which must implement the two methods listed in Table 15-16.
Table 15.16. Methods Defined by IObjectFactoryLoader
Method | Description |
---|---|
GetFactoryType() | Returns a Type object representing the type of the factory object to be used |
GetFactory() | Returns an instance of the factory object to be used |
The other data portal classes you've seen in this chapter use the GetFactoryType()
method to get type information about the factory object when necessary. In the end, though, it is the FactoryDataPortal
itself that calls GetFactory()
to create an instance of the factory object. That occurs within the various data portal methods.
As with the other data portal types, FactoryDataPortal
implements the Create(), Fetch(), Update()
, and Delete()
methods. These methods are all implemented using the same process:
Invoke the specified factory method.
Wrap any exceptions in a DataPortalException
.
Return any result.
The Fetch Method
Since all the methods are the same, I'll only walk through the Fetch()
method.
public DataPortalResult Fetch(
Type objectType, object criteria, DataPortalContext context)
{
try
{
DataPortalResult result = null;
if (criteria is int)
result = InvokeMethod(
context.FactoryInfo.FactoryTypeName
,context.FactoryInfo.FetchMethodName
,context);
else
result = InvokeMethod(
context.FactoryInfo.FactoryTypeName
,context.FactoryInfo.FetchMethodName
,criteria, context);
return result;
}
catch (Exception ex)
{
throw new DataPortalException(
context.FactoryInfo.FetchMethodName + " " + Resources.FailedOnServer
,ex, new DataPortalResult());
}
}
The InvokeMethod()
helper is used to make the actual method call on the factory object. There are two overloads: one with a criteria parameter and one without. Again, the "magic type" of int
is used to indicate that no criteria value was provided by the client code.
Any exception is wrapped into a DataPortalException
and thrown back to the caller.
The InvokeMethod Method
The InvokeMethod()
method follows these steps:
Create an instance of the factory object.
Call an Invoke()
method (if present).
Call the data access method.
Call an InvokeComplete()
method (if present).
If an exception occurs, call an InvokeError()
method (if present).
Return the result.
The optional Invoke(), InvokeComplete()
, and InvokeError()
methods may be implemented by a factory object if the factory wants to do pre- or post-processing.
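The "call it only if it exists" behavior can be sketched with plain reflection; this is a simplified stand-in for Csla.Reflection.MethodCaller.CallMethodIfImplemented(), with a hypothetical factory that implements only one hook:

```csharp
using System;
using System.Reflection;

// Simplified stand-in for MethodCaller.CallMethodIfImplemented():
// invokes the named public method if the target declares one, else no-op.
public static class Hooks
{
    public static bool CallIfImplemented(
        object target, string methodName, params object[] args)
    {
        MethodInfo method = target.GetType().GetMethod(
            methodName, BindingFlags.Public | BindingFlags.Instance);
        if (method == null)
            return false;                // optional hook not implemented
        method.Invoke(target, args);     // hook exists, so call it
        return true;
    }
}

// Hypothetical factory that implements only the Invoke hook
public class PartialFactory
{
    public int InvokeCount { get; private set; }
    public void Invoke(object context) => InvokeCount++;
    // InvokeComplete and InvokeError are intentionally absent
}
```

Calling Hooks.CallIfImplemented(factory, "InvokeComplete", ctx) here simply returns false without error, which is the behavior the data portal relies on for its optional pre- and post-processing hooks.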
Here's the InvokeMethod()
code:
private DataPortalResult InvokeMethod(
string factoryTypeName, string methodName
,object e, DataPortalContext context)
{
object factory = FactoryLoader.GetFactory(factoryTypeName);
Csla.Reflection.MethodCaller.CallMethodIfImplemented(
factory, "Invoke", context);
object result = null;
try
{
result = Csla.Reflection.MethodCaller.CallMethod(
factory, methodName, e);
Csla.Reflection.MethodCaller.CallMethodIfImplemented(
factory, "InvokeComplete", context);
}
catch (Exception ex)
{
Csla.Reflection.MethodCaller.CallMethodIfImplemented(
factory, "InvokeError", ex);
throw;
}
return new DataPortalResult(result);
}
You can see how each step is implemented by using the MethodCaller
component from Csla.Reflection
to dynamically invoke the various methods. Because the names of the create, fetch, update, and delete methods can be specified as parameters to the ObjectFactory
attribute, the actual method invocation uses the supplied name.
result = Csla.Reflection.MethodCaller.CallMethod( factory, methodName, e);
This particular overload of InvokeMethod()
provides a criteria parameter to the method. The other overload does the same work, but doesn't provide a criteria parameter.
result = Csla.Reflection.MethodCaller.CallMethod( factory, methodName);
In both overloads, the steps are the same, and the result is that the appropriate data access method is invoked on the factory object.
Whether the business class has the ObjectFactory
attribute or not, the result of a call to DataPortalSelector
is that a business object is created, retrieved, updated, or deleted. The result, or resulting DataPortalException
, is returned to Csla.Server.DataPortal
and from there to the client's Csla.DataPortal
and to the client code itself.
Earlier in the chapter, I discussed how the data portal supports creating, retrieving, and updating child objects. The Csla.DataPortal
class exposes a set of methods that a business class can use to trigger these behaviors, but those methods ultimately delegate to the ChildDataPortal
class where the real work occurs.
The ChildDataPortal
class implements the methods listed in Table 15-17.
Table 15.17. Methods Implemented by ChildDataPortal
Method | Description |
---|---|
Create() | Creates an instance of the child object and invokes its Child_Create method |
Fetch() | Creates an instance of the child object and invokes its Child_Fetch method |
Update() | Invokes Child_Insert, Child_Update, or Child_DeleteSelf as appropriate |
The overall behavior of these methods is similar to how the SimpleDataPortal
works with root objects, in that the ChildDataPortal
is responsible for calling predefined methods on the child object. There are even pre- and post-processing methods so the business object developer can be notified before and after the data access method has been invoked.
The Create()
and Fetch()
methods are virtually identical. The Update()
method does the same things, but as with the other Update()
methods you've seen, it uses the child object's IsNew and IsDeleted properties to decide which Child_XYZ
method to invoke.
Because these methods are so similar to those in SimpleDataPortal
, I won't repeat the code here. You should understand that each method follows a set of steps:
Create or get an instance of the business object.
Call the object's Child_OnDataPortalInvoke()
method (if implemented).
Set the object's status (new, dirty, etc.) as appropriate.
Call the appropriate Child_XYZ
method on the object.
Call the object's Child_OnDataPortalInvokeComplete()
method (if implemented).
In case of exception, call the object's Child_OnDataPortalException()
method (if implemented) and throw a Csla.DataPortalException
.
Return the resulting business object (if appropriate).
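The child Update() routing described above can be expressed as a small standalone function (a sketch, not CSLA code) that names the Child_XYZ method the child data portal would call:

```csharp
using System;

// Standalone sketch (not CSLA code) of the child Update() routing: picks
// the Child_XYZ method based on the child's IsDeleted/IsNew state.
// Returns null when a new child is deleted, since there is nothing to do.
public static class ChildUpdateRouting
{
    public static string Route(bool isDeleted, bool isNew)
    {
        if (isDeleted)
            return isNew ? null : "Child_DeleteSelf";
        return isNew ? "Child_Insert" : "Child_Update";
    }
}
```

Note how this mirrors the root-object routing in SimpleDataPortal, only with Child_XYZ method names.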
As you can see, these are the same steps followed by the SimpleDataPortal
methods, except Child_XYZ
methods are called instead of DataPortal_XYZ
methods. The same dynamic delegate scheme is used, which means LateBoundObject
and MethodCaller
are leveraged to call the methods dynamically.
By supporting child objects, the data portal enables the same coding model for both root and child objects, which makes it easier to build and maintain business objects.
The final major area of functionality provided by the data portal is that it manages context information to provide a level of location transparency between the client and server. Specifically, it allows the business application to pass data from the client to the server and from the server to the client on each data portal call, in addition to the actual call itself. The data portal uses this capability itself to pass security and culture information from the client to the server.
You've already seen most of the code that implements the context-passing behaviors. Csla.DataPortal
is responsible for passing the client context to the server and for updating the client's context with any changes from the server. Csla.Server.DataPortal
is responsible for setting the server's context based on the data passed from the client, and for returning the global context from the server back to the client.
To maintain the context and pass it between client and server, several objects are used. Let's discuss them now.
Earlier in the chapter, you saw how the Csla.DataPortal
class implements static methods used by business developers to interact with the data portal. Each of those methods dealt with context data—creating a DataPortalContext
object to pass to the server. On the server, Csla.Server.DataPortal
uses the data in DataPortalContext
to set up the server's context to match the client.
Of course, the phrase "on the server" is relative, since the data portal could be configured to use the LocalProxy
. In that case, the "server-side" components actually run in the same process as your client code. Obviously, the context data is already present in that case, so there's no need to transfer it; and the data portal includes code to short-circuit the process when the server-side data portal components are running locally.
The DataPortalContext
object is created and initialized in Csla.DataPortal
within each data method.
dpContext = new Server.DataPortalContext(GetPrincipal(), proxy.IsServerRemote);
The DataPortalContext
object is a container for the set of context data to be passed from the client to the server. The data it contains is defined by the fields declared in DataPortalContext
.
private IPrincipal _principal;
private bool _remotePortal;
private string _clientCulture;
private string _clientUICulture;
private ContextDictionary _clientContext;
private ContextDictionary _globalContext;
The ContextDictionary
type is a special dictionary type that is compatible with CSLA .NET for Silverlight. CSLA .NET for Silverlight is outside the scope of this book, but you should understand that ContextDictionary
is a subclass of HybridDictionary
, enhanced so it can be serialized to and from Silverlight.
These data elements were described in Table 15-5 earlier in the chapter. The key here is that DataPortalContext
is marked as Serializable
, and therefore when it is serialized, all the values in these fields are also serialized.
The values are loaded when the DataPortalContext
object is created. The two culture values are pulled directly off the client's current Thread
object. The _clientContext
and _globalContext
values are set based on the values in Csla.ApplicationContext
.
Each of the values is exposed through a corresponding property so they can be used to set up the context data on the server.
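The culture capture and restore can be sketched in isolation; this simplified stand-in for DataPortalContext (the class name here is illustrative) snapshots the calling thread's culture names and can reapply them later, roughly as SetContext() does on the server:

```csharp
using System;
using System.Globalization;

// Simplified stand-in for DataPortalContext's culture handling: snapshot
// the calling thread's culture names, then reapply them elsewhere.
public class CultureSnapshot
{
    public string ClientCulture { get; }
    public string ClientUICulture { get; }

    public CultureSnapshot()
    {
        // pulled from the current thread, as the text describes
        ClientCulture = CultureInfo.CurrentCulture.Name;
        ClientUICulture = CultureInfo.CurrentUICulture.Name;
    }

    // roughly what SetContext() does on the server with these values
    public void Apply()
    {
        CultureInfo.CurrentCulture = new CultureInfo(ClientCulture);
        CultureInfo.CurrentUICulture = new CultureInfo(ClientUICulture);
    }
}
```

In the real framework the snapshot is serialized along with the rest of the DataPortalContext, so the server thread ends up matching the client's culture settings.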
Setting the Server Context
The server's context is set by Csla.Server.DataPortal as the first step in each of the four data methods. The work is handled by the SetContext()
method in Csla.Server.DataPortal
. This method follows this basic flow:
Do nothing if the "server" code is running on the client; otherwise, call ApplicationContext
to set the client and global context collections.
Set the server Thread
to use the client's culture settings.
If using Windows authentication, set the AppDomain
to use the WindowsPrincipal
.
If using custom authentication, set the server Thread
to use the IPrincipal
supplied from the client.
Let's walk through the code in SetContext()
that implements these steps. First is the check to see if the "server" code is actually running locally in the client process (using the LocalProxy
in the channel adapter).
if (!context.IsRemotePortal) return;
If the server code is running locally, then there's no sense setting any context data, because it is already set up. If the server code really is running remotely, though, the context data does need to be set up on the server, starting by restoring the client and global context data.
ApplicationContext.SetContext(
context.ClientContext, context.GlobalContext);
Remember that the client context comes from the client to the server only, while the global context will ultimately be returned to the client, reflecting any changes made on the server. The ApplicationContext
also has an ExecutionLocation
property that business code can use to determine whether the code is currently executing on the client or the server. This value must be set to indicate that execution is on the server.
ApplicationContext.SetExecutionLocation(
ApplicationContext.ExecutionLocations.Server);
Like the client context, the two culture values flow from the client to the server. They are used to set the current Thread
object on the server to match the client settings.
System.Threading.Thread.CurrentThread.CurrentCulture =
new System.Globalization.CultureInfo(context.ClientCulture);
System.Threading.Thread.CurrentThread.CurrentUICulture =
new System.Globalization.CultureInfo(context.ClientUICulture);
Of the two, perhaps the most important is the CurrentUICulture
, as this is the setting that dictates the language used when retrieving resource values such as those used throughout the CSLA .NET framework.
Finally, if custom authentication is being used, the IPrincipal
object representing the user's identity is passed from the client to the server. It must be set on the current Thread or HttpContext
as the CurrentPrincipal
or User
to effectively impersonate the user on the server. Csla.ApplicationContext
handles this.
if (ApplicationContext.AuthenticationType == "Windows")
{
// When using integrated security, Principal must be null
if (context.Principal != null)
{
System.Security.SecurityException ex =
new System.Security.SecurityException(
Resources.NoPrincipalAllowedException);
ex.Action = System.Security.Permissions.SecurityAction.Deny;
throw ex;
}
// Set .NET to use integrated security
AppDomain.CurrentDomain.SetPrincipalPolicy(
PrincipalPolicy.WindowsPrincipal);
}
else
{
// We expect some Principal object
if (context.Principal == null)
{
System.Security.SecurityException ex =
new System.Security.SecurityException(
Resources.BusinessPrincipalException + " Nothing");
ex.Action = System.Security.Permissions.SecurityAction.Deny;
throw ex;
}
ApplicationContext.User = context.Principal;
}
There's a lot going on here, so let's break it down a bit. First, there's the check to determine which authentication scheme is in use. If Windows integrated (AD) security is being used, then Windows itself handles any impersonation, based on the configuration of the host (IIS, WAS, COM+, etc.). In that case, the IPrincipal
value passed from the client must be null
; otherwise, it is invalid, and the code throws an exception.
This check on the principal value ensures that both the client and the server are using the same authentication scheme. If the client is using custom authentication while the server is using Windows integrated security, this exception will be thrown. Custom authentication was discussed in Chapter 12.
If the server is configured to use custom authentication, however, the rest of the code is executed. In that case, the first step is to make sure that the client did pass a valid IPrincipal
object to the server. "Valid" in this case means that it isn't null
. Given a valid IPrincipal
object, the server's principal value is set to match that of the client. An invalid IPrincipal
value results in an exception being thrown.
Remember that an IAuthorizeDataPortal
implementation can be provided to further authorize each data portal request, as I discussed earlier in this chapter.
Clearing the Server Context
Once all the server-side processing is complete, the server clears the context values on its Thread
object. This is done to prevent other code from accidentally gaining access to the client's context or security information. Csla.Server.DataPortal
handles this in its ClearContext()
method.
private static void ClearContext(DataPortalContext context)
{
// if the dataportal is not remote then
// do nothing
if (!context.IsRemotePortal) return;
ApplicationContext.Clear();
if (ApplicationContext.AuthenticationType != "Windows")
ApplicationContext.User = null;
}
This method is called at the end of each data method in Csla.Server.DataPortal
. Notice that it calls Csla.ApplicationContext
to clear the client and global context values. Then if custom authentication is being used, Csla.ApplicationContext
is called to set the principal value to null
, removing the IPrincipal
value from the server thread.
Using the DataPortalContext
object, Csla.DataPortal
and Csla.Server.DataPortal
convey client context data to the server. That's great for the client context, client culture, and client IPrincipal
, but the global context data needs to be returned to the client when the server is done. This is handled by Csla.Server.DataPortalResult
on a successful call, and Csla.Server.DataPortalException
in the case of a server-side exception.
The Csla.Server.DataPortalResult
object is primarily responsible for returning the business object that was created, retrieved, or updated on the server back to the client. However, it also contains the global context collection from the server.
When the DataPortalResult
object is created by FactoryDataPortal
or SimpleDataPortal
, it automatically pulls the global context data from ApplicationContext
.
public DataPortalResult(object returnObject)
{
_returnObject = returnObject;
_globalContext = ApplicationContext.GetGlobalContext();
}
This way, the global context data is carried back to the client along with the business object.
Where Csla.Server.DataPortalResult
returns the business object and context to the client for a successful server-side operation, Csla.Server.DataPortalException
returns that data in the case of a server-side exception. Obviously, the primary responsibility of DataPortalException
is to return the details about the exception, including the server-side stack trace, back to the client. This information is captured when the exception is created.
public DataPortalException(
string message, Exception ex, DataPortalResult result)
: base(message, ex)
{
_innerStackTrace = ex.StackTrace;
_result = result;
}
Notice that a DataPortalResult
object is required as a parameter to the constructor. This DataPortalResult
object is returned to the client as part of the exception, thus ensuring that both the business object (exactly as it was when the exception occurred) and the global context from the server are returned to the client as well.
At this point, you have walked through all the various types and classes used to implement the core mobile object and data access functionality in the framework.
This chapter has walked through the various types and classes in the framework that enable both mobile objects and data access. The details of mobile objects are managed by a concept called the data portal. You should understand that the data portal incorporates several areas of functionality:
Logical separation of business and data access layers
Consistent coding model for root and child objects
Channel adapter design pattern
Flexible distributed transactional support
Message router design pattern
Context and location transparency
The channel adapter provides for flexibility in terms of how (or if) the client communicates with an application server to run server-side code. The distributed transactional support abstracts the use of Enterprise Services or System.Transactions
. The message router handles the routing of client calls to your business components on the server, minimizing the coupling between client and server by enabling a single point of entry to the server. Behind the scenes, the data portal provides transparent context flow from the client to the server and back to the client. This includes implementing impersonation when using custom authentication.
At this point, the discussion of the CSLA .NET framework implementation is nearly complete. Chapter 16 will cover various other features of the framework, and then Chapters 17 onward will discuss how to use the framework to build applications.