Chapter 18. Example Data Access

In Chapter 17, I walked through the business objects for the ProjectTracker application from Chapter 3. The focus in Chapter 17 is on how the objects implement properties, along with business, validation, and authorization rules. In this chapter, I discuss how those same objects implement persistence and data access, focusing on how their factory methods use the CSLA .NET data portal and how data is retrieved from and written to the database.

In this chapter, I don't walk through all the code in the ProjectTracker business object library. Instead, I focus on providing examples of how to implement common types of business objects and how to establish various object relationships. For the complete code, refer to the code download for this book, available at www.lhotka.net/cslanet or www.apress.com/book/view/1430210192.

Before digging into the ProjectTracker code, I will spend some time discussing some of the options available in terms of designing a data access layer and how your business objects and the data portal can interact with that layer. The ProjectTracker application uses just one technique, but there are several techniques to choose from.

Data Access Layer Design

As I discuss in Chapter 1, it is very important that any application use logical layering to organize its code. The most important thing to avoid is mixing UI and business logic: keep UI code out of the business layer and business logic out of the UI layer. A close second, in terms of importance, is to avoid mixing business and data access logic.

Business logic should exist independently of the data storage or data access concepts. In other words, your application's business logic should be the same regardless of whether the data is stored in SQL Server or an XML file. The logic should not change just because you use ADO.NET or LINQ to SQL to interact with the data store.

Similarly, your choice of data storage technology or data access technique shouldn't be affected when you change your business logic. Changing the way you calculate, validate, or authorize your object properties shouldn't require a switch from using a DataSet to using the ADO.NET Entity Framework.

Achieving this level of separation is best done by logically layering your application so you have a formal business layer that is separate from your data access layer. This allows either layer to change (within reason) without affecting the other layer.

CSLA .NET supports several different models, each with good and bad points. I'll discuss the models first and then the reasoning you might use to choose between them.

Data Access Models

The CSLA .NET data portal supports two models for business object persistence, which means there are two basic approaches for interacting with a data access layer:

  • The data portal calls the business object's DataPortal_XYZ methods, which call the data access layer.

  • The data portal calls an object factory object, which is either the data access layer or which calls the data access layer.

Each technique can be used in several different ways. Table 18-1 lists the most common designs for the data access layer.

Table 18-1. Common Data Access Layer Designs

  Data Portal Mode   Data Access Model                  Description

  DataPortal_XYZ     Embedded in object                 The data access code is embedded directly in the
                                                        DataPortal_XYZ methods in the business object.

  DataPortal_XYZ     In separate assembly               The data access code is in a separate assembly,
                                                        which is invoked by code in the DataPortal_XYZ
                                                        methods.

  ObjectFactory      Data access in factory             The data portal invokes a factory object, which
                                                        is the data access layer.

  ObjectFactory      Data access in separate assembly   The data portal invokes a factory object, which
                                                        in turn invokes the data access layer in a
                                                        separate assembly.

I'll give a brief description of each technique and then follow up with a discussion around the good and bad points of each.

Using the DataPortal_XYZ Methods

The traditional way to use the data portal is to allow the data portal to invoke DataPortal_XYZ methods on the business object. This means the business object is in charge of its own persistence. It can directly encapsulate the data access code or it can invoke some external data access object, but the business object is in control.

This is a simple, powerful model that typically results in the simplest code. The technique works with nearly any data access technology, including the following:

  • ADO.NET Connection, Command, DataReader objects

  • LINQ to SQL

  • ADO.NET Entity Framework

  • Remote XML services

  • XML data files (or other text files)

Because the data portal helps manage the state of the business objects, the code in the DataPortal_XYZ and Child_XYZ methods can be very focused on interacting with the data source or data access layer and getting or setting the business object's fields.

Data Access in DataPortal_XYZ Methods

The simplest model is to put the data access code directly into the DataPortal_XYZ methods. When coupled with direct use of ADO.NET Command and DataReader objects, this model also offers superior performance. However, this model does put the data access code directly in the business class, which somewhat blurs the separation of business and data access layers. I think there's still pretty good logical separation because the data access code is encapsulated in clearly defined DataPortal_XYZ methods, but the separation isn't as clean as having the data access code in totally separate objects or assemblies.

An example of this model could look like this DataPortal_Fetch() method:

private void DataPortal_Fetch(SingleCriteria<CustomerEdit, int> criteria)
{
  using (var ctx = ConnectionManager<SqlConnection>.GetManager("MyDb"))
  {
    using (var cm = ctx.Connection.CreateCommand())
    {
      cm.CommandType = CommandType.Text;
      cm.CommandText = "SELECT Id, Name FROM Customer WHERE id=@id";
      cm.Parameters.AddWithValue("@id", criteria.Value);
      using (var dr = new SafeDataReader(cm.ExecuteReader()))
      {
        dr.Read();
        LoadProperty(IdProperty, dr.GetInt32("Id"));
        LoadProperty(NameProperty, dr.GetString("Name"));
      }
    }
  }
}

The code is clean and concise, and performance is about as good as you can get because the DataReader is the lowest level object used to retrieve data from a database.

It is important to realize that this technique can be used with LINQ to SQL, the ADO.NET Entity Framework, or virtually any other data access technology. The basic approach is the same in all cases: get the data from the database and put it into the object's fields, or take the data from the object's fields and put it into the database.
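For example, a LINQ to SQL version of the same DataPortal_Fetch() might look like the following sketch. It assumes a designer-generated MyDbDataContext class with a Customers table; those names are illustrative, not part of this chapter's actual code.

```csharp
private void DataPortal_Fetch(SingleCriteria<CustomerEdit, int> criteria)
{
  // ContextManager is the CSLA .NET helper for sharing a LINQ to SQL
  // DataContext; "MyDb" is the assumed connection string name
  using (var ctx = ContextManager<MyDbDataContext>.GetManager("MyDb"))
  {
    // query the generated Customers table for the requested row
    var data = (from c in ctx.DataContext.Customers
                where c.Id == criteria.Value
                select c).Single();
    // copy entity values into the object's fields, bypassing the
    // property set blocks (and thus authorization/validation rules)
    LoadProperty(IdProperty, data.Id);
    LoadProperty(NameProperty, data.Name);
  }
}
```

Either way, the shape of the method is the same; only the mechanism used to get the values changes.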

DataPortal_XYZ Methods Invoke Separate Data Access

A slightly more complex model is to separate the data access code into an external object, which might be in a separate assembly that is referenced by the business assembly. This approach provides slightly better separation between layers, though it does add some complexity.

In this case, you must give careful thought to the interface that will be exposed by your data access layer because that is the interface that will be invoked by the business object. The way you design your data access layer's interface is often dictated by several factors:

  • The data access technology being used (ADO.NET, LINQ to SQL, etc.)

  • The need to change from one data access technology to another (e.g., from LINQ to SQL to the ADO.NET Entity Framework)

  • The need to change from one data storage technology to another (e.g., from Oracle to SQL Server)

Detailed exploration of all these design factors is outside the scope of this book, but you should employ standard concepts such as loose coupling, interfaces, polymorphism, and abstraction to accomplish these goals.
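For instance, one way to apply those concepts is to hide the data access layer behind an interface so the business object depends only on the abstraction. This is just an illustrative sketch; the ICustomerDal and CustomerData names are my own, and the later examples in this chapter return a SafeDataReader instead, trading some of this looser coupling for performance.

```csharp
// contract exposed by the data access layer; the business object
// codes against this interface, not a concrete implementation
public interface ICustomerDal : IDisposable
{
  CustomerData GetCustomer(int id);
  void UpdateCustomer(CustomerData data);
}

// simple DTO used to move values across the layer boundary
public class CustomerData
{
  public int Id { get; set; }
  public string Name { get; set; }
}
```

With a contract like this in place, you can swap an ADO.NET implementation for a LINQ to SQL or ADO.NET Entity Framework implementation without touching the DataPortal_XYZ methods.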

A DataPortal_Fetch() method, where the data access layer is external but still uses ADO.NET, might look like this:

private void DataPortal_Fetch(SingleCriteria<CustomerEdit, int> criteria)
{
  using (var dal = new CustomerDal())
  {
    using (SafeDataReader dr = dal.GetCustomer(criteria.Value))
    {
      dr.Read();
      LoadProperty(IdProperty, dr.GetInt32("Id"));
      LoadProperty(NameProperty, dr.GetString("Name"));
    }
  }
}

Notice that the LoadProperty() method calls are the same as in the previous example. The primary difference is that this code doesn't open the database connection, nor does it set up or execute the Command object. The interaction with the database is abstracted away into the CustomerDal object and its GetCustomer() method.

The CustomerDal object is very likely in a separate assembly that is referenced by the business assembly. That data access assembly only needs to be deployed to the application server (if the data portal is configured to use a remote application server), so the code that interacts with the database is not deployed to the client. Arguably, this increases the security of the application to some degree.

The CustomerDal class might look like this:

public class CustomerDal : IDisposable
{
  private SqlConnection _cn;
  private SqlCommand _cm;

  public CustomerDal()
  {
    var conn = System.Configuration.ConfigurationManager.
                      ConnectionStrings["MyDb"].ConnectionString;
    _cn = new SqlConnection(conn);
    _cn.Open();
  }

  public SafeDataReader GetCustomer(int id)
  {
    _cm = _cn.CreateCommand();
    _cm.CommandType = CommandType.Text;
    _cm.CommandText = "SELECT Id, Name FROM Customer WHERE id=@id";
    _cm.Parameters.AddWithValue("@id", id);
    return new SafeDataReader(_cm.ExecuteReader());
  }

  public void Dispose()
  {
    if (_cm != null)
      _cm.Dispose();
    if (_cn != null)
      _cn.Dispose();
  }
}

There are many ways you might choose to design your data access objects; this is just one example. But this example illustrates some important ideas. Your data access objects must encapsulate database interaction and must safely dispose of or close all connection, command, context, or other data access objects. They must define a set of public methods that can be invoked by the business object, accepting parameters and returning types that are available to both the business and data access assemblies.

As you'll see in the ProjectTracker code later in this chapter, LINQ to SQL is a very nice technology for creating this sort of data access assembly without having to write any code by hand. The ADO.NET Entity Framework offers the same simplicity but with more powerful mapping between the entity objects and the database schema.

However, both technologies incur some overhead when compared to the use of raw ADO.NET shown here, so you need to choose whether you value simplicity and autogenerated code or maximum performance.

Using an Object Factory

A business object can be decorated with an ObjectFactory attribute, which causes the data portal to create an instance of the factory object and to invoke methods on the factory object rather than on the business object. I discuss this capability in Chapter 15.

Using the ObjectFactory technique essentially reverses the responsibility of the objects, when compared to the DataPortal_XYZ approach. If you use an ObjectFactory attribute, the factory object assumes full responsibility for creating the business object, moving data into and out of it, and managing the object's state (such as IsNew and IsDirty). The data portal abdicates all these responsibilities to the factory object.

Typically, the factory objects are in a different assembly, separate from the business assembly. In some cases, the factory assembly directly contains the data access code, while in other cases it may just coordinate the process, invoking yet another assembly that contains the data access code.

The factory assembly can reference the business assembly and can have access to the business object types. This is the direct opposite of the DataPortal_XYZ model, where the business assembly references the data access assembly.

This means that you need to come up with some way for the factory object to load the business object's fields with data, ideally without breaking encapsulation. Such a requirement is very challenging to say the least. Usually factory objects do break encapsulation, using reflection or dynamic method invocation to cheat and interact with the business object's internal state directly.

It is better in terms of performance and object-oriented design to avoid cheating, which means that your business objects need to expose some public interface that can be used by the factory object. Sadly, such a public interface can be used by the UI or other objects as well, and that can be a problem. A compromise is to have the factory class in the same assembly as the business classes so you can use internal methods. And this can work well as long as the data access code is in another assembly.

Data Access in Object Factory

When using a factory object, you may choose to put the data access code directly into the factory object itself. The data portal will invoke this factory object, and the factory object must manage all interaction with the business object itself.

This approach is the simplest way to use the object factory model but does restrict flexibility. In particular, if you want the data access layer to be in its own assembly, the factory classes must be in that separate assembly. That prevents use of internal scoped methods on the business object to manage interaction between the factory and business objects.

Assuming the factory object is in a separate assembly, here's an example of a factory class with a Fetch() method:

public class CustomerFactory : ObjectFactory
{
  public object Fetch(SingleCriteria<CustomerEdit, int> criteria)
  {
    using (var ctx = ConnectionManager<SqlConnection>.GetManager("MyDb"))
    {
      using (var cm = ctx.Connection.CreateCommand())
      {
        cm.CommandType = CommandType.Text;
        cm.CommandText = "SELECT Id, Name FROM Customer WHERE id=@id";
        cm.Parameters.AddWithValue("@id", criteria.Value);
        using (var dr = new SafeDataReader(cm.ExecuteReader()))
        {
          dr.Read();
          var result = (BusinessLibrary.CustomerEdit)Activator.CreateInstance(
            typeof(BusinessLibrary.CustomerEdit), true);
          result.LoadData(dr);
          MarkOld(result);
          return result;
        }
      }
    }
  }
}

Inheriting from ObjectFactory (in the Csla.Server namespace) is strictly optional but does provide access to the MarkNew(), MarkOld(), and MarkAsChild() methods. If you don't inherit from ObjectFactory, you'll need to come up with your own way to invoke these methods on the business objects.

Notice that this implementation requires that CustomerEdit implement a LoadData() method. That method may look like this:

public void LoadData(SafeDataReader dr)
  {
    LoadProperty(IdProperty, dr.GetInt32("Id"));
    LoadProperty(NameProperty, dr.GetString("Name"));
  }

The important point is that all the data access code (opening the database, executing any queries, etc.) is in the factory class, which is in an assembly that is only deployed to the application server (assuming the data portal is configured to use a remote app server). And encapsulation is preserved because the business object manages its own internal fields within the LoadData() method.

This is not really ideal, though, because the business object is tightly coupled to the DataReader provided by the factory object. And this only works because the DataReader type is shared by both the business and factory assemblies. If the factory assembly used LINQ to SQL or the ADO.NET Entity Framework, this model wouldn't work because the business assembly would not have access to the entity types defined in the factory assembly, so it would be impossible to write a LoadData() method. In these cases, you must either use reflection to load the business object with data or switch to the more complex but flexible model where the factories and data access are in separate assemblies.

Object Factory Invokes Separate Data Access

The more flexible and powerful way to use an object factory is to design the factory objects to interact with a separate data access layer. In this case, you have three logical elements:

  • Business object

  • Factory object

  • Data access object

The factory object may be in the business assembly, in its own assembly, or in the data access assembly. There's a strong advantage to having it in the business assembly, however, because then it can use internal methods of the business object to get and set the business object's data. This is the approach I'll show here.

In the following code, I assume that the factory object is in the business assembly and the data access code is in a separate assembly. I also assume the use of the ADO.NET Entity Framework to create the data access assembly. The factory object could look like this:

public class CustomerFactory : ObjectFactory
{
  public object Fetch(SingleCriteria<CustomerEdit, int> criteria)
  {
    using (var ctx = ObjectContextManager<MyDbEntities>.GetManager("MyDb"))
    {
      var data = from r in ctx.ObjectContext.Customer
                 where r.Id == criteria.Value
                 select r;
      var result = (BusinessLibrary.CustomerEdit)Activator.CreateInstance(
        typeof(BusinessLibrary.CustomerEdit), true);
      result.LoadData(data.Single());
      MarkOld(result);
      return result;
    }
  }
}

This code uses an entity model defined using ADO.NET Entity Framework in another assembly to interact with the database. To do this, the business assembly must reference the data access assembly so the factory object (and the business object) has access to the entity types.

The business object's LoadData() method can be internal and can use the entity types:

internal void LoadData(Customer data)
  {
    LoadProperty(IdProperty, data.Id);
    LoadProperty(NameProperty, data.Name);
  }

Having the factory class in the same assembly as the business class allows the use of the internal scope, so LoadData() isn't visible to UI or other code. However, having the factory class in the business assembly does reduce the physical separation of the two layers slightly.

You could move the factory class into its own assembly, separate from the data access assembly. Both the factory and business assemblies would reference the data access assembly, and the code shown here would work the same way, except that the LoadData() method would have to be public in that case.

As you can see, there are many options for building and using a data access layer with business objects. The DataPortal_XYZ model puts the business object in charge of the process, while the ObjectFactory model puts all responsibility on the factory object. Either way, you can choose to put the data access code into a separate assembly, at the cost of increased complexity.

In fact, these choices are all a matter of balancing competing concerns. I discuss some of these issues next.

Balancing Design Issues

There are several important concepts that must be balanced when designing a data access layer for business objects, including the following:

  • Performance

  • Encapsulation

  • Layering

  • Complexity

Obviously, performance is an important consideration, and the decisions you make concerning data access can have a major impact on overall application performance.

A primary goal of CSLA .NET is to enable object-oriented principles in design and programming of the business layer, so those principles should be honored.

As I discuss in Chapter 1 and earlier in this chapter, it is important to maintain clear separation between the business and data access layers. This increases maintainability and reduces the cost of the application over its lifetime.

Even layering can't solve all problems. Sadly, it is quite easy to come up with overly complex solutions to many problems. Trying to balance performance with layering can lead to some very complex solutions, and that reduces maintainability and drives up the cost of software. You must be alert for complexity and try to avoid it when possible.

The core challenge in design is to preserve encapsulation, while not harming performance (overmuch) or increasing complexity too badly.

Business objects contain their own data, which follows the basic object-oriented principle of encapsulation. Objects encapsulate behaviors and the data required to implement those behaviors. Other objects should never be allowed to directly manipulate the data encapsulated within an object because that would break encapsulation.

However, the data access layer needs access to the data in each business object. The data access layer needs to get data from the database and put it into the business object and get data from the object to put into the database. The question of how to do this without breaking encapsulation and without causing serious performance issues is perhaps the single biggest challenge you'll face when designing a data access layer for your business objects.

Note

In this discussion, I assume you are not putting the data access code directly into the DataPortal_XYZ methods. That approach is the simplest model and avoids most of the trade-offs discussed here, but it does have the drawback of potentially blurring the boundary between the business and data access layers, which is why many people choose to accept the complexity of having an external data access object and/or assembly.

This issue occurs whether you use private or managed backing fields, as discussed in Chapter 7. Either way, the management of the object's field values is private to the business object, and other objects can't (and shouldn't) directly manipulate the business object's state.

You are left with two primary options:

  • Have the business object manipulate its own state data.

  • Break encapsulation by having other objects manipulate the business object's state data.

Preserving Encapsulation

If you choose to preserve encapsulation, your data access layer must accept or provide data to the business object without manipulating the business object directly. For example, consider this DataPortal_Fetch() method from a business object:

private void DataPortal_Fetch(SingleCriteria<CustomerEdit, int> criteria)
  {
    using (var dal = new CustomerDal())
    {
      CustomerData data = dal.GetCustomer(criteria.Value);
      LoadProperty(IdProperty, data.Id);
      LoadProperty(NameProperty, data.Name);
    }
  }

In this example, the data access layer is implemented in an object called CustomerDal, which exposes a GetCustomer() method. That method doesn't set the fields of the business object directly but instead returns some DTO of type CustomerData. The DTO might look like this:

public class CustomerData
  {
    public int Id { get; set; }
    public string Name { get; set; }
  }

The business object can then load its own field data from that DTO. Each LoadProperty() method call copies a value from the DTO to the business object's field. This preserves encapsulation because no object directly manipulates the internal state of another object.

This technique provides a high level of flexibility and decoupling, so it is very powerful. It does incur some overhead, however, because the CustomerDal object copies the data from the database into a CustomerData object (the DTO) and the data is copied from there into the fields of the business object.

Earlier in the chapter, I discussed some code where the data access layer returns an open DataReader, and that avoids this double copy of the data. The drawback of that technique is that it couples the data access and business objects to raw ADO.NET, so while it is faster, it is less flexible.

There are numerous variations on this theme, for example, using an ObjectFactory and LoadData() method as shown earlier in the chapter. And you might use LINQ to SQL or the ADO.NET Entity Framework to create your DTO or entity objects rather than coding them by hand. But the overall design concept remains consistent and provides good separation of concerns but with some overhead in terms of performance and complexity.
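The same pattern works in the other direction when saving the object. A DataPortal_Update() might look like this sketch, assuming CustomerDal exposes an UpdateCustomer(CustomerData) method (my assumption; the update methods aren't shown in the examples above):

```csharp
protected override void DataPortal_Update()
{
  using (var dal = new CustomerDal())
  {
    // copy the object's fields into a DTO without triggering
    // authorization rules, using the protected ReadProperty() helper...
    var data = new CustomerData
      {
        Id = ReadProperty(IdProperty),
        Name = ReadProperty(NameProperty)
      };
    // ...and hand the DTO to the data access layer to persist
    dal.UpdateCustomer(data);
  }
}
```

Again, the business object copies its own field values, so no other object ever touches its internal state.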

Breaking Encapsulation

You may choose to break encapsulation, allowing the data access layer to directly manipulate the data inside each business object. Generally speaking, breaking encapsulation is a bad thing because it increases coupling and reduces maintainability. However, many ORM technologies are designed specifically to manipulate the internal states of objects. This is somewhat more acceptable than writing such code yourself because it is a framework or tool that is breaking encapsulation, not user code.

If you choose to use a preexisting ORM tool that is capable of creating your business object instances and loading the objects with data, you will probably want to use the ObjectFactory model discussed earlier in the chapter. In this case, the ObjectFactory acts as a coordinator, invoking the ORM tool so it can create your business objects and load them with data. When that's done, the factory object can return the business object to the data portal.

There are many ORM tools available in the .NET ecosystem, and a detailed discussion of using them is outside the scope of this book. You should know that the primary motivation for adding the ObjectFactory concept to CSLA .NET is to open the possibility of using ORM tools in this manner. Whether any specific ORM tool can or can't create CSLA .NET business objects will depend on that tool.

You may choose to break encapsulation yourself by writing your own data access component that directly loads business objects with data. There are several techniques you can use in this case, including the following:

  • Use reflection to set the private fields of the business object or to call the protected LoadProperty() methods.

  • Use dynamic method invocation to set the private fields of the business object or to call the protected LoadProperty() methods.

  • Create an interface that exposes a duplicate set of the business object properties and implement this interface to call ReadProperty() and LoadProperty(), thus bypassing authorization and validation logic.

In all cases, the point is to allow an external object (the data access object) to directly manipulate the data of the business object without invoking the normal authorization, business, or validation rules of the business object.
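As a sketch of the first technique, a data access object might set a private backing field through reflection. The field name string here is an assumption, and the performance cost of reflection noted below applies:

```csharp
// sketch: load a business object's private backing field via reflection,
// bypassing the property's authorization and validation rules
private static void SetPrivateField(
  object businessObject, string fieldName, object value)
{
  // note: GetField with NonPublic only searches the declaring type,
  // not private fields of base classes
  var field = businessObject.GetType().GetField(
    fieldName,
    System.Reflection.BindingFlags.Instance |
    System.Reflection.BindingFlags.NonPublic);
  if (field == null)
    throw new System.MissingFieldException(fieldName);
  field.SetValue(businessObject, value);
}

// usage (assuming a private field named "_name"):
//   SetPrivateField(customer, "_name", "ACME");
```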

Ideally, this will be done in a way that doesn't allow other code, such as UI code, to also break encapsulation. Breaking encapsulation is a form of cheating, and if it becomes widespread, it can be the source of many bugs, which will reduce the maintainability and increase the cost of your application.

Unfortunately, the "safe" solutions such as reflection do have negative implications for performance, while the "unsafe" solutions such as a public interface can be quite fast but open the door to abuse by UI code or other application code.

Overall, I recommend that you preserve encapsulation if at all possible. While it may take a little extra work to do so, the result is a more maintainable application that preserves the core tenets of object-oriented programming and design.

At this point, you should have an understanding of the flexibility offered by CSLA .NET in terms of object persistence. The rest of this chapter focuses on the implementation of a data access layer for the example ProjectTracker objects I discuss in Chapter 17.

Data Access Objects

In Chapter 17, I walk through the implementation of the example ProjectTracker business objects, following the design from Chapter 3. The focus in Chapter 17 is on property declarations as well as business, validation, and authorization rules.

The rest of this chapter focuses on those same objects but in terms of persistence. I discuss the factory and data access methods and how they are implemented.

For this application I have chosen to use LINQ to SQL to create a data access layer in a project named ProjectTracker.DalLinq. You'll find this project in the ProjectTracker solution as part of the code download.

Using LINQ to SQL

LINQ to SQL and the ADO.NET Entity Framework are the newest data access technologies available in .NET. Many people using the ADO.NET Entity Framework also use LINQ to Entities, which results in data access code that is very similar to LINQ to SQL code. In other words, you can use LINQ to SQL or the ADO.NET Entity Framework to accomplish the same task, using almost the same code shown here.

I chose to use LINQ to SQL because it was released several months ahead of the ADO.NET Entity Framework, so I have had more experience with this particular technology. Also, I like the way LINQ to SQL supports the use of stored procedures. The Visual Studio LINQ to SQL designer automatically wraps stored procedures with a set of strongly typed .NET methods, making them very easy to use.

I like stored procedures a great deal because they offer clean separation between the physical database structure and the logical interactions I want to have with the database from my application. This separation maintains a good boundary between the data access and data storage layers discussed in Chapter 1.

Note

As time goes on, I fully expect to enhance ProjectTracker to use the ADO.NET Entity Framework. If you download the latest version of CSLA .NET and the ProjectTracker reference application, you will likely find examples of this technology, alongside LINQ to SQL.

Previous versions of the ProjectTracker reference application (for CSLA .NET 3.0 and older) used raw ADO.NET Connection, Command, and DataReader objects. Those techniques continue to work as in the past, and you should feel free to download and examine older versions of the ProjectTracker code for examples of using that approach. All code is available from www.lhotka.net/cslanet.

The data access code in ProjectTracker is designed following the DataPortal_XYZ model discussed earlier in this chapter. This means the data portal invokes a DataPortal_XYZ method, which uses the LINQ to SQL data access objects to interact with the database. The data access objects are in a separate assembly, which only needs to be deployed to the server (assuming the data portal is configured to use a remote application server).

The ProjectTracker.DalLinq Project

The data access code is in a project named ProjectTracker.DalLinq. I use the Visual Studio LINQ to SQL designer to create the data access layer, only writing one simple class manually.

I added a LINQ to SQL class, shown in Figure 18-1, by adding a new item to the project.

Figure 18-1. Adding a LINQ to SQL class

The result is a blank designer surface. You can drag and drop items onto this surface from the Data Connections node in the Server Explorer window in Visual Studio. I chose to drag all the tables and stored procedures from the PTracker.mdf database onto the designer. The tables go on the left, the stored procedures on the right. Figure 18-2 shows the resulting designer surface.

Figure 18-2. Populated LINQ to SQL designer surface

It is important to realize that the Visual Studio LINQ to SQL designer does not generate the right code for stored procedures that return multiple result sets. The getProject and getResource stored procedures do return multiple result sets but the wrapper code created by LINQ to SQL will only return the first result set.

Note

This is not a limitation of LINQ to SQL itself but rather of the Visual Studio designer. If you don't use the designer, you can create LINQ to SQL wrapper code that does return multiple result sets.

The reason these stored procedures return multiple result sets is because they already exist in the database from previous versions of ProjectTracker. The older version of ProjectTracker used direct ADO.NET calls to get a DataReader containing both the parent and child data in a single call to the database.

While I could have rewritten the stored procedures, splitting each one into two, I chose to use this as an opportunity to illustrate how you might deal with similar preexisting situations. The options are limited:

  • Rework the stored procedures and all existing code that uses them.

  • Create new stored procedures (keeping the old ones) and maintain both sets into the future.

  • Use the dynamic query features of LINQ to SQL to directly query the database.

Neither of the first two options is attractive, as each requires either changing existing code or maintaining duplicate code into the future. So I have decided, for this example, to use the third option and avoid those particular stored procedures altogether.

This has the benefit of allowing ProjectTracker to illustrate how to use LINQ to SQL with and without stored procedures, and you can see examples of both techniques as you look through the code.

The ProjectTracker.DalLinq project also includes Security.dbml, which contains the LINQ to SQL objects for the security database.

These dbml files represent the LINQ to SQL objects corresponding to the underlying database. When the project is built, this information is used to generate .NET classes in the shape of each table and .NET methods that wrap each stored procedure. Effectively, the dbml files define the data access layer for the application.

Other than dragging and dropping items from the Server Explorer onto the LINQ to SQL designer surfaces, I wrote absolutely no code to create the data access layer.

The one class I did manually add to the project simply contains the names of the databases, corresponding to the connection string entries expected in the application's config file:

public class Database
{
  public const string PTracker = "PTracker";
  public const string Security = "Security";
}

These values are used by the DataPortal_XYZ code so that code doesn't need to hard-code the database names. The result is that the code in the ProjectTracker.Library assembly (which is deployed to the client) doesn't even know the name of the key needed to find the connection string for the database. That information is contained in the data access assembly.

Next, I walk through the code for some of the business objects to illustrate how they interact with these data access objects.

Business Class Implementation

With all the data access logic encapsulated by the LINQ to SQL objects in the ProjectTracker.DalLinq project, the business objects in ProjectTracker.Library can implement their persistence code.

The ProjectTracker.Library assembly does need a reference to the data access library, as shown in Figure 18-3.

With this set up, I'll walk through the first few classes in detail. The other classes are very similar, so for those I'll discuss only the key features. Of course, the complete code for all classes is available in the code download for the book.

Figure 18-3. ProjectTracker.Library references ProjectTracker.DalLinq

Project

The Project class is an editable root class that represents a single project in the application. Root objects are the only objects that can be directly retrieved or updated through the data portal, so this is a good place to start.

Also, a Project contains a child list named ProjectResources, which contains ProjectResource child objects. I'll walk through each of these classes so you can see how a parent-child, or one-to-many, relationship is handled.

Factory Methods

Creating and retrieving an object is managed by factory methods as discussed in Chapter 1. The factory design pattern is powerful because it abstracts the creation or retrieval process from both the caller (typically the UI) and the business object itself.

In a typical CSLA-style business object, the default constructor is declared with non-public scope (either private or protected) to force the use of the factory methods for creating or retrieving the object. While this is not strictly necessary, it is a good idea. Without a non-public constructor, it is far too easy for a UI developer to forget to use the factory method and instead use the new keyword to create the object, leading to bugs in the UI code.

I'll start by looking at the factory methods:

public static Project NewProject()
{
  return DataPortal.Create<Project>();
}

public static Project GetProject(Guid id)
{
  return DataPortal.Fetch<Project>(new SingleCriteria<Project, Guid>(id));
}

public static void DeleteProject(Guid id)
{
  DataPortal.Delete(new SingleCriteria<Project, Guid>(id));
}

The NewProject() method creates a new instance of Project, which loads default values from the database if required. To do this, it simply calls DataPortal.Create() to trigger the data portal process, as discussed in Chapter 15. The data portal automatically checks the per-type authorization, as discussed in Chapter 12, to determine whether the user is authorized to add a new Project to the system. If the user isn't authorized, there's no sense even creating a new instance of the object, so an exception is thrown.

Note

Ideally, this authorization exception would never be thrown. Good UI design dictates that the UI should hide or disable the options that would allow users to add a new object if they aren't authorized to do so. If that is done properly, the user should never be able to even attempt to create a new object if they aren't authorized. The UI developer can call Csla.Security.AuthorizationRules.CanCreateObject() to determine whether to disable or hide UI elements.
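For example, a Windows Forms UI might apply this check as follows. This is a hypothetical sketch: btnNewProject is an assumed control name, but CanCreateObject() is the real CSLA method named above.

```csharp
// Hypothetical UI sketch: disable the "new project" option when
// the current user lacks create authorization for Project.
// (btnNewProject is an assumed control name.)
btnNewProject.Enabled =
  Csla.Security.AuthorizationRules.CanCreateObject(typeof(Project));
```

The same check can drive menu items or navigation links; the point is that the UI consults the per-type authorization rules rather than letting the user trigger an exception.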

The GetProject() factory method retrieves an existing Project object, which is populated with data from the database. This method accepts the primary key value for the data as a parameter and passes it to DataPortal.Fetch() through a new SingleCriteria object.

The data portal ultimately creates a new Project object and calls its DataPortal_Fetch() method to do the actual data access. The criteria object is passed through this process, so the DataPortal_Fetch() method will have access to the Guid key value.

Again, the data portal checks the per-type authorization rules to determine whether the user is allowed to get an existing object. If not, an exception is thrown.

There's also a static method to allow immediate deletion of a Project. Authorization is checked first to ensure that the user is allowed to delete the data. DeleteProject() accepts the primary key value for the data and uses it to create a SingleCriteria object. It then calls DataPortal.Delete() to trigger the deletion process, ultimately resulting in the object's DataPortal_Delete() method being invoked to do the actual deletion of the data.

Non-Public Constructor

As noted earlier, all business objects should include a non-public default constructor, as shown here:

private Project()
{ /* require use of factory methods */ }

This is straight out of the template from Chapter 5. It ensures that client code must use the factory methods to create or retrieve a Project object, and it provides the data portal with a constructor that it can call via reflection.

Overriding Save

The default implementation of Save() is good: it checks to ensure the object is valid and dirty before saving. It also checks the per-type authorization rules discussed in Chapter 12 to ensure the user is allowed to update or delete the object.

In some advanced scenarios, you may need to override Save() to customize its behavior, but that should not normally be required. If you do override Save(), you should also override the asynchronous BeginSave() equivalent:

public override Project Save()
{
  // do custom work here
  return base.Save();
}

public override void BeginSave(
  EventHandler<Csla.Core.SavedEventArgs> handler,
  object userState)
{
  // do custom work here
  base.BeginSave(handler, userState);
}

Like most classes, the Project class has no reason to override these methods. I'm showing this code here as an example of what you might do if necessary.

The factory methods and Save() method ultimately invoke either DataPortal_XYZ methods on the business object or methods on a factory object. Since the Project class doesn't have an ObjectFactory attribute, the data portal invokes DataPortal_XYZ methods.

Data Access

The Data Access region implements the DataPortal_XYZ methods that support the creation, retrieval, addition, updating, and deletion of a Project object's data. Because this is an editable root object, it implements all the possible methods:

  • DataPortal_Create()

  • DataPortal_Fetch()

  • DataPortal_Insert()

  • DataPortal_Update()

  • DataPortal_DeleteSelf()

  • DataPortal_Delete()

The fetch and delete factory methods pass a SingleCriteria object to the data portal. SingleCriteria is a CSLA .NET type that contains a single criteria value. As discussed in Chapters 4 and 5, if you need more complex criteria you need to create your own criteria class.
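As a hypothetical sketch (the class and property names here are invented for illustration), a custom criteria class carrying two values might look like this:

```csharp
// Hypothetical sketch: a custom criteria class with two values.
// It must be serializable so the data portal can move it to the
// application server.
[Serializable]
private class ProjectCriteria : Csla.CriteriaBase
{
  public Guid Id { get; private set; }
  public string Name { get; private set; }

  public ProjectCriteria(Guid id, string name)
    : base(typeof(Project))
  {
    Id = id;
    Name = name;
  }
}
```

The corresponding DataPortal_Fetch() overload would then accept a ProjectCriteria parameter instead of SingleCriteria.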

In this sample application, the DataPortal_XYZ methods are relatively straightforward, simply calling the LINQ to SQL data access objects. Keep in mind, however, that these routines could be much more complex, interacting with multiple databases, merging data from various sources, and doing whatever is required to retrieve and update data in your business environment.

Handling Transactions

As discussed in Chapters 2 and 15, the data portal supports three transactional models: manual, Enterprise Services, and System.Transactions. The preferred model for performance and simplicity is System.Transactions, and so that is the model used in the sample application.

This means that each method that updates data is decorated with the Transactional(TransactionalTypes.TransactionScope) attribute. Since this tells the data portal to wrap the code in a TransactionScope object, there's no need to write any LINQ to SQL, ADO.NET, or stored procedure transactional code. All the transaction details are handled by the TransactionScope object from System.Transactions.
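Conceptually, the data portal's behavior for a Transactional method is equivalent to the following. This is a simplified sketch, not the data portal's actual implementation; obj stands for the business object being saved.

```csharp
// Simplified sketch of what the Transactional attribute implies.
using (var scope = new System.Transactions.TransactionScope())
{
  obj.DataPortal_Insert(); // an exception here skips Complete(),
  scope.Complete();        // so the transaction rolls back
}
```

Because Complete() is only reached when no exception is thrown, rollback happens automatically; this is why the DataPortal_XYZ code shouldn't catch exceptions, as discussed next.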

As you look at the data access code, notice that it never actually catches any exceptions. The code leverages using blocks to ensure that the LINQ to SQL context object (and thus any database connection, command, and DataReader objects) is disposed properly, but no exceptions are caught. The reasons for this are twofold:

  • First, the code uses the Transactional attribute, which causes it to run within a System.Transactions transactional context. An exception automatically causes the transaction to be rolled back, which is exactly the desired result. If the exceptions were caught, the transaction wouldn't be rolled back and the application would misbehave.

  • Second, if an exception occurs, normal processing shouldn't continue. Instead, the client code needs to be notified that the operation failed and why. Returning the exception to the client code allows the client code to know that there was a problem during data access. The client code can then choose how to handle that the object couldn't be created, retrieved, updated, or deleted. Remember that the original exception is wrapped in a DataPortalException, which includes extra information that can be used by the client when handling the exception.

DataPortal_Create

The DataPortal_Create() method is called by the data portal when it is asked to create a new Project object. In some cases, this method loads the new object with default values from the database, and in simpler cases it may load hard-coded defaults or set no defaults at all.

The Project object has no need for loading default values, so the DataPortal_Create() method simply loads some default, hard-coded values rather than talking to the database:

[RunLocal]
protected override void DataPortal_Create()
{
  LoadProperty(IdProperty, Guid.NewGuid());
  Started = System.Convert.ToString(System.DateTime.Today);
  ValidationRules.CheckRules();
}

The method is decorated with the RunLocal attribute because it doesn't do any data access but rather sets hard-coded or calculated default values. With the RunLocal attribute on the method, the data portal short-circuits its processing and runs this method locally. If the method did load default values from the database, the RunLocal attribute would be omitted so that the data portal could run the code on the application server.

Note

In a more complex object, in which default values come from the database, this method would call data access code to retrieve those values and use them to initialize the object's fields. In that case, the RunLocal attribute would not be used.

Notice how the code directly alters the state of the object. For instance, the Id property value is set to a new Guid value by calling LoadProperty(). If your object uses private backing fields, as discussed in Chapter 7, it would directly set the field values.

Since not all properties can safely be set, it is best to be consistent and always load the fields directly. Setting a property causes the authorization, business, and validation rules for that property to be checked. That's a lot of overhead for data you are simply loading from the database. It is also possible that values from the database won't meet the business or validation rules, or that the user isn't allowed to change the property (even though you want to load it from the database).

In short, when loading data you shouldn't set the properties; you should set the backing fields (managed or private).

Of course, the default values set in a new object might not conform to the object's validation rules. In fact, the Name property starts out as an empty string value, which means it is invalid, since that is a required property. This is specified in the AddBusinessRules() method in Chapter 17 by associating this property with the StringRequired rule method.

To ensure that all validation rules are run against the newly created object's data, ValidationRules.CheckRules() is called. Calling this method with no parameters causes it to run all the validation rules associated with all properties of the object, as defined in the object's AddBusinessRules() method.

Tip

BusinessBase includes a default DataPortal_Create() implementation that is marked as RunLocal and calls ValidationRules.CheckRules(). For many simple objects, you do not need to override DataPortal_Create() at all.

The end result is that the new object is loaded with default values and those values are validated. The new object is then returned by the data portal to the factory method (NewProject(), in this case), which typically returns it to the UI code.

DataPortal_Fetch

More interesting and complex is the DataPortal_Fetch() method, which is called by the data portal to tell the object that it should load its data from the database (or other data source). The sample method accepts a SingleCriteria object as a parameter, which contains the criteria data needed to identify the data to load:

private void DataPortal_Fetch(SingleCriteria<Project, Guid> criteria)
{
  using (var ctx =
    ContextManager<ProjectTracker.DalLinq.PTrackerDataContext>.
    GetManager(ProjectTracker.DalLinq.Database.PTracker))
  {
    // get project data
    var data = (from p in ctx.DataContext.Projects
                where p.Id == criteria.Value
                select p).Single();
    LoadProperty(IdProperty, data.Id);
    LoadProperty(NameProperty, data.Name);
    LoadPropertyConvert<SmartDate, System.DateTime?>(
      StartedProperty, data.Started);
    LoadPropertyConvert<SmartDate, System.DateTime?>(
      EndedProperty, data.Ended);
    LoadProperty(DescriptionProperty, data.Description);
    _timestamp = data.LastChanged.ToArray();

    // get child data
    LoadProperty(
      ResourcesProperty,
      ProjectResources.GetProjectResources(
        data.Assignments.ToArray()));
  }
}

This method is not marked with either the RunLocal or Transactional attributes. Since it does interact with the database, RunLocal is inappropriate; that attribute would prevent the data portal from running this code on the application server, causing runtime errors when the database is inaccessible from the client. Also, since this method doesn't update any data, it doesn't need transactional protection, so there's no need for the Transactional attribute.

Remember that the data portal, as discussed in Chapter 15, will invoke the appropriate DataPortal_Fetch() overload based on the type of criteria parameter value. It is possible to have multiple DataPortal_Fetch() overloads with different criteria parameter types.

You should also notice that no exceptions are caught by this code. If the requested Id value doesn't exist in the database, the Single() call will throw an exception, which will automatically flow back through the data portal to the UI code, wrapped in a DataPortalException. This is intentional, as it gives the UI full access to the exception's details so it can decide how to notify the user that the data doesn't exist in the database.
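UI code might handle a failed fetch along these lines. This is a hedged sketch: projectId and ShowError() are assumed names, but DataPortalException and its BusinessException property are real CSLA types.

```csharp
// Hypothetical UI sketch: catching a failed fetch.
try
{
  var project = Project.GetProject(projectId);
}
catch (Csla.DataPortalException ex)
{
  // BusinessException exposes the original exception thrown
  // during DataPortal_Fetch (for example, no matching row)
  ShowError(ex.BusinessException.Message); // ShowError is assumed
}
```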

The first thing the method does is use the ContextManager to open a LINQ to SQL context object and connection to the database:

using (var ctx =
  ContextManager<ProjectTracker.DalLinq.PTrackerDataContext>.
  GetManager(ProjectTracker.DalLinq.Database.PTracker))

The ContextManager, ConnectionManager, and ObjectContextManager classes are in the Csla.Data namespace and provide an abstract way to work with LINQ to SQL context objects, ADO.NET connection objects, and ADO.NET Entity Framework context objects, respectively. These are discussed in Chapter 16.

Then, within the using block, a LINQ query is used to retrieve the project data:

var data = (from p in ctx.DataContext.Projects
            where p.Id == criteria.Value
            select p).Single();

Note the use of the criteria.Value parameter. This is the value that is provided by the code that calls the factory method. Since this query returns only one row of data, the Single() method is used to return just the one value rather than a list of values. This means that the data field contains a value of type ProjectTracker.DalLinq.Project, a type defined by the data access layer.

The data field is then used to populate the object like this:

LoadProperty(IdProperty, data.Id);
LoadProperty(NameProperty, data.Name);
LoadPropertyConvert<SmartDate, System.DateTime?>(
  StartedProperty, data.Started);
LoadPropertyConvert<SmartDate, System.DateTime?>(
  EndedProperty, data.Ended);
LoadProperty(DescriptionProperty, data.Description);
_timestamp = data.LastChanged.ToArray();

There is no need to cast the values, as they are already in .NET types thanks to LINQ to SQL. The only exceptions are the DateTime? values, which must be converted to a SmartDate. This is handled by using the LoadPropertyConvert() method instead of LoadProperty(). These helper methods are discussed in Chapter 7.

Also, notice that the LastChanged column is retrieved and placed into the _timestamp byte array. This value is never exposed outside the object but is maintained for later use if the object is updated. LastChanged is a timestamp value in the database table and is used by the updateProject stored procedure to implement first-write-wins optimistic concurrency. The object must be able to provide updateProject with the original timestamp value that was in the table when the data was loaded.

Since the _timestamp value is never exposed as a property of the object, it isn't managed by the field manager. Instead, I'm storing it using a simple private field.

At this point, the Project object's fields have been loaded. But Project contains a collection of child objects and they need to be loaded as well. LINQ to SQL is helpful here because it understands the relationships between the database tables (see Figure 18-2) that it inferred from the database schema. This means the data field has an Assignments property which returns a list of Assignment entity objects from the data access layer. This list is passed to the factory method for the ProjectResources object, an editable child list:

LoadProperty(
  ResourcesProperty,
  ProjectResources.GetProjectResources(
    data.Assignments.ToArray()));

It is important to realize that the data.Assignments.ToArray() call causes LINQ to SQL to make a second query to the database to retrieve the list of assignments. LINQ to SQL uses a lazy loading scheme and only retrieves data when necessary.

You can force LINQ to SQL to eager load the Assignments data by assigning a DataLoadOptions object to the context before executing the first LINQ query:

var options = new System.Data.Linq.DataLoadOptions();
options.LoadWith<ProjectTracker.DalLinq.Project>(c => c.Assignments);
ctx.DataContext.LoadOptions = options;

The LoadWith() method specifies that any time a Project is retrieved, the Assignments data should be retrieved immediately.

Now that the object contains data loaded directly from the database, it is an "old" object. The definition of an old object is that the primary key value in the object matches a primary key value in the database. In Chapter 15, the data portal is implemented to automatically call the object's MarkOld() method before DataPortal_Fetch() is invoked. That ensures that the object's IsNew and IsDirty properties will return false and that your code in DataPortal_Fetch() can use those properties if desired.

DataPortal_Insert

The DataPortal_Insert() method handles the case in which a new object needs to insert its data into the database. It is invoked by the data portal as a result of the UI calling the object's Save() method when the object's IsNew property is true.

As with all the methods that change the database, this one is marked with the Transactional attribute to ensure that the code is transactionally protected:

[Transactional(TransactionalTypes.TransactionScope)]
protected override void DataPortal_Insert()
{
  using (var ctx =
    ContextManager<ProjectTracker.DalLinq.PTrackerDataContext>.
    GetManager(ProjectTracker.DalLinq.Database.PTracker))
  {
    // insert project data
    System.Data.Linq.Binary lastChanged = null;
    ctx.DataContext.addProject(
      ReadProperty(IdProperty),
      ReadProperty(NameProperty),
      ReadProperty(StartedProperty),
      ReadProperty(EndedProperty),
      ReadProperty(DescriptionProperty),
      ref lastChanged);
    _timestamp = lastChanged.ToArray();

    // update child objects
    FieldManager.UpdateChildren(this);
  }
}

Like DataPortal_Fetch(), the DataPortal_Insert() method opens a LINQ to SQL context and thus a connection to the database. It then invokes the addProject stored procedure, using the strongly typed .NET wrapper created by the LINQ to SQL designer:

ctx.DataContext.addProject(
  ReadProperty(IdProperty),
  ReadProperty(NameProperty),
  ReadProperty(StartedProperty),
  ReadProperty(EndedProperty),
  ReadProperty(DescriptionProperty),
  ref lastChanged);

Again, there is no need to cast the values because the addProject() method is strongly typed using .NET types. Any conversion to database types is handled by LINQ to SQL automatically. Even the SmartDate types work here because the SmartDate type can be cast to DateTime? and the compiler handles that automatically.

The lastChanged parameter is passed by ref because the stored procedure returns the timestamp value generated as the data is inserted into the table. This value is then set into the _timestamp field:

_timestamp = lastChanged.ToArray();

Other stored procedures may return values as well. For example, in the Resource class the addResource stored procedure not only returns a timestamp but also a database-generated int ID value:

ctx.DataContext.addResource(
  _lastName, _firstName, ref newId, ref newLastChanged);
_id = System.Convert.ToInt32(newId);
_timestamp = newLastChanged.ToArray();

Notice how both parameters are passed by ref.

Back in the Project class, once the call to addProject() is complete, the Project object's data is in the database. However, a Project contains child objects, and their data must be added to the database as well. The field manager is helpful here because it has an UpdateChildren() method that automatically updates all child objects contained by the current object:

FieldManager.UpdateChildren(this);

The UpdateChildren() method calls DataPortal.UpdateChild() to update each child object contained in the Project. In this case, that means the ProjectResources collection is updated. I show the code for the collection later in the "ProjectResources" section.

The fact that this is passed as a parameter is important because the child objects contained in the collection have a foreign key relationship to the project. By passing this as a parameter, the child objects have access to the Project object's Id property and thus to the foreign key value they require.

Note

You could just pass the Id property value but that would cause tighter coupling between the parent and its children. By passing a reference to the Project object, the parent doesn't know or care what information is required by its children because it has made all the information available. The less one object knows about the needs of another, the better.
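To illustrate, a child's insert method can read the foreign key from the parent reference it receives. This is a hypothetical sketch (InsertAssignmentRow and _resourceId are invented names); the real ProjectResource code appears later in the chapter.

```csharp
// Hypothetical sketch: a child's insert method receives the parent
// object that was passed to FieldManager.UpdateChildren(this).
private void Child_Insert(Project project)
{
  // project.Id supplies the foreign key value for the child row;
  // the parent doesn't know which of its values the child needs.
  InsertAssignmentRow(project.Id, _resourceId); // assumed helper
}
```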

Once DataPortal_Insert() is complete, the data portal automatically invokes the MarkOld() method on the object, ensuring that the IsNew and IsDirty properties are both false. Since the object's primary key value in memory now matches a primary key value in the database, it is not new; and since the rest of the object's data values match those in the database, it is not dirty.

Once all the objects have inserted their data, the database transaction completes. Recall that the DataPortal_Insert() method is decorated with the Transactional attribute, so the data portal automatically commits the transaction and disposes the TransactionScope object. If the object's code throws an exception, however, the transaction is automatically rolled back and no changes are made to the database.

DataPortal_Update

The DataPortal_Update() method is almost identical to DataPortal_Insert(), but it is called by the data portal when IsNew is false. It calls the updateProject() method on the data access layer, which invokes the updateProject stored procedure in the database:

ctx.DataContext.updateProject(
  ReadProperty(IdProperty),
  ReadProperty(NameProperty),
  ReadProperty(StartedProperty),
  ReadProperty(EndedProperty),
  ReadProperty(DescriptionProperty),
  _timestamp,
  ref lastChanged);
_timestamp = lastChanged.ToArray();

However, the updateProject stored procedure requires one extra parameter not required by addProject: the timestamp value for the LastChanged column. This is required for the first-write-wins optimistic concurrency implemented by the stored procedure. The goal is to ensure that multiple users can't overwrite each other's changes to the data. Other than this one extra parameter, the DataPortal_Update() method is very similar to DataPortal_Insert().

DataPortal_DeleteSelf

The final method that the data portal may invoke when the UI calls the object's Save() method is DataPortal_DeleteSelf(). This method is invoked if the object's IsDeleted property is true and its IsNew property is false. In this case, the object needs to delete itself from the database.

Remember that there are two ways objects can be deleted: through immediate or deferred deletion. Deferred deletion is when the object is loaded into memory, its IsDeleted property is set to true, and Save() is called. Immediate deletion is when a factory method is called and passes criteria identifying the object to the DataPortal.Delete() method.

In the case of immediate deletion, the data portal ultimately calls DataPortal_Delete(), passing the criteria object to that method so it knows which data to delete. Deferred deletion calls DataPortal_DeleteSelf(), passing no criteria object because the object is fully populated with data already.
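In UI terms, the two approaches look like this. This is a sketch; the Delete() method shown is the standard BusinessBase method for marking an object for deferred deletion, which applies only to objects that support it.

```csharp
// Deferred deletion (for objects that support it):
var project = Project.GetProject(id);
project.Delete();          // sets IsDeleted to true
project = project.Save();  // data portal calls DataPortal_DeleteSelf()

// Immediate deletion:
Project.DeleteProject(id); // data portal calls DataPortal_Delete()
```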

Note

Implementing the DataPortal_DeleteSelf() method is only required if your object supports deferred deletion. In the Project object, deferred deletion is not supported, but I am implementing the method anyway to illustrate how it is done.

The simplest way to implement DataPortal_DeleteSelf() is to create a criteria object and delegate the call to DataPortal_Delete():

[Transactional(TransactionalTypes.TransactionScope)]
protected override void DataPortal_DeleteSelf()
{
  DataPortal_Delete(
    new SingleCriteria<Project, Guid>(ReadProperty(IdProperty)));
}

You might wonder why the data portal couldn't do this for you automatically. But remember that the data portal has no idea what values are required to identify your business object's data. Thus, you must create the criteria object and pass it to DataPortal_Delete().

DataPortal_Delete

The final data portal method is DataPortal_Delete(). This method can be invoked from two sources: if immediate deletion is used, the UI calls the static deletion method, which triggers DataPortal_Delete() through the data portal; if deferred deletion is used, DataPortal_Delete() is called by DataPortal_DeleteSelf(). A SingleCriteria object is passed as a parameter, identifying the data to be deleted. Then it's just a matter of calling the deleteProject stored procedure as follows:

[Transactional(TransactionalTypes.TransactionScope)]
private void DataPortal_Delete(SingleCriteria<Project, Guid> criteria)
{
  using (var ctx =
    ContextManager<ProjectTracker.DalLinq.PTrackerDataContext>.
    GetManager(ProjectTracker.DalLinq.Database.PTracker))
  {
    // delete project data
    ctx.DataContext.deleteProject(criteria.Value);
    // reset child list field
    LoadProperty(ResourcesProperty, ProjectResources.NewProjectResources());
  }
}

The method just opens a LINQ to SQL context and calls the deleteProject stored procedure. That stored procedure deletes the project data and any assignment data related to the project.

The ProjectResources child collection is replaced with an empty collection. Since the stored procedure just removed the data in the database, the collection should be empty as well.

In the downloaded code for this book, you'll also see a code region for an Exists command, which I discuss later in the "Implementing Exists Methods" section.

ProjectResources

A Project object contains a collection of child objects, each one representing a resource assigned to the project.

Child objects follow the same basic structure as a root object, in that they have factory methods that call the data portal, and the data portal invokes methods to do the data access. Those are called the Child_XYZ methods.

Factory Methods

The Factory Methods region contains two factory methods and a private constructor, much like the Project class.

The two factory methods are declared with internal scope because they are not for use by the UI code. Rather, they are intended for use by the Project object that contains the collection:

internal static ProjectResources NewProjectResources()
{
  return DataPortal.CreateChild<ProjectResources>();
}

internal static ProjectResources GetProjectResources(
  ProjectTracker.DalLinq.Assignment[] data)
{
  return DataPortal.FetchChild<ProjectResources>(data);
}

private ProjectResources()
{ /* require use of factory methods */ }

In both cases, the factory methods simply call the data portal to create or retrieve the collection object. The CreateChild() method causes the data portal to invoke a Child_Create() method, while FetchChild() causes invocation of Child_Fetch(). The advantage of using the data portal in this manner is that it automatically manages the object state, setting IsChild, IsNew, and IsDirty automatically.

Data Access

The Data Access region in a child collection object is very similar to that of a root object such as Project. Instead of containing DataPortal_XYZ methods, it contains Child_XYZ methods. These are somewhat different but are intended to do the same things: create, retrieve, insert, update, and delete the object's data.

In the DataPortal_Fetch() method of Project, a call is made to the GetProjectResources() factory method in ProjectResources. That factory method calls the data portal's FetchChild() method, passing the array of data access objects from the Project object as a parameter:

private void Child_Fetch(
    ProjectTracker.DalLinq.Assignment[] data)
  {
    this.RaiseListChangedEvents = false;
    foreach (var value in data)
      this.Add(ProjectResource.GetResource(value));
    this.RaiseListChangedEvents = true;
  }

Remember that it is an array of Assignment objects from the data access assembly that is passed as a parameter. This code loops through the items in that array, adding a child object to the collection for each item. Each child object is created by calling a GetResource() factory method, which I discuss later in the chapter.

The RaiseListChangedEvents property is set to false and then back to true to suppress the ListChanged events that would otherwise be raised as each item is added.

The DataPortal_Insert() and DataPortal_Update() methods of Project call the field manager's UpdateChildren() method. That method finds each child object and calls DataPortal.UpdateChild() on it. The data portal then calls Child_Update() on a child collection object, and that method is responsible for updating all the objects contained in the collection.

The BusinessListBase class already provides an implementation of Child_Update() that does all the work. This means a normal business collection doesn't need any code of its own for this; it just works.

You can override Child_Update() if you need to do some extra processing in addition to updating the items in the collection.
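For example, here is a minimal sketch of what such an override might look like, assuming the params-array signature used by BusinessListBase in CSLA .NET 3.6; the extra processing shown is purely hypothetical:

```csharp
// Hypothetical override in a BusinessListBase-derived collection.
// The call to base.Child_Update() performs the normal work of
// updating each child object in the collection.
protected override void Child_Update(params object[] parameters)
{
  // (hypothetical) extra processing before the children are saved
  System.Diagnostics.Debug.WriteLine(
    string.Format("Saving {0} child objects", this.Count));
  base.Child_Update(parameters);
  // (hypothetical) extra processing after the children are saved
}
```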

I've now shown all the ProjectResources collection code.

ProjectResource

A Project contains a child collection: ProjectResources. The ProjectResources collection contains ProjectResource objects. As designed in Chapter 3, each ProjectResource object represents a resource that has been assigned to the project.

Also remember from Chapter 3 that ProjectResource shares some behaviors with ResourceAssignment and those common behaviors are factored out into an Assignment object. As you look through the code in ProjectResource, you'll see calls to the behaviors in Assignment, as ProjectResource collaborates with that other object to implement its own behaviors.

Factory Methods

Like ProjectResources, this object has two factory methods scoped as internal. These methods are intended for use only by the parent object, ProjectResources:

internal static ProjectResource NewProjectResource(int resourceId)
  {
    return DataPortal.CreateChild<ProjectResource>(
      resourceId, RoleList.DefaultRole());
  }

  internal static ProjectResource GetResource(
    ProjectTracker.DalLinq.Assignment data)
  {
    return DataPortal.FetchChild<ProjectResource>(data);
  }

  private ProjectResource()
  { /* require use of factory methods */ }

The NewProjectResource() factory method accepts a resourceId value as a parameter. That value is passed to CreateChild(), which results in invocation of Child_Create(), where the value is used to retrieve the corresponding resource data from the database.

Two parameters are passed to the CreateChild() method: the resourceId parameter (the ID of the resource being assigned to the project) and the role the resource will fill on the project. This second value is provided by calling the DefaultRole() method on the RoleList class from Chapter 17.

The GetResource() factory method is called by ProjectResources as it is being loaded with data from the database. Recall that ProjectResources gets an array of Assignment data access objects and loops through all the rows in that array, creating a new ProjectResource for each item. To do this, it calls the GetResource() factory method, passing the item as a parameter. This parameter is passed to the FetchChild() method of the data portal, which invokes Child_Fetch().

Using the data portal's CreateChild() and FetchChild() methods allows the data portal to automatically set the child object's IsChild, IsNew, and IsDirty properties.

Data Access

The Data Access region contains the code to initialize a new instance of the class when created as a new object or loaded from the database. It also contains methods to insert, update, and delete the object's data in the database.

Creating a New Object

When a Resource is assigned to a Project, a new ProjectResource object must be added to the ProjectResources collection. This process starts with the Assign() method of the ProjectResources collection, which invokes the NewProjectResource() factory method in the ProjectResource class, passing the Resource object's Id value as a parameter.
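A minimal sketch of what such an Assign() method might look like follows; the Contains(int) helper and the duplicate-assignment check are assumptions on my part, not necessarily the actual implementation:

```csharp
// Sketch of an Assign() method in the ProjectResources collection.
// Assumes a Contains(int) helper that checks whether the resource
// is already assigned to this project.
public void Assign(int resourceId)
{
  if (!Contains(resourceId))
  {
    // create the new child object and add it to the collection
    ProjectResource resource =
      ProjectResource.NewProjectResource(resourceId);
    this.Add(resource);
  }
  else
    throw new InvalidOperationException(
      "Resource already assigned to project");
}
```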

The NewProjectResource() factory method calls the data portal's CreateChild() method, which causes the data portal to create a new object and invoke the Child_Create() method:

private void Child_Create(int resourceId, int role)
  {
    var res = Resource.GetResource(resourceId);
    LoadProperty(ResourceIdProperty, res.Id);
    LoadProperty(LastNameProperty, res.LastName);
    LoadProperty(FirstNameProperty, res.FirstName);
    LoadProperty(AssignedProperty, Assignment.GetDefaultAssignedDate());
    LoadProperty(RoleProperty, role);
  }

This method is a bit different from the other methods so far because it has to initialize the new object with existing data, specifically with details about the resource being assigned to the project. To get these details, it retrieves a Resource object by calling the GetResource() factory method.

Note

Technically, this approach is a misuse of the Resource object. The responsibility of the Resource object is to edit resource data, and here I'm using it just to retrieve data. In this particular case, I know that there's no extra cost to using the existing Resource class because I'm using all its fields. However, if I were just using three out of dozens of properties, I'd create a read-only root object to retrieve only the data I needed.

The values from the Resource object are used to populate the new ProjectResource object so it represents the fact that a resource has been assigned to the project.

Loading an Existing Object

When a Project is being loaded from the database, it calls ProjectResources to load all the child objects. ProjectResources loops through all the items in the Assignment data access object array supplied by Project, creating a ProjectResource child object for each item. That item is ultimately passed into the Child_Fetch() method where the object's fields are set:

private void Child_Fetch(ProjectTracker.DalLinq.Assignment data)
  {
    LoadProperty(ResourceIdProperty, data.ResourceId);
    LoadProperty(LastNameProperty, data.Resource.LastName);
    LoadProperty(FirstNameProperty, data.Resource.FirstName);
    LoadProperty(AssignedProperty, data.Assigned);
    LoadProperty(RoleProperty, data.Role);
    _timestamp = data.LastChanged.ToArray();
  }

This code is very similar to the code in Project to load the object's fields from the data access object. Each property's value is loaded, along with the timestamp value for this row in the database, thus enabling implementation of first-write-wins optimistic concurrency for the child objects as well as the Project object itself.

Once the object has been populated with data directly from the database, it is not new or dirty. The data portal automatically calls the object's MarkOld() method to set the IsNew and IsDirty property values to false after invoking the Child_Fetch() method, ensuring these values are correct once the object has been loaded.

Inserting Data

When ProjectResources is asked to update its data into the database, its Child_Update() method loops through all the child objects. Any child objects with IsDeleted set to false and IsNew set to true have their Child_Insert() method called. The child object is responsible for inserting its own data into the database:

private void Child_Insert(Project project)
  {
    _timestamp = Assignment.AddAssignment(
      project.Id,
      ReadProperty(ResourceIdProperty),
      ReadProperty(AssignedProperty),
      ReadProperty(RoleProperty));
  }

In Chapter 3, the object design process reveals that ProjectResource and ResourceAssignment both create a relationship between a project and a resource using the same data in the same way. Due to this, the Child_Insert() method delegates most of its work to an AddAssignment() method in the Assignment class.

Looking at the Assignment class, you can see the AddAssignment() method:

public static byte[] AddAssignment(
    Guid projectId,
    int resourceId,
    SmartDate assigned,
    int role)
  {
    using (var ctx =
      ContextManager<ProjectTracker.DalLinq.PTrackerDataContext>.
      GetManager(ProjectTracker.DalLinq.Database.PTracker))
    {
      System.Data.Linq.Binary lastChanged = null;
      ctx.DataContext.addAssignment(
        projectId,
        resourceId,
        assigned,
        role,
        ref lastChanged);
      return lastChanged.ToArray();
    }
  }

This method simply calls the addAssignment stored procedure by using LINQ to SQL. It is centralized in the Assignment class because this same code is required by both ProjectResource and ResourceAssignment, so I've normalized the behavior into this one location.

I want to call your attention to an important fact: this method uses the ContextManager to get access to the LINQ to SQL context object. Remember that this method is called because the root Project object is being inserted or updated and that the field manager's UpdateChildren() method is invoked inside the using block for the ContextManager in the Project class.

This is important because this call to the ContextManager returns the preexisting context object, not a new one. That first call, in the Project class, creates and opens a new context and database connection. That one context is reused by any other objects until that top-level using block exits and the context object is disposed.

Because the Project object's DataPortal_Insert() and DataPortal_Update() methods are marked with the Transactional attribute, sharing the same database connection for all database interaction is necessary to avoid having System.Transactions accidentally use the DTC. In other words, reusing the existing context is a major boost to performance.

Updating Data

The Child_Update() method is very similar to Child_Insert(). It delegates the call to the UpdateAssignment() method in the Assignment class. This is because the data updated by ProjectResource is the same as that updated by ResourceAssignment, so the common behavior is factored out into the Assignment class.
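Based on that description, the method looks something like this sketch; the exact UpdateAssignment() parameter list is an assumption, modeled on the AddAssignment() method shown earlier plus the timestamp needed for concurrency checking:

```csharp
// Sketch of Child_Update() delegating to the Assignment class.
// The existing _timestamp is passed in for the first-write-wins
// concurrency check, and the new timestamp is stored on return.
private void Child_Update(Project project)
{
  _timestamp = Assignment.UpdateAssignment(
    project.Id,
    ReadProperty(ResourceIdProperty),
    ReadProperty(AssignedProperty),
    _timestamp,
    ReadProperty(RoleProperty));
}
```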

Deleting Data

Finally, there's the Child_DeleteSelf() method. Like Child_Update() and Child_Insert(), it too delegates the work to the Assignment class.
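A sketch of that delegation follows; the RemoveAssignment() helper's name and parameters are assumptions based on the pattern used by AddAssignment():

```csharp
// Sketch of Child_DeleteSelf() delegating to the Assignment class;
// the project and resource IDs identify the assignment row to delete.
private void Child_DeleteSelf(Project project)
{
  Assignment.RemoveAssignment(
    project.Id, ReadProperty(ResourceIdProperty));
}
```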

This completes the ProjectResource class and really the whole Project object graph. You should now have an understanding of how to persist an editable root object, a child list, and an editable child, some of the most common stereotypes in most business applications.

RoleList

The RoleList object is used by Project, ProjectResources, ProjectResource, and Assignment. This is a name/value list based on the Roles table in Chapter 3. The name (key) values are of type int, while the values are the string names of each role.

The CSLA .NET framework includes the NameValueListBase class to help simplify the creation of name/value list objects. Such objects are so common in business applications that it is worth having a base class to support this one specialized scenario.

Factory Methods

As in the template in Chapter 5, RoleList implements a form of caching to minimize load on the database. The GetList() factory method stores the collection in a static field and returns it if the object has already been loaded. It only goes to the database if the cache field is null:

private static RoleList _list;

  public static RoleList GetList()
  {
    if (_list == null)
      _list = DataPortal.Fetch<RoleList>();
    return _list;
  }

Note

If you need to filter the name/value list results, you'll need to pass a criteria object as a parameter to the Fetch() method just like you would with any other root object.
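A sketch of what such a criteria-based factory might look like, using a nested serializable criteria class; the class, method, and property names here are hypothetical:

```csharp
// Hypothetical filtered factory for a name/value list. The criteria
// object is serialized to the server and made available to a matching
// DataPortal_Fetch(FilterCriteria) overload.
[Serializable]
private class FilterCriteria
{
  public string NameFilter { get; private set; }
  public FilterCriteria(string nameFilter)
  { NameFilter = nameFilter; }
}

public static RoleList GetFilteredList(string nameFilter)
{
  // note: a filtered result like this should generally not be cached
  return DataPortal.Fetch<RoleList>(new FilterCriteria(nameFilter));
}
```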

In case the cache needs to be flushed at some point, there's also an InvalidateCache() method:

public static void InvalidateCache()
  {
    _list = null;
  }

By setting the static cache value to null, the cache is reset. The next time any code calls the GetList() method, the collection is reloaded from the database. This InvalidateCache() method will be called by the Roles collection later in the chapter.

Of course, there's also a non-public constructor in the class to enforce the use of the factory method to retrieve the object.

Data Access

Finally, there's the DataPortal_Fetch() method that loads the data from the database into the collection:

private void DataPortal_Fetch()
  {
    this.RaiseListChangedEvents = false;
    using (var ctx =
      ContextManager<ProjectTracker.DalLinq.PTrackerDataContext>.
      GetManager(ProjectTracker.DalLinq.Database.PTracker))
    {
      var data = from role in ctx.DataContext.Roles
                 select new NameValuePair(role.Id, role.Name);
      IsReadOnly = false;
      this.AddRange(data);
      IsReadOnly = true;
    }
    this.RaiseListChangedEvents = true;
  }

As with the DataPortal_Fetch() method in Project, the code here gets a LINQ to SQL context object and uses it to define a LINQ query. This query is a bit different, however, because it directly creates a list of NameValuePair objects, populated with the data from the database. That list of objects is then added to the RoleList collection by calling the AddRange() method.

Since the collection is normally read-only, the IsReadOnly property is set to false before loading the data and then restored to true once the data has been loaded.

The result is a fully populated name/value list containing the data from the Roles table in the database.

At this point you should understand how a name-value list is loaded from the database and how you can use simple caching with a static field to improve performance.

ProjectList and ResourceList

The ProjectList and ResourceList classes are both read-only collections of read-only data. They exist to provide the UI with an efficient way to get a list of projects and resources for display to the user.

On the surface, it might seem that you could simply retrieve a collection of Project or Resource objects and display their data. But that would mean retrieving a lot of data that the user may never use. Instead, it's more efficient to retrieve a small set of read-only objects for display purposes and then retrieve an actual Project or Resource object once the user has chosen which one to use.

The CSLA .NET framework includes the ReadOnlyListBase class, which is designed specifically to support this type of read-only list. Such a collection typically contains objects that inherit from ReadOnlyBase.

Because these two read-only collections are so similar in implementation, I'm only going to walk through the ResourceList class in this chapter. You can look at the code for ProjectList in the code download for this book.

Factory Methods

The ResourceList collection exposes two factory methods, EmptyList() and GetResourceList():

public static ResourceList EmptyList()
  {
    return new ResourceList();
  }

  public static ResourceList GetResourceList()
  {
    return DataPortal.Fetch<ResourceList>();
  }

  private ResourceList()
  { /* require use of factory methods */ }

The EmptyList() factory method simply returns a new, empty collection, while the GetResourceList() factory method uses the data portal to retrieve the list of data.

Data Access

The GetResourceList() factory method calls the data portal, which in turn ultimately calls the ResourceList object's DataPortal_Fetch() method to load the collection with data:

private void DataPortal_Fetch()
  {
    RaiseListChangedEvents = false;
    using (var ctx =
      ContextManager<ProjectTracker.DalLinq.PTrackerDataContext>.
      GetManager(ProjectTracker.DalLinq.Database.PTracker))
    {
      var data = from r in ctx.DataContext.Resources
                 select new ResourceInfo(r.Id, r.LastName, r.FirstName);
      IsReadOnly = false;
      this.AddRange(data);
      IsReadOnly = true;
    }
    RaiseListChangedEvents = true;
  }

It gets a LINQ to SQL context object and uses it to define a LINQ query. As in the RoleList class, the query directly creates a list of ResourceInfo objects. The ResourceInfo class has an internal constructor to make this easy:

internal ResourceInfo(int id, string lastname, string firstname)
  {
    _id = id;
    _name = string.Format("{0}, {1}", lastname, firstname);
  }

Those ResourceInfo objects are then added to the ResourceList collection using the AddRange() method.

Since ResourceList is a read-only collection, the IsReadOnly property is set to false before loading the data and true once the loading is complete.

The end result is a fully populated list of the resources in the database that can be displayed to the user by the UI.

Roles

The RoleList object provides a read-only, cached list of roles that a resource can hold when assigned to a project. But that list of roles needs to be maintained, and that is the purpose behind the Roles collection. This is an editable root collection that contains a list of editable child Role objects.

Factory Methods

The Factory Methods region implements a GetRoles() factory method, which just calls the data portal like the other factory methods you've seen. It also implements a non-public constructor to require use of the factory method.

But the constructor contains code that is quite important:

private Roles()
    {
      this.Saved += Roles_Saved;
      this.AllowNew = true;
    }

AllowNew is a protected property defined by BindingList, the base class of BusinessListBase. Setting this to true allows WPF and Windows Forms data binding to automatically add new child objects to the collection. Typically, this happens when the collection is bound to an editable grid control in the UI. Table 18-2 lists the properties you can use to control the behavior of an editable collection.

Table 18-2. Properties Used to Control an Editable Collection

AllowNew: If true, data binding can automatically add new child objects to the collection. It requires that you also override the AddNewCore() method. It defaults to false.

AllowRemove: If true, data binding can automatically remove items from the collection. It defaults to true.

AllowEdit: If true, data binding allows in-place editing of child objects in a grid control. It defaults to true.
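Because setting AllowNew to true requires an AddNewCore() override, the Roles collection needs something like the following sketch; the Role.NewRole() factory name is an assumption based on the factory pattern used throughout this chapter:

```csharp
// Sketch of the AddNewCore() override required when AllowNew is true;
// data binding calls this method to add a new child to the collection,
// typically when the user starts a new row in a bound grid control.
protected override object AddNewCore()
{
  var item = Role.NewRole(); // assumed factory on the child class
  this.Add(item);
  return item;
}
```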

Though a collection can opt to implement a static delete method to delete all the items in the database, that isn't a requirement for Roles, so it doesn't have such a method.

An event handler is also set up for the Saved event. This is so the RoleList cache can be invalidated any time the Roles collection has been saved. I discuss the Roles_Saved() method in the "Invalidating the Client-Side Cache" section.

Data Access

The Data Access region has some rather unique code. The reason for this is not that the collection is an editable root but rather that the Roles collection needs to invalidate the cache of any RoleList object when the list of roles is changed. In other words, when Save() is called on a Roles collection, any cached RoleList object must be reloaded from the database to get the new values.

Other than this requirement, the data access code is quite straightforward, so let's focus on the cache invalidation code.

Invalidating the Client-Side Cache

In the constructor, the Roles class hooks up the Roles_Saved() method to handle the Saved event. That event is raised when the Roles object has been saved, thus indicating that the list of role data has changed. In this event handler, the RoleList cache is invalidated:

private void Roles_Saved(
      object sender, Csla.Core.SavedEventArgs e)
    {
      RoleList.InvalidateCache();
    }

Changing the Roles collection also changes the role data behind RoleList, so invalidating the cache by calling RoleList.InvalidateCache() ensures the application is using current values.

Invalidating the Server-Side Cache

It is important to realize that there could be a cached RoleList collection on both the client and the server. Keep in mind that CSLA .NET enables mobile objects and that means that business object code can run on the client and on the server. If a business object has server-side code that uses a RoleList, it will cause a RoleList object to be created and cached on the server.

If you look at the ValidRole() rule method in Assignment, you'll see that it calls the GetList() factory on RoleList, loading a list of roles. If any business rule validation occurs for either a ProjectResource or ResourceAssignment object on the server, it would cause the list to be loaded and cached on the server. Though this doesn't occur in ProjectTracker, it is a very common scenario in many applications.

The great thing about the way the mobile objects work is that caching the RoleList on the client and server is automatic. You'll note that there's no special code to make that happen. But it does mean a bit of extra work in the Roles collection to ensure that any server-side cache is also flushed.

Recall from Chapter 15 that the data portal optionally invokes DataPortal_OnDataPortalInvoke() and DataPortal_OnDataPortalInvokeComplete() methods if your business object implements them. The former is invoked before any DataPortal_XYZ method is called, and the latter is invoked afterward. You can use the latter to run code on the server after the DataPortal_Update() method is complete:

protected override void DataPortal_OnDataPortalInvokeComplete(
    DataPortalEventArgs e)
  {
    if (ApplicationContext.ExecutionLocation ==
        ApplicationContext.ExecutionLocations.Server &&
          e.Operation == DataPortalOperations.Update)
    {
      RoleList.InvalidateCache();
    }
  }

Of course, the data portal could be configured to run the "server-side" code locally in the client process, in which case there's no point invalidating the cache here because the Saved event handler will take care of it. That's why the code checks the ExecutionLocation and Operation to see if it's actually an Update (which indicates an insert, update, or delete in the list) operation running on an application server. If so, it calls RoleList.InvalidateCache() to invalidate any server-side cache of role data.

Other than dealing with the RoleList cache, the data access code in Roles and Role is like the data access code you saw earlier in this chapter.

Implementing Exists Methods

I have covered all the code in the Project class except for the Exists() command implementation. Many objects can benefit from implementation of an Exists() command, as it allows the UI to quickly and easily determine whether a given object's data is in the database without having to fully instantiate the object itself. Ideally, a UI developer could write conditional code like this:

if (Project.Exists(projectId))

Implementing an Exists() command also provides an opportunity to make use of Csla.CommandBase to create a command object. This makes sense because all an Exists() command needs to do is run a stored procedure in the database and report on the result.

Exists Method

The Project class itself has a static method called Exists(), which is public so it can be called from UI code:

public static bool Exists(Guid id)
  {
    return ExistsCommand.Exists(id);
  }

This method simply delegates all its work to a factory method in the ExistsCommand class. What is interesting is that the ExistsCommand class is private to Project. The only way for the UI or other code to interact with this command object is through the static method on the Project class. This allows the application to implement powerful functionality, without expanding its public API.

ExistsCommand Class

The real work occurs in the command object itself: ExistsCommand. The ExistsCommand class inherits from Csla.CommandBase and is declared as a private nested class within Project:

[Serializable]
    private class ExistsCommand : CommandBase

Not all command objects are nested within other business classes, but in this case it makes sense. There's no need for UI developers to be aware of the ExistsCommand class or its implementation details; they only need to know about the Project.Exists() method.

In other cases, you may have public command objects that are directly used by the UI. A good example is a ShipOrder object that is responsible for shipping a sales order. It is quite realistic to expect that the UI would want to directly ship a sales order, so there's value in being able to call a ShipOrder.Ship(orderId) method.

Command objects, whether public or private, tend to be very simple in terms of their structure. ExistsCommand declares some instance fields, one property, and a constructor:

private Guid _id;
    private bool _exists;

    public bool ProjectExists
    {
      get { return _exists; }
    }

    public ExistsCommand(Guid id)
    {
      _id = id;
    }

The constructor initializes the _id field so that value is available when the command is executed on the server. The _exists field is set as a result of the command running on the server and is exposed through the ProjectExists property.

Factory Methods

The Exists() factory method is much like the other factory methods in this chapter except that it calls the data portal's Execute() method:

public static bool Exists(Guid id)
    {
      ExistsCommand result = DataPortal.Execute<ExistsCommand>(
        new ExistsCommand(id));
      return result.ProjectExists;
    }

The data portal Execute() method invokes the DataPortal_Execute() method on the command object on the application server.

The parameter passed to the Execute() method is a new instance of ExistsCommand. In other words, this factory method creates an instance of the command object, sets its _id value through the constructor, and passes the command object to the application server through the data portal.

The result of Execute() is an updated command object that contains the result value in the ProjectExists property. That value is returned as the result of the static factory method, indicating whether the project exists or not.

Data Access

The code that runs on the server is entirely contained within the DataPortal_Execute() method:

protected override void DataPortal_Execute()
    {
      using (var ctx =
        ContextManager<ProjectTracker.DalLinq.PTrackerDataContext>.
        GetManager(ProjectTracker.DalLinq.Database.PTracker))
      {
        _exists = ((from p in ctx.DataContext.Projects
                    where p.Id == _id
                    select p).Count() > 0);
      }
    }

Of course, the code in DataPortal_Execute() could be as complex as you require. It might create and interact with business objects on the server, or it might use server-side resources such as the file system, a high-powered CPU, or specialized third-party hardware or software installed on the server. In this case, the code works directly against the database to determine whether the data exists:

_exists = ((from p in ctx.DataContext.Projects
                            where p.Id == _id
                            select p).Count() > 0);

Really, the data portal does most of the hard work with command objects. When DataPortal.Execute() is called on the client, the command object is copied to the server and its DataPortal_Execute() method is invoked. Once that method completes, the data portal copies the object back to the client, thus allowing the client to get any information out of the command object. This means that all instance fields declared in the class must refer to types that are Serializable.

The Exists() command in the Resource class is implemented in the same manner.

At this point, you should understand how all the business objects in ProjectTracker.Library are implemented.

Conclusion

This chapter completes the discussion of the ProjectTracker.Library code started in Chapter 17. The focus in this chapter was on data access and object persistence, exploring the various ways to use the CSLA .NET data portal, and walking through one sample implementation.

The ProjectTracker.Library business library supports all the interface projects included in the ProjectTracker solution:

  • WPF

  • Windows Forms

  • Web Forms

  • Web Services

  • WCF services

  • Workflow

This really illustrates the power of CSLA .NET by showing a single business library that supports all these different application types at the same time. The final chapters in this book focus on the WPF, Web Forms, and WCF services interfaces.
