Extending the KIE Workbench architecture

The KIE Workbench was designed to be a pluggable, extensible development environment and runtime for our jBPM6 applications. In this section, we will discuss some of the changes most frequently requested for enterprise use, both as ready-to-use utilities for your own needs and as inspiration for the kind of changes you can apply to build a customized version of the application.

We will discuss the following integration topics:

  • Adding a SOAP web service interface to the KIE Workbench to expose jBPM6 as web services
  • Adding custom work item handlers that will be the default for all the runtimes managed in the KIE Workbench
  • Remote invocations considering preexisting services and the runtime engine API (covered in Chapter 7, Defining Your Environment with the Runtime Manager)
  • Considerations about deploying in the cloud, with specific demonstrations for deploying in OpenShift (http://www.openshift.com)

All the integrations that follow live inside a KIE Workbench distribution project called kie-wb-edition, available in the chapter's code bundle. It is a Maven project that uses the assembly plugin to modify the original KIE Workbench distribution for JBoss Application Server, adding our own extra modules and modifications to its internal configuration files. Check out the assembly-kie-wb-edition.xml file under src/main/assembly/ to see which changes are being introduced in the project. We will go through each one of them in detail as we explain the reason and structure of each configuration.
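To give a rough idea of the kind of descriptor involved, the following is a minimal sketch of a WAR overlay assembly, not the actual content of assembly-kie-wb-edition.xml; the group ID, overlay directory, and included module shown here are assumptions used purely for illustration:

    <assembly>
        <id>kie-wb-edition</id>
        <formats>
            <format>war</format>
        </formats>
        <includeBaseDirectory>false</includeBaseDirectory>
        <fileSets>
            <!-- Hypothetical folder holding overridden configuration files,
                 such as an extended web.xml -->
            <fileSet>
                <directory>src/main/webapp-overlay</directory>
                <outputDirectory>/</outputDirectory>
            </fileSet>
        </fileSets>
        <dependencySets>
            <!-- Extra modules (for example, web-service-module) copied into WEB-INF/lib -->
            <dependencySet>
                <outputDirectory>WEB-INF/lib</outputDirectory>
                <includes>
                    <include>com.wordpress.marianbuenosayres:web-service-module</include>
                </includes>
            </dependencySet>
        </dependencySets>
    </assembly>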

Web service addition

One very common request that most companies have due to internal policies is to provide a SOAP-based web service interface to interact with the BPM system. The KIE Workbench provides a RESTful interface by default, mainly because it can be accessed through web service clients as well as from other tools, such as JavaScript client APIs or mobile devices. Adding a SOAP-based web service, however, is far simpler than it would seem.

In the code bundle of this chapter, there is a project called web-service-module. Inside this project, you will find the definition of a simple web service. For the sake of demonstration, its interface only exposes two important methods of the KIE session: startProcess and signalEvent. It defines a JAX-WS-based web service that interacts directly with a runtime engine (specified by the release ID passed as a parameter to each web service invocation).
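As a rough sketch, such an endpoint can be declared with standard JAX-WS annotations. The class name below matches the one registered in web.xml in the following steps, but the exact method signatures are assumptions, so check the web-service-module sources for the real ones:

package com.wordpress.marianbuenosayres.service;

import javax.jws.WebMethod;
import javax.jws.WebService;

// Minimal JAX-WS endpoint sketch: each operation receives the release ID
// that identifies the runtime manager that should handle the call
@WebService
public class RuntimeManagerWebService {

    @WebMethod
    public void startProcess(String releaseId, String processId) {
        // look up the runtime manager for releaseId and start the process
    }

    @WebMethod
    public void signalEventAll(String releaseId, String signalRef) {
        // look up the runtime manager for releaseId and signal its KIE session
    }
}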

Its configuration is rather simple. To have it added to our KIE Workbench, we need to do the following two things:

  1. Add the compiled JAR file from the web-service-module project to the lib folder under WEB-INF of the KIE Workbench.
  2. Add the corresponding servlet mapping to the WEB-INF/web.xml file of the KIE Workbench:
    <servlet>
        <display-name>rmWebService</display-name>
        <servlet-name>rmWebService</servlet-name>
        <servlet-class>com.wordpress.marianbuenosayres.service.RuntimeManagerWebService</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>rmWebService</servlet-name>
        <url-pattern>/RuntimeManagerWebServiceImpl</url-pattern>
    </servlet-mapping>

Once that configuration is added to the application, we will have the web service exposing our internal services. As the web service looks up, from the release ID specified in the invocation, any runtime manager configured in the same environment, we need no extra configuration to run its code. The signalEventAll method of the RuntimeManagerWebServiceImpl class can be simplified into the following code snippet:

public void signalEventAll(String releaseId, String signalRef) {
    // Look up the runtime manager registered for this release ID
    // and obtain a runtime engine from it
    RuntimeEngine engine = RuntimeManagerRegistry
            .getManager(releaseId)
            .getRuntimeEngine(EmptyContext.get());
    if (engine != null) {
        // Fire the signal on the KIE session of that runtime engine
        engine.getKieSession().signalEvent(signalRef, null);
    }
}
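For completeness, the startProcess counterpart can follow the same pattern; the exact method name and return type in web-service-module may differ, so treat this as an illustration only:

public long startProcessAll(String releaseId, String processId) {
    // Same lookup as before: resolve the runtime manager by release ID
    RuntimeEngine engine = RuntimeManagerRegistry
            .getManager(releaseId)
            .getRuntimeEngine(EmptyContext.get());
    if (engine != null) {
        // Start a new process instance and return its ID to the caller
        return engine.getKieSession().startProcess(processId).getId();
    }
    return -1;
}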

As shown in the preceding code snippets, when we have a preexisting runtime manager, we get a runtime engine from it and invoke the corresponding method (signalEvent or startProcess) on its KIE session object. These configurations, as we mentioned previously, are automatically added to the WAR file when compiling the kie-wb-edition project.

Work item handler default configurations

This is a powerful trick for reconfiguring our KIE Workbench environment to have our own default work item handlers, and it works both for the KIE Workbench and for any other jBPM6- or Drools-based application that uses a KIE session. There is a configuration file, called drools.session.conf, which defines the internal configuration of some of the components in a KIE session. By default, the drools.session.conf file found in the classpath has the following content:

drools.workItemHandlers = CustomWorkItemHandlers.conf

This configuration defines a set of default values for our KIE session configuration objects. The drools.workItemHandlers property lets us define the path of an MVEL file, relative to the drools.session.conf file, which contains a map of our WorkItemHandler instances indexed by their registry keys. This MVEL map will be used by every KIE session constructed in that environment to prepopulate its work item handlers. We will use it, along with specific work item handlers added to our classpath, to do the following:

  • Override existing configurations for task behavioral definitions
  • Make our own configurations for new task types
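Before looking at the concrete configurations, here is a minimal sketch of what a synchronous custom handler could look like; the parameter and result names are assumptions chosen to match the examples later in this chapter, so refer to the custom-work-item-handlers project for the real implementation:

package com.wordpress.marianbuenosayres.handlers;

import java.util.HashMap;
import java.util.Map;

import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

public class DomainXWorkItemHandler implements WorkItemHandler {

    @Override
    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        // Read the task input parameters defined in the process model
        Object domainXParameter = workItem.getParameter("domainXParameter");

        // Your specific domain operations should go here (synchronously)

        // Complete the work item so the process can continue
        Map<String, Object> results = new HashMap<String, Object>();
        results.put("domainXResult", domainXParameter);
        manager.completeWorkItem(workItem.getId(), results);
    }

    @Override
    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        // Nothing to clean up in this simple example
        manager.abortWorkItem(workItem.getId());
    }
}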

We can make use of work item handlers to create interactions with pretty much any form of external system. Also, by adding the jbpm-workitems dependency to our project, we can use or extend a wide range of preexisting work item handlers that access web services, RSS feeds, filesystems, FTP servers, databases, Java beans, and many more components:

<dependency>
    <groupId>org.jbpm</groupId>
    <artifactId>jbpm-workitems</artifactId>
    <version>6.1.0.Beta3</version>
</dependency>

This dependency is available inside the KIE Workbench libraries, so you can use these handlers to extend your configuration without even writing a new WorkItemHandler definition.
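For example, an entry in the CustomWorkItemHandlers.conf map could register one of these preexisting handlers directly. The registry key and handler class below are assumptions based on the REST handler shipped with jbpm-workitems, so double-check the class name against the version you are using:

[
  ...
  "Rest" : new org.jbpm.process.workitem.rest.RESTWorkItemHandler()
]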

We have defined a CustomWorkItemHandlers.conf file with extra features for our custom KIE Workbench edition. In it, you can find definitions that come from the custom-work-item-handlers project. That project contains one more component, which implements the org.kie.internal.executor.api.Command interface and which we are going to discuss in detail in the next section.

Executor service commands

Configuring WorkItemHandlers is a great way of creating interactions with external systems. However, the more complex those interactions are, the longer the thread that drives the process will wait for the task to finish. This is a natural consequence of complex executions; they do take time. However, having task behavior that is detached from the actual process execution will allow us to serve more process executions with fewer resources. In order to provide this detached, asynchronous management of tasks, we will use the following three components:

  • The org.kie.internal.executor.api.Command interface, which provides a new way of writing external interactions
  • The ExecutorService class, which creates a managed thread pool for executing the said commands
  • The AsyncWorkItemHandler class, which connects the process execution with the ExecutorService in an asynchronous fashion

The Command interface is a very simple one. It provides an execute method that receives a CommandContext object and returns an ExecutionResults object, as follows:

public interface Command {
    ExecutionResults execute(CommandContext ctx) throws Exception;
}

The interface isn't meant to work only with WorkItemHandlers, but with anything that requires pluggable and pooled asynchronous behavior. In order to use it as an interaction with process tasks, the context will contain a variable under the key workItem that gives us access to the parameters of the task. The custom-work-item-handlers project in this chapter's code bundle defines a very simple command for handling a specific domain task:

public ExecutionResults execute(CommandContext context) throws Exception {
    // Retrieve the work item that triggered this command
    WorkItem wi = (WorkItem) context.getData("workItem");
    Object domainXParameter = wi.getParameter("domainXParameter");
    //Your specific domain operations should go here
    ExecutionResults results = new ExecutionResults();
    results.setData("domainXResult", domainXParameter);
    return results;
}

As you can see, the ExecutionResults object has a setData method where we can add specific output for our commands. The specific work item handler prepared to pass tasks to the executor service, called AsyncWorkItemHandler, will take these parameters to match each result with a task output.

Overall, the ExecutorService object, the specific handlers, and the command it will use can all be specified through a single line of code. This is clearly visible in the CustomWorkItemHandlers.conf file that can be found in the kie-wb-edition project:

[
...
  "DomainX" : new com.wordpress.marianbuenosayres.handlers.DomainXWorkItemHandler(),
  "DomainXAsync" : new org.jbpm.executor.impl.wih.AsyncWorkItemHandler(
      org.jbpm.executor.ExecutorServiceFactory.newExecutorService(
          javax.persistence.Persistence.createEntityManagerFactory("org.jbpm.domain")
      ),
      "com.wordpress.marianbuenosayres.handlers.DomainXExecCommand"
  )
]

The entry for the AsyncWorkItemHandler component looks a bit complicated, but it does a lot in a single line. First, it creates a JPA EntityManagerFactory, then it uses it to create the executor service, and then it passes that service to the AsyncWorkItemHandler component along with the name of the Command class that will execute its tasks.
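The same wiring can be expressed in plain Java, which might make it easier to read. The following sketch simply mirrors the MVEL entry shown previously and assumes you already hold a KieSession named ksession:

public void registerAsyncHandler(KieSession ksession) {
    // 1. Create the JPA EntityManagerFactory used by the executor service
    EntityManagerFactory emf =
            Persistence.createEntityManagerFactory("org.jbpm.domain");

    // 2. Create the executor service with its managed thread pool
    ExecutorService executorService =
            ExecutorServiceFactory.newExecutorService(emf);

    // 3. Register the asynchronous handler, telling it which Command class
    //    to run whenever a "DomainXAsync" task is reached
    ksession.getWorkItemManager().registerWorkItemHandler("DomainXAsync",
            new AsyncWorkItemHandler(executorService,
                    "com.wordpress.marianbuenosayres.handlers.DomainXExecCommand"));
}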

The main advantage of using the executor service is that it allows us to handle many more concurrent calls that interact with process executions. Whenever we invoke an interaction with a process instance (without the executor service strategy), we block a thread until the automatic tasks involved in said process are finished or reach a wait state. So, if we have 10 users invoking processes whose first step is a task that takes 2 seconds to process, we will have 10 threads blocked for 2 seconds each.

With the executor service, this is far more manageable, because passing tasks to the executor service takes barely any time; it queues tasks for deferred execution until a thread inside its pool becomes available. This translates to a far more scalable situation for our previous case, because we will have 10 threads blocked for only a few milliseconds (the time it takes to queue a task), and then a limited number of threads consuming elements from that queue. The following diagram shows how a server that uses the executor service (at the bottom of the diagram) scales better in high concurrency situations than one that solves tasks using a synchronous work item handler (at the top of the diagram):

[Diagram: synchronous work item handlers (top) versus the executor service (bottom) under high concurrency]

The preceding diagram shows how, even with fewer threads, the response time decreases, because the actual tasks are not finished within the BPM system invocation; instead, they are just queued for another group of threads to perform. This strategy also provides a retry mechanism if tasks fail, because the executor service can be configured to retry each command a number of times when it throws an exception, or even to leave failed commands in a pending state until a solution to the failure can be found.

This management issue becomes much more important when those 10 invocations become a hundred, or a thousand. Processes that use an asynchronous mechanism, such as the executor service, will scale far better under high concurrency.

KIE session sharing considerations

Finally, another consideration when running in highly concurrent environments is a further consequence of sharing a KIE session between many process instances. Due to the way persistence is configured (see Chapter 8, Implementing Persistence and Transactions), a KIE session can be recovered and continued from another thread, but only one thread at a time will be able to invoke it. This is because concurrent invocations edit the same tuple in the database (in the SessionInfo table), and the locking configuration of the database connection will either throw an exception or lock the tuple. Either way, only one thread can successfully access a KIE session at a time, which avoids data change collisions.

When considering high concurrency environments, we need to take this situation into account, as two process instances that share the same KIE session won't be able to execute from two different threads at the same time. On a single standalone server, this could be managed with a synchronized block around the KIE session method invocations; with multiple nodes, however, such synchronization is not possible, and concurrent modification conflicts will occur from time to time.
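For the single-server case just mentioned, such a guard could be sketched as follows; the lock object and the method name are purely illustrative, assuming a KIE session shared between callers:

private final Object ksessionLock = new Object();

public void signalSafely(KieSession ksession, String signalRef) {
    // Serialize access to the shared KIE session so that only one
    // thread updates its SessionInfo tuple at a time
    synchronized (ksessionLock) {
        ksession.signalEvent(signalRef, null);
    }
}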

In order to mitigate this error, the Drools persistence implementation provides an OptimisticLockRetryInterceptor component that, when a concurrent modification problem arises, holds back the latest thread and retries the execution a few milliseconds later. This usually covers the few cases where the problem can happen, as long as the session is not shared too heavily. However, to avoid running into this situation too often, we should consider partitioning our KIE sessions carefully and evaluate whether a per-process-instance runtime manager could fit our needs.
