Chapter 10. Integrating KIE Workbench with External Systems

In the previous chapters, we learned the specifics of using jBPM6 as an API, both by itself and integrated with other components such as business rules. We also learned about the tooling available in jBPM6, and how it is used. In this chapter, we will focus on the previously mentioned tools, not from an end user perspective, but as an administrator with the job of deciding whether this is the right tooling for our project or company.

Understanding architectures implies getting answers to two main questions: "how is an application composed?" and "why is it composed in that way?" We will have to understand all the internal components proposed by the KIE Workbench in order to evaluate them properly and decide whether their purpose will meet our needs. We will split the chapter into the following sections:

  • Understanding the internal components of a jBPM6 project architecture
  • How the KIE Workbench is applied to those architectural components
  • Steps and examples on how to extend the KIE Workbench to meet our needs of BPMS middleware components

Defining your architecture

In order to define the architecture for a BPM system, we first need to understand the needs such systems will have. There are many considerations to take into account when defining these requirements, and we will try to explain the main ones here.

First of all, the main purpose of a BPM system is to provide an environment where process definitions can change quickly to adapt to the complex, changing situation of the company domain and to drive the company toward its goals. This means we need a way to quickly define changes to our processes, in a manner that allows those changes to be propagated to the runtime quickly. In order to provide a quick way to change the representation of our knowledge, we will use a repository strategy, both to change content quickly and to keep track of the changes introduced to the knowledge definitions in use.
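
In jBPM6 terms, the knowledge definitions built from such a repository are packaged as kjars and consumed through the KIE APIs. The following is a minimal sketch of how a runtime could pick up a new release of its definitions; the com.mycompany:my-processes coordinates and the version numbers are hypothetical:

    import org.kie.api.KieServices;
    import org.kie.api.builder.ReleaseId;
    import org.kie.api.runtime.KieContainer;

    public class DefinitionUpdateSketch {

        public static void main(String[] args) {
            KieServices ks = KieServices.Factory.get();

            // Hypothetical coordinates of a kjar published by our knowledge repository
            ReleaseId currentRelease = ks.newReleaseId("com.mycompany", "my-processes", "1.0.0");
            KieContainer container = ks.newKieContainer(currentRelease);

            // Sessions created from this container use the 1.0.0 definitions...

            // ...until a new release is published and the container is updated in place
            ReleaseId newRelease = ks.newReleaseId("com.mycompany", "my-processes", "1.0.1");
            container.updateToVersion(newRelease);
        }
    }

The KIE Workbench builds and publishes these kjars from its own repository, so a change in a process definition becomes a new release that running containers can adopt without redeploying the applications that use them.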

Secondly, we need to understand that even if just one or a few applications use the BPM system initially, it will grow to become the central point of access for all of the company's process definitions and runtimes. In order to cope with such growth, the BPM system needs to provide a strategy for being distributed across multiple nodes. To provide such functionality, we need to handle our process instances through a persistence mechanism synchronized with transactions. This can be done by using another repository strategy: in the KIE Workbench's case, a JPA-managed database.
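
As a reference, the following sketch shows how a jBPM6 runtime can be configured with transactional JPA persistence outside the Workbench. It assumes a persistence unit named org.jbpm.persistence.jpa is available on the classpath, and my-process.bpmn2 is a hypothetical process definition:

    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    import org.kie.api.io.ResourceType;
    import org.kie.api.runtime.manager.RuntimeEnvironment;
    import org.kie.api.runtime.manager.RuntimeEnvironmentBuilder;
    import org.kie.api.runtime.manager.RuntimeManager;
    import org.kie.api.runtime.manager.RuntimeManagerFactory;
    import org.kie.internal.io.ResourceFactory;

    public class PersistentRuntimeSketch {

        public static RuntimeManager createRuntimeManager() {
            EntityManagerFactory emf =
                    Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa");

            RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
                    .newDefaultBuilder()              // default configuration backed by JPA persistence
                    .entityManagerFactory(emf)
                    .addAsset(ResourceFactory.newClassPathResource("my-process.bpmn2"),
                              ResourceType.BPMN2)
                    .get();

            // Each process instance gets its own session, which helps when distributing work
            return RuntimeManagerFactory.Factory.get()
                    .newPerProcessInstanceRuntimeManager(environment);
        }
    }

Because the runtime state goes through the same transactional persistence, any node with access to the same database can continue work started by another node.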

One final concept we need to cover in a BPM system is the possibility of running as many tasks as possible in an asynchronous fashion. This results in a more manageable environment, where threads are not simply spawned on demand, but managed according to the environment's capabilities. In the KIE Workbench (and any other jBPM6-based environment), there is a component called ExecutorService, which uses the org.kie.internal.executor.api.Command interface to provide asynchronous executions seamlessly, handling retries on failure and thread pool management. Limiting how many threads can be used at the same time by processes reduces the chances of the BPM server crashing due to excessive calls.
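
A command for the executor service is simply a class implementing that interface. The following is a minimal sketch, where NotifyExternalSystemCommand and the message parameter are hypothetical names:

    import org.kie.internal.executor.api.Command;
    import org.kie.internal.executor.api.CommandContext;
    import org.kie.internal.executor.api.ExecutionResults;

    public class NotifyExternalSystemCommand implements Command {

        @Override
        public ExecutionResults execute(CommandContext ctx) throws Exception {
            // Parameters are passed in through the CommandContext
            String message = (String) ctx.getData("message");

            // ...invoke the external system here; a thrown exception lets the executor retry...

            ExecutionResults results = new ExecutionResults();
            results.setData("delivered", Boolean.TRUE);
            return results;
        }
    }

Scheduling is then done through the ExecutorService component by passing the command's fully qualified class name and a CommandContext with its parameters; the executor takes care of queuing, retries, and thread management.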

The overall structure of our BPM system architecture will look like the following diagram, considering the integration of all the mentioned components:

[Figure: Defining your architecture]

In order to make such an environment scalable, we need to be more specific about the particular strategy we pick to solve our nonfunctional requirements. These considerations will be covered in the following section.

Scalability considerations

Scalability is something to always consider when defining the internal components of a BPMS, mainly because it will be used as, or evolve into, middleware that several other applications in our company will later use, usually exceeding the requirements initially defined for the application. Even if you only use it from your own applications, the more you need to manage your BPM cycle, the more you need your processes to exist in an isolated, reusable environment where that life cycle is more controlled and configurable.

Once you reach the point where all your processes are managed through a common BPM system, you will feel the weight of many projects depending on the BPMS. Every application that needs process execution and dynamic knowledge creation will at least consider the possibility of using the BPMS you are defining now. You need to find a way to manage an ever-growing demand for environment capacity.

Not only that, but because of the dynamic nature of defining correlations between applications, the BPMS will also take on the responsibility of becoming an application coordinator. This puts a BPM system in two complex situations: on one side, it should be prepared to handle multiple requests and, at the same time, it should be able to quickly distribute them among many other applications without losing performance. This can be quite challenging if special considerations are not made in advance. The correlations shown in the following diagram could, without good management, turn a BPM system into a bottleneck in the overall enterprise architecture:

[Figure: Scalability considerations]

The KIE Workbench was created with these considerations in mind. Persistence for the process runtime is managed in a transactional fashion because we might need to distribute the work across many servers. The process definition repository is defined as a virtual filesystem so that it can be shared between many nodes using the same APIs. Even the jBPM6 executor service is implemented on top of a database that manages queued tasks, so that any other server in a grid can handle the tasks created by another node as soon as a thread becomes available.

When defining architectures for our BPM systems, we need to take these considerations into account as a bare minimum, for they represent the natural progression of adopting the BPM discipline, even for projects that start as modestly as a few automated processes embedded in a single application.

Taking each of our nonfunctional requirements into account, we need to see whether the KIE Workbench (as it is distributed) or similar architectures are well suited to our case. If not, do not despair: at its very core, jBPM6 is just a process engine with configurable extensions to plug in any sort of external system or functionality, and it can be embedded in any other kind of Java-based system.

We can use event listeners to publish auditing information about the internal execution of our jBPM6 process engine to external systems and components. We can also configure work item handlers and executor commands to customize the way our tasks are executed. There are even pluggable systems for every internal component used in the KieServices class: if we change our classpath to include the Drools persistence JAR for Infinispan instead of the one that implements JPA, we change the way our system persists runtime information at its very core. Different configurations can be devised and created for any kind of environment specification. The following diagram shows a few possible architectures based on distinct requirements:

[Figure: Scalability considerations]

In the preceding diagram, we can see a few possible combinations. The first one, on the left-hand side of the diagram, shows the configuration as it is available in the KIE Workbench: a Virtual File System (VFS) for knowledge definitions, a database to persist runtime information, and a REST interface to expose information to clients. At the center, we see a possible alternative where we use NoSQL persistence for both our runtime and our definitions, and a JMS broker to communicate with clients.

Finally, we see the simplest configuration, where definitions are stored as plain files in a filesystem, access is provided through the jBPM6 API, and no persistence is configured. This is the least scalable scenario, but it is also a possibility that might fit the simplest cases.
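
As a reference for that last scenario, the following is a minimal sketch of an embedded, persistence-free setup that also registers one of the event listeners and one of the work item handlers mentioned earlier. The simple-process.bpmn2 file, its process ID, and the ExternalNotification task name are hypothetical:

    import org.kie.api.KieBase;
    import org.kie.api.event.process.DefaultProcessEventListener;
    import org.kie.api.event.process.ProcessCompletedEvent;
    import org.kie.api.io.ResourceType;
    import org.kie.api.runtime.KieSession;
    import org.kie.api.runtime.process.WorkItem;
    import org.kie.api.runtime.process.WorkItemHandler;
    import org.kie.api.runtime.process.WorkItemManager;
    import org.kie.internal.io.ResourceFactory;
    import org.kie.internal.utils.KieHelper;

    public class EmbeddedRuntimeSketch {

        public static void main(String[] args) {
            // Definitions read as plain files from the classpath; no runtime persistence
            KieBase kbase = new KieHelper()
                    .addResource(ResourceFactory.newClassPathResource("simple-process.bpmn2"),
                                 ResourceType.BPMN2)
                    .build();
            KieSession ksession = kbase.newKieSession();

            // Event listener publishing audit information to some external component
            ksession.addEventListener(new DefaultProcessEventListener() {
                @Override
                public void afterProcessCompleted(ProcessCompletedEvent event) {
                    // ...send event.getProcessInstance().getProcessId() to an external audit system...
                }
            });

            // Work item handler customizing how the "ExternalNotification" task is executed
            ksession.getWorkItemManager().registerWorkItemHandler("ExternalNotification",
                    new WorkItemHandler() {
                        public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
                            // ...invoke the external system, then notify completion...
                            manager.completeWorkItem(workItem.getId(), null);
                        }
                        public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
                            // nothing to clean up in this sketch
                        }
                    });

            ksession.startProcess("com.mycompany.simpleProcess");
            ksession.dispose();
        }
    }

The same listener and handler registrations apply unchanged in the more scalable configurations; only the persistence and exposure strategies around them vary.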

In the next section, we will discuss how we can start changing the KIE Workbench to adapt its internal architecture to our specific needs.
