13

On-Premises and On-Cloud Deployments

This chapter describes the various deployment environments supported by IBM Cloud Pak for Business Automation. When procuring the Cloud Pak, you have to choose both the platform you want to run it on and its location: the Cloud Pak can run on-premises, that is, in your own data centers, or on the clouds of different vendors, whose offerings differ, for instance in whether or not the underlying machines are managed for you. This chapter explores some of the pros and cons of each choice.

We’ll cover the following topics in this chapter:

  • Introducing the platform choices for Cloud Pak
  • Running the Cloud Pak in your own data centers
  • Running the Cloud Pak on the mainframe
  • Managed OpenShift environments on IBM Cloud, AWS, and Azure
  • IBM Managed Automation Service

Introducing the platform choices for Cloud Pak

Corporations have bought, built, and assembled various systems over time to support their operations and meet their objectives. While doing so, both the technology and the technical strategies in place evolved. Consequently, corporations typically end up with a patchwork of systems and underlying platforms.

When adopting technology such as IBM Cloud Pak for Business Automation, the question naturally arises of where to run this, and on which platform, and quite often, there isn’t a single one-size-fits-all answer. This chapter lists some of the potential platform choices.

Running the Cloud Pak in your own data centers

Traditionally, this has been the most common approach, even though the adoption of cloud-based and hybrid strategies is on the rise as organizations shift more and more to a cloud-based model. In this approach, the customer procures the Cloud Pak, downloads it, and installs it on their own hardware within the internal company network, where other systems are also available. Let’s explore in more detail what this entails.

Topologies

IBM Cloud Pak for Business Automation offers several installation options as part of two patterns: starter and production. The starter configuration is mostly geared toward demonstration and sandbox environments, whereas the production one, as the name implies, supports full development and eventual rollout to production usage.

If you choose the starter configuration and leave all features selected as offered by default, after proceeding through the installation and configuration steps, you will end up with an environment containing the following:

  • The authoring stack: This is the collection of tools (Business Automation Studio, Designers, and so on) that allows users to create and edit various automation artifacts, persist them into underlying storage, and govern their lifecycle.
  • A default runtime stack: This is a collection of runtimes whose purpose is to execute the automation artifacts developed using the authoring stack. They are the ones that provide the services, processes, and logic that actually integrate with other corporate systems and provide automation.

Within the starter configuration, Cloud Pak makes things easy by pre-wiring the default runtime stack into the authoring stack. This reduces the amount of initial configuration that is needed before being able to fully use the Cloud Pak and allows for seamless and transparent deployments from the authoring stack into the runtime stack. This way, when users are done authoring, they can try out their artifacts right away with a single click of a button (this is referred to as Playback in App Designer and Workflow Designer, and as Deploy and Run in Decision Designer).

This capability makes it easy for users to exercise the whole Cloud Pak right after installation, and thus gives them confidence that the artifacts they have developed are satisfying the automation needs of the company since they’re able to run those right away for testing purposes.

However, organizations will typically want to install and maintain several runtime environments. Quite often, you will find at least three:

  • One for development activities (for example, the default one), where automation artifact authors are free to deploy and validate their artifacts
  • One more for testing, where the artifacts of various authors are integrated with other external systems and fed with production-like data to validate correct behavior
  • Another for production, where live traffic is directed, handling the actual automation for the business

This is a common setup, but the organization may create more environments for dedicated purposes, for example, performance testing, pre-production, user acceptance testing, and so on.

IBM Cloud Pak for Business Automation allows running the installer again with different options to set up those additional runtime environments, giving customers the ability to create topologies that match their requirements in terms of development and testing environments.
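As a rough illustration, on OpenShift each such environment is described declaratively in the Cloud Pak’s custom resource. The sketch below is simplified, and the names and field values shown are illustrative assumptions rather than the authoritative schema; refer to the IBM documentation for the exact configuration options.

```yaml
# Simplified, illustrative sketch of a Cloud Pak custom resource
# requesting a production-pattern deployment for an additional
# runtime environment. Names and fields are assumptions, not the
# complete or authoritative schema.
apiVersion: icp4a.ibm.com/v1
kind: ICP4ACluster
metadata:
  name: cp4ba-runtime-test      # hypothetical name for a test runtime environment
  namespace: cp4ba-test         # hypothetical namespace dedicated to this environment
spec:
  shared_configuration:
    sc_deployment_type: production   # starter | production
```

Applying a variant of such a resource per environment (development, test, production) is how the topologies described above are materialized on the cluster.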

Please note that the flow of artifacts between those environments, and the tooling required to manage them, is described in more detail in Chapter 15, Automating Your Operations and Other Considerations.

Internal user groups and IT teams

The installation and setup of environments for the execution of IBM Cloud Pak for Business Automation is usually performed by technical IT staff, while the users of the Cloud Pak are often closer to the business.

Depending on the organization, you can find a pattern where each department has its own IT staff. In such a case, the IT staff will be able to set up a topology of the Cloud Pak, that is, one authoring environment and multiple runtime environments, for the whole department. All of the automation of that department would then run on this topology – and of course, they can interact with systems from other departments, viewed as external systems.

This means that, in a large company composed of several departments, you would find one such topology per department, each installed and administered by the IT staff of that department. Consequently, they run independently of each other, and can even be set up with different versions or different layouts.

However, another commonly found pattern is an organization composed of several business departments, with a single IT team serving all of them. In such a case, that single IT team has to choose between two possibilities:

  • Set up and install one topology per department
  • Set up and install a single corporate-wide topology, shared across departments

If the IT team chooses the former (one topology per department), we’re basically in the same case as with a distributed IT model. Each department operates its automation on its topology, and communication between the automation of two departments is viewed as interaction with an external system (external to the department even though it’s still internal to the company). This model offers the independence of the topologies, thus isolating and shielding the departments from each other and enabling differentiated maintenance cycles, but it may create an additional burden for the IT team and additional complexity for service reuse. Note that, as IBM Cloud Pak for Business Automation runs on top of Red Hat OpenShift, IT teams can leverage the GitOps deployment tooling in order to alleviate this burden and manage the complexity that this topology entails. While this is outside of the scope of this book, you can refer to the OpenShift GitOps documentation (https://docs.openshift.com/container-platform/4.10/cicd/gitops/understanding-openshift-gitops.html) for further details.
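In an OpenShift GitOps (Argo CD) setup, the one-topology-per-department model can be expressed as one Application resource per department, each pointing at that department’s configuration in a Git repository. The sketch below follows the standard Argo CD Application schema; the repository URL, path, and namespace names are hypothetical placeholders.

```yaml
# Hypothetical Argo CD Application declaring the Cloud Pak
# configuration for one department. The repoURL, path, and
# namespaces are illustrative placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cp4ba-finance-dept
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://git.example.com/ops/cp4ba-topologies.git
    targetRevision: main
    path: departments/finance       # this department's manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: cp4ba-finance        # this department's topology
  syncPolicy:
    automated:
      prune: true                   # remove resources deleted from Git
      selfHeal: true                # revert manual drift on the cluster
```

Because each department’s topology is then just a directory in Git, adding or upgrading a department becomes a reviewed commit rather than a manual installation, which is what alleviates the per-topology burden mentioned above.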

Another approach is to set up and install a single topology to serve the needs of several departments. While setup and maintenance become simpler for the IT team, this creates new questions that need to be resolved. For example, how are users, teams, and permissions managed? Can a user of one department see and use the automation of another? It also introduces the risk that one department consumes a large share of the automation capacity to the detriment of another, whose available resources shrink, so the IT team needs to think about resource arbitration and how to preserve a guaranteed bandwidth for each consumer.

So this approach introduces additional challenges; however, it may foster the reuse of automation across departments, thus giving more consistency to internal processes and decisions and reducing the number of duplicate developments.

Consequently, this choice needs to be made with awareness of the advantages and drawbacks each option yields, but also taking into account the nature of the business, the maturity and tooling of the IT team, and the company culture. It is worth noting that this choice is not all-or-nothing. It can evolve over time, for example, going from a single department using the Cloud Pak to a corporate, shared topology. Or it can be hybrid, for example, a shared topology for a few small departments, and a dedicated one for a specific department that requires it.

While the distributed platform is the most obvious choice for on-premises execution of the Cloud Pak, in some businesses and for some use cases, the mainframe environment retains a strong footprint, and can also be leveraged by the Cloud Pak, as is discussed in the next section.

Running the Cloud Pak on the mainframe

Mainframe computers have been around since the 1950s, and today, they still power 90% of all credit card transactions and a large share of the world’s IT production workloads; 70% of Fortune 500 companies use mainframe systems. Since the 2000s, they have run z/OS, which can host a Java Virtual Machine (JVM) alongside traditional Cobol programs. This opens the door to integrating existing Cobol programs with features developed in IBM Cloud Pak for Business Automation.

While the Cloud Pak as a whole doesn’t support z/OS, one of its constituents, Operational Decision Manager (ODM), does. This makes it possible to add decision-making capability to legacy Cobol applications.

From the standpoint of the business user in charge of modeling decisions, the fact that the target execution platform happens to be on the mainframe rather than on a distributed system is completely irrelevant, and they can continue to model decisions the same way as usual. In fact, a decision can be modeled once using the ODM designer and then executed on the distributed platform or the mainframe platform, or both, without any specific transformation.

From the standpoint of the Cobol application programmer, ODM exposes a native Cobol interface, so the externalized decision can be invoked the same way as any other function call within a Cobol program; the programmer doesn’t have to learn a new technology to benefit from the decision-making capability.

While not all automation features are available on the mainframe platform, the capability to run decisions on z/OS still greatly helps in a legacy modernization approach. It also fosters consistency of corporate policies, since the same decisions can be executed on both distributed and mainframe systems, leading to a consistent experience from the user standpoint, irrespective of the actual system they interact with.

Both this platform and the previously described one run on-premises, that is, on hardware procured, owned, and maintained by the customer. But the Cloud Pak can also run on hardware provided by external vendors, that is, in cloud environments.

Managed OpenShift environments on IBM Cloud, AWS, and Azure

IBM Cloud Pak for Business Automation has been built to run on Red Hat OpenShift. Let’s clarify this environment.

A specific server, or set of servers, exposes its resources through an Operating System (OS), in this case, Red Hat Enterprise Linux. It can run many native programs, but the one we’re considering here is Kubernetes, which is a system that can orchestrate Docker containers and take care of their lifecycle management and scaling.

Yet, using Kubernetes directly remains relatively complex, and OpenShift offers an additional layer of management features that lets the user focus on application management and exposes developer and administrator views.
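To make the orchestration layer concrete, here is a minimal sketch of the kind of declarative resource Kubernetes works with: a Deployment that states a desired number of container replicas and leaves their lifecycle management and scaling to the platform. The image and names below are hypothetical placeholders, not actual Cloud Pak resources.

```yaml
# Minimal Kubernetes Deployment sketch. The image, names, and
# port are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-runtime
spec:
  replicas: 3                  # Kubernetes keeps three containers running
  selector:
    matchLabels:
      app: sample-runtime
  template:
    metadata:
      labels:
        app: sample-runtime
    spec:
      containers:
      - name: runtime
        image: registry.example.com/sample-runtime:1.0
        ports:
        - containerPort: 8080
```

If a container crashes or a node fails, Kubernetes restarts or reschedules it to restore the declared replica count; OpenShift layers management tooling, developer and administrator views, and operational features on top of this model.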

OpenShift is available in a variety of cloud environments, including the three main ones: IBM Cloud (Red Hat OpenShift Kubernetes Service (ROKS)), Amazon Web Services (Red Hat OpenShift Service on AWS (ROSA)), and Microsoft Azure (Azure Red Hat OpenShift (ARO)). At the time of writing, IBM Cloud Pak for Business Automation runs in the IBM Cloud environment and is also being ported to the AWS cloud, with Azure as the next development target.

When a company chooses to run automation in the cloud, they gain benefits such as the ability to commission more or less hardware and other resources to accommodate workloads that fluctuate over time, and having a vendor take care of the management and maintenance of all the supporting hardware.

However, they also take on additional complexity that needs to be weighed before making such a move. Typically, IBM Cloud Pak for Business Automation integrates with many existing corporate systems, from procurement to client transactions to systems of record to manufacturing, and so on. The question then naturally arises of where these other systems are running and where the company stands in its move-to-cloud journey. Rarely have all the other systems moved to the cloud; in fact, some cannot. Consequently, the approach will quite often be hybrid.

Hybrid means that part of the overall solution runs on the cloud, and part of it still runs on-premises, with remote calls being performed from one to the other.

Two patterns are often used:

  • In the first pattern, the authoring is done on the cloud. The whole authoring stack is deployed onto a cloud environment, and Cloud Pak users perform both the authoring of automation artifacts and the initial validation (execution in the default development runtime environment) there. However, once artifacts are validated, they are downloaded by the IT operators to on-premises systems, and execution is performed there, closer to the existing on-premises systems they interact with.

This pattern is not very disruptive to existing systems, since their interaction with automation is local. It doesn’t introduce network latency between existing systems and the automation runtimes, so performance is preserved as it was. However, it doesn’t leverage the full benefit of the cloud, since the company still needs to provision on-premises all of the hardware resources that may be needed at peak execution time. So, while authoring is moved to the cloud, and with it a large community of users, the full expected savings would not be realized.

  • In the second hybrid pattern, the runtime for automation artifacts also runs on the cloud, and in fact, several runtime environments can still be created there, as detailed at the beginning of this chapter. This means that the automation runtime will continue to integrate with existing systems. Some of them will also have moved to the cloud, making the call a cloud-to-cloud one (which can quite often be optimized as a local call within the cloud vendor’s network space); others will not have, so those calls remain remote. When such a pattern is followed, the organization needs to conduct a complete and precise study of the performance impact, both static and dynamic: static in terms of the additional network operations needed to accomplish a full business scenario, and dynamic in the sense that a cloud-based workload will scale up and down more fluidly than an on-premises one. So, care must be taken to ensure that the limited upscaling capability of the on-premises side does not become a bottleneck when the cloud environments scale up.

In such a cloud-based approach, the customer is relieved of the day-to-day procurement of the hardware needed to run the solutions, but they are still in charge of managing the Cloud Pak and the matching automation artifacts, for example, applying fixes, monitoring resource allocation, deploying security patches, and so on. Should they want to outsource this as well, they need to look at managed services.

IBM Managed Automation Service

IBM also offers the option to operate IBM Cloud Pak for Business Automation as a managed service. What this means is that IBM takes care of the following:

  • Procuring and provisioning the hardware resources required
  • Operating and maintaining those machines, including OS-level patches and security fixes
  • Installing and setting up the Cloud Pak environment, both authoring and runtimes, including configuration such as the link to the user repository (for example, corporate LDAP) and security certificates
  • Maintaining Cloud Pak by applying fixes and upgrading on an agreed-upon schedule
  • Operating Cloud Pak: monitoring resource usage and log files, handling regular backups, and, more generally, ensuring proper operation

This also includes upgrading the Cloud Pak to upcoming releases on a schedule agreed upon with the customer, during off-peak hours (typically weekends or outside business hours), with transparent migration of the data.

By default, Cloud Pak is installed with one Authoring environment, and three runtime environments: one for development (embedded with authoring for the Playback feature), one for testing, and one for production. Other topologies can be studied on request.

Such a pattern is very comfortable for a customer who only needs to use the Cloud Pak and doesn’t have to carry the burden of operations. It also entails executing the automation artifacts on IBM Cloud, so they run remotely from the rest of the on-premises systems, with the impacts discussed in the previous section.

Note that it is also possible to gradually build up a hybrid approach, with some runtimes being part of the Managed Service offering and others set up and operated by the customer on-premises, depending on workload execution constraints. These can later be transferred over to the Managed Service offering to reduce the burden of on-premises maintenance and operations.

Summary

As seen in the previous sections, IBM Cloud Pak for Business Automation offers a variety of possible deployment environments. It can be set up and operated in on-premises environments (on both distributed systems and mainframes), in the cloud environments of the main vendors (with the list of supported vendors increasing over time), or even as a Managed Service for worry-free usage.

We also covered the criteria that should be weighed when selecting a deployment option. The wide range of possibilities makes it easier for a customer to find the solution (or solutions) that matches their business needs, including hybrid solutions and gradual moves from one option to another.

The next chapter goes into more detail, especially for the on-premises platform, about how to actually configure the Cloud Pak, what the possible topologies are, and how they can be built with high resiliency and the capability to recover from a disaster in the data center they run in.
