Chapter 3. Distributed Design

This chapter discusses what a distributed Orchestrator design looks like and how it can be built. We will cover the following recipes:

  • Building an Orchestrator cluster
  • Load-balancing Orchestrator
  • Upgrading a cluster
  • Managing remote Orchestrator
  • Synchronizing Orchestrator elements between Orchestrator servers

Introduction

Why a full chapter dedicated to multiple Orchestrators? Well, since Orchestrator has become the cornerstone of vRealize Automation, more and more customers are distributing and protecting their Orchestrator infrastructure.

We differentiate between goals and scenarios such as high availability (HA), workload spreading, scaling out, and bandwidth optimization and localization. The following sections break this down into the three most common forms: cluster, distributed, and scale-out designs.

Tip

Please note that the vRealize Automation (vRA) internal Orchestrator should not be clustered as described here. If you scale out vRA, you should consider using an external Orchestrator cluster, not the built-in vRA Orchestrator. See Chapter 13, Working with vRealize Automation, for more information.

Cluster design

As Orchestrator becomes more and more production-critical for companies, it is a solid idea to cluster Orchestrator to make sure it is always up and working. An Orchestrator cluster is most powerful when combined with a load-balancer. However, if you only use Orchestrator to run workflows without any other input (headless), you can run an Orchestrator cluster without a load-balancer and use the steps outlined in this chapter to make sure that workflows are started, or logs are checked, from one central Orchestrator that controls all the other installations.

A typical situation where a clustered Orchestrator (behind a load-balancer) is a very good idea is when Orchestrator acts as a domain manager. By that we mean that Orchestrator is responsible for automating the VMware domain (all things vSphere) and another automation tool (such as Ansible, Chef, or something else) consumes the Orchestrator workflows. The domain manager concept is another approach to the automation problem: instead of using one tool (such as Orchestrator or vRA) to automate everything, you use tools that are specialized for their domain. Examples of domains are VMware, Microsoft, Red Hat, EMC or NetApp storage, and Cisco networking. In each of these domains, there is a tool that specializes in automating that domain: for Red Hat there is Satellite, for Microsoft there is SMS or SCOM, and so on. Each of these tools has a SOAP or REST interface that can be accessed by a general management tool; Orchestrator would be the domain manager for VMware.
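To make the domain manager idea a bit more concrete, the following sketch shows how an external automation tool could start an Orchestrator workflow over the Orchestrator REST API. The host name, credentials, workflow ID, and input parameter are placeholders, and the payload format shown here is the one used by recent Orchestrator versions; check the API documentation shipped with your version before relying on it.

    # A minimal sketch of how an external domain manager (or any REST client)
    # could start an Orchestrator workflow. The host name, credentials, workflow
    # ID, and parameter names below are placeholders -- substitute your own.
    import requests

    VRO = "https://vro.example.com:8281"                   # assumed Orchestrator appliance
    WORKFLOW_ID = "12345678-aaaa-bbbb-cccc-1234567890ab"   # hypothetical workflow ID
    AUTH = ("svc-automation", "********")                  # basic auth; vRA/SSO setups differ

    # Input parameters use the Orchestrator REST parameter format: name, type, typed value.
    body = {
        "parameters": [
            {"name": "vmName", "type": "string",
             "value": {"string": {"value": "web-01"}}}
        ]
    }

    resp = requests.post(
        f"{VRO}/vco/api/workflows/{WORKFLOW_ID}/executions",
        json=body, auth=AUTH, verify=False)                # verify=False only for lab/self-signed certs
    resp.raise_for_status()

    # A successful start returns 202 Accepted; the Location header points to the
    # new workflow execution, which can be polled for state and output parameters.
    print(resp.status_code, resp.headers.get("Location"))

If the call succeeds, Orchestrator answers with 202 Accepted and a Location header pointing to the new workflow execution, which the calling tool can then poll for its state and output parameters.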

The following figure shows how an Orchestrator cluster can look and how it can be accessed. Please note that the use of the vSphere Client isn't supported in cluster mode.

[Figure: Cluster design]

Distributed design

When we talk about distributed, we mean that you have multiple Orchestrator installations that are not in the same place or not looking after the same things. For example, your main corporate data center sits in Europe and you have others in North America, Asia, and Oceania. If one Orchestrator sitting in Europe had to manage all the other data centers, you would run into massive problems with bandwidth, time zones, workflow distribution, and versioning.

But that's not the only example. One can generally differentiate Orchestrator deployments into Geographically Distributed and Logically Distributed ones:

[Figure: Distributed design]

Geographically Distributed

The use of geographically dispersed Orchestrators is common in large companies. Here, a central Orchestrator instance executes workflows on remote environments. Executing a workflow remotely (using the multi-node plugin) uses much less bandwidth than running the workflow across the WAN directly. This is especially true when a lot of input variables have to be collected to run the workflow.

Logically Distributed

Logically Distributed means that your Orchestrators are located in different environments, such as production, development, and so on. In this case, you may have an Orchestrator infrastructure that creates and manages these different environments, or that is used for deployments and automation. Central management is then also quite important.

Please note that the remote Orchestrator doesn't necessarily have to be paired with a vCenter. A remote Orchestrator could be used to handle your servers, storage, or any other add-on infrastructure services or hardware.

Scaling out

The last design deals with scaling out and discusses how to distribute workloads and how to deal with Orchestrator's limitations. There are cases where the maximum number of concurrently running workflows (300) is too small. One way to deal with this is to increase the limit (see the Control Center titbits recipe in Chapter 2, Optimizing Orchestrator Configuration), but the better way is to scale out your deployment.

There are two ways to do this. You either use a distributed design or use a cluster design, as seen in the following figure:

[Figure: Scaling out]

The central Orchestrator in both approaches is responsible for syncing workflows and settings between the actual working Orchestrators.

Central management

A central Orchestrator instance can be used to keep control of all the distributed installations.

A central Orchestrator server would be connected to all sub-Orchestrators using the multi-node plugin (also known as the VCO plugin). This allows you to develop your workflows centrally and then distribute them to the sub-Orchestrators.

Using proxy workflows, you can run workflows on geographically remote sites without running into bandwidth or timing problems. You can also schedule the execution of workflows in remote locations to suit time zone differences.
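Inside Orchestrator, the multi-node plugin provides these proxy workflows out of the box. Purely to illustrate the pattern, the following hedged sketch expresses the same fire-and-poll idea against the Orchestrator REST API: the central site sends only the workflow inputs and then periodically polls the execution state, so only small JSON payloads cross the WAN. Host names, IDs, and credentials are placeholders.

    # A sketch of the proxy-workflow idea expressed against the Orchestrator REST API.
    # Hosts, IDs, and credentials are placeholders; the exact state names and response
    # format may differ slightly between Orchestrator versions.
    import time
    import requests

    REMOTE_VRO = "https://vro-apac.example.com:8281"          # assumed remote-site Orchestrator
    WORKFLOW_ID = "12345678-aaaa-bbbb-cccc-1234567890ab"      # hypothetical remote workflow
    AUTH = ("svc-automation", "********")

    def run_remote(inputs):
        """Start the workflow on the remote-site Orchestrator and wait for it to finish."""
        resp = requests.post(
            f"{REMOTE_VRO}/vco/api/workflows/{WORKFLOW_ID}/executions",
            json={"parameters": inputs}, auth=AUTH, verify=False)
        resp.raise_for_status()
        execution_url = resp.headers["Location"].rstrip("/")  # href of the new execution

        # Poll until the remote workflow leaves the running/waiting states.
        while True:
            state = requests.get(f"{execution_url}/state",
                                 auth=AUTH, verify=False).json()["value"]
            if state not in ("running", "waiting"):
                return state
            time.sleep(30)

    result = run_remote([{"name": "vmName", "type": "string",
                          "value": {"string": {"value": "web-01"}}}])
    print("Remote execution finished with state:", result)

Because only the start request and the periodic state checks travel between sites, the heavy lifting (vCenter calls, data collection, and so on) stays local to the remote Orchestrator.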

Using the Orchestrator Control Center REST API, you can monitor and control the remote, distributed, or clustered Orchestrator instances and even automate their behavior.
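As a hedged example of what such automation could look like, the following sketch polls the Control Center of several Orchestrator nodes for their service status from a central management host. The node names and credentials are placeholders, and the endpoint path is an assumption; the exact paths are listed in the API documentation exposed by your Control Center version.

    # A sketch of polling each Orchestrator node's Control Center REST API to
    # check service status from a central management host. The node list,
    # credentials, and the endpoint path are assumptions -- consult the API
    # documentation of your Control Center version for the real paths.
    import requests

    NODES = ["vro-emea.example.com", "vro-apac.example.com"]  # hypothetical node names
    AUTH = ("root", "********")                               # Control Center login on the appliance

    for node in NODES:
        url = f"https://{node}:8283/vco-controlcenter/api/server/status"  # assumed endpoint
        try:
            resp = requests.get(url, auth=AUTH, verify=False, timeout=10)
            resp.raise_for_status()
            print(node, "->", resp.json())
        except requests.RequestException as exc:
            print(node, "-> unreachable:", exc)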

In theory, it would be quite a lot of work, but it is possible to create a workflow that deploys multiple Orchestrator instances using the vSphere plugin, configures and clusters them using the Control Center API, and then creates a load-balancer using the NSX plugin.
