In the first chapter of this book, we discussed the main challenges most organizations face in scaling their Kubernetes infrastructure in a multi- or hybrid-cloud world. New challenges arise when you deploy multiple clusters on different providers, such as the following:
In this chapter, we will introduce a great tool to help you address these challenges and alleviate the amount of work you and/or your team may need to deal with when managing several clusters: Red Hat Advanced Cluster Management (ACM).
Therefore, you will find the following topics covered in this chapter:
Note
The source code used in this chapter is available at https://github.com/PacktPublishing/OpenShift-Multi-Cluster-Management-Handbook/tree/main/chapter11.
Red Hat ACM is a complete solution for Kubernetes multi-cluster management from a single pane of glass. It includes other great features that make complex and time-consuming tasks much easier. Red Hat ACM provides a few main features, listed here:
One of the great aspects of ACM is its multi-cluster architecture – it is designed to manage several clusters from a single point of control, as you can see in the following figure.
Figure 11.1 – ACM hub and managed clusters
To do so, it uses the concept of hub and managed clusters, as follows:
We will dig into all these features in the following sections of this chapter.
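To give you an early taste of how this works under the hood, each cluster under management is represented in the hub by a ManagedCluster resource. The following is a minimal sketch of one (the cluster name and labels are hypothetical, used only for illustration):

apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: dev-cluster-aws    # hypothetical cluster name
  labels:
    env: dev               # custom labels, used later for placement decisions
spec:
  hubAcceptsClient: true   # allows the hub to accept this cluster's registration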
In this section, we will guide you through the installation and configuration of ACM.
Important Note
It is important to consider that ACM uses the compute, memory, and storage resources of the hub cluster. It is therefore recommended to have a dedicated cluster act as the ACM hub, avoiding contention with other workloads. This is recommended but not required; you can run ACM in any OpenShift cluster with enough capacity.
The installation process is simple and similar to what we followed in the previous chapters with OpenShift Pipelines and GitOps, as you will see in this section.
Follow this process to install Red Hat Advanced Cluster Management:
Figure 11.2 – OperatorHub
Figure 11.3 – Advanced Cluster Management for Kubernetes on OperatorHub
Figure 11.4 – Installing Advanced Cluster Management for Kubernetes
Figure 11.5 – Installing the operator
Figure 11.6 – Operator installed
Now that we have the operator installed, we can go ahead and deploy a new MultiClusterHub instance:
Figure 11.7 – Create MultiClusterHub
Figure 11.8 – Installing MultiClusterHub
Figure 11.9 – MultiClusterHub running
Figure 11.10 – Advanced Cluster Management option
Figure 11.11 – Red Hat ACM login
You now have Red Hat ACM installed and ready to use.
Figure 11.12 – Red Hat ACM initial page
Continue to the next section to learn more about the ACM cluster management feature.
As we mentioned previously, one of the features ACM provides is cluster management. The following is a list of some of the operations you can perform with ACM:
Check the Further reading section of this chapter to see a link to a complete list of supported operations.
We will not cover how to do all the operations you can perform with ACM in this book, but we will guide you through the process of provisioning a new OpenShift cluster on AWS using ACM, to give you an idea of how easy it is to use the tool.
Currently, in version 2.5, ACM can deploy clusters on AWS, Azure, Google Cloud, VMware vSphere, bare metal, Red Hat OpenStack, and Red Hat Virtualization. To do so, you need to first input the provider credentials to be used by ACM during the provisioning process. The following steps show how to add AWS credentials that will be used with our sample:
Figure 11.13 – Adding provider credentials
Figure 11.14 – Selecting the credential type
Figure 11.15 – Basic credentials information
Recommended Practice
The provider credentials are stored as secrets in the namespace provided. As such, it is highly recommended that you create a specific namespace for them and keep access to it restricted.
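For reference, a credential created through the console ends up as an opaque Secret in the chosen namespace. The following is a simplified sketch of what a stored AWS credential might look like (the names, namespace, and label set are illustrative assumptions, not copied from a live cluster):

apiVersion: v1
kind: Secret
metadata:
  name: aws-credential                                 # hypothetical credential name
  namespace: acm-credentials                           # dedicated, access-restricted namespace
  labels:
    cluster.open-cluster-management.io/type: aws       # provider type
    cluster.open-cluster-management.io/credentials: ""
type: Opaque
stringData:
  aws_access_key_id: <access-key>                      # placeholders – never commit real keys
  aws_secret_access_key: <secret-key>

Because these values are only base64-encoded at rest, restricting RBAC access to the namespace (as recommended previously) is essential.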
Figure 11.16 – AWS access and secret keys
Figure 11.17 – Proxy configuration
Figure 11.18 – Getting a pull secret
Figure 11.19 – Inputting the pull secret and SSH keys
Note
You can use the following command in a Linux workstation to generate a new SSH key if needed:
ssh-keygen -t ed25519 -N '' -f new-ssh-key
Figure 11.20 – Credential added
Now, let’s go ahead and deploy a new cluster using this credential. Follow this process to deploy the cluster using ACM:
Figure 11.21 – Creating a cluster
Figure 11.22 – Selecting the installation type
Figure 11.23 – Filling out the cluster details
Figure 11.24 – Node pools
Figure 11.25 – Network configurations
Figure 11.26 – Proxy configuration
Figure 11.27 – Ansible automation hooks
Figure 11.28 – Reviewing a cluster
Figure 11.29 – Cluster overview
Figure 11.30 – Adding new labels
Figure 11.31 – Adding a label in a cluster
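If you prefer the command line, labels can also be managed with oc against the hub cluster. A quick sketch, assuming a hypothetical managed cluster named dev-cluster-aws:

# Add the env=dev label to a managed cluster (run against the hub cluster)
oc label managedcluster dev-cluster-aws env=dev

# Verify the labels applied to the cluster
oc get managedcluster dev-cluster-aws --show-labels

We will use this env=dev label later in this chapter to drive placement decisions.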
As you can see, the OpenShift cluster deployment process is straightforward! In the next section, you will see how ACM can also help you deploy an application into multiple clusters, using either its embedded deployment mechanism or its integration with OpenShift GitOps (Argo CD).
One of the greatest benefits of ACM is that it provides a single, simple way to view applications deployed across different clusters. You can also deploy an application into multiple clusters using two different approaches:
We will walk through the process of each approach in this section.
This model is embedded in ACM and doesn’t depend on anything other than ACM itself. In the Application Subscription model, you will define an Application object that subscribes (Subscription) to one or more Kubernetes resources (Channel) that contain the manifests that describe how the application is deployed. The application will be deployed in the clusters defined in the placement rules.
The following is a diagram that explains how this model works:
Figure 11.32 – ACM Application Subscription model
Let’s get back to the sample application we used in the previous chapter and create the ACM objects to check what the application deployment model looks like.
Channels define the source repositories used to deploy an application. A channel can be a Git repository, a Helm release, or an object storage repository. We are going to use the following YAML manifest to point to our Git repository:
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: cloud-api-github
  namespace: clouds-api-dev
spec:
  pathname: https://github.com/PacktPublishing/OpenShift-Multi-Cluster-Management-Handbook.git #[1]
  type: Git
#[1] highlights the URL to the Git repository that contains the application deployment manifests.
After the Channel object, we need to create the PlacementRule object, which will be used with the application deployment.
Placement rules define the target clusters where the application will be deployed. They are also used with policies. Remember that we added the env=dev label to the cluster we provisioned earlier. We are going to use it now to define our PlacementRule object:
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: cloud-api-placement
  namespace: clouds-api-dev
  labels:
    app: cloud-api
spec:
  clusterSelector:
    matchLabels:
      env: dev #[1]
#[1] highlights the cluster selector based on labels. It will instruct ACM to deploy the application in all clusters that have the env=dev label.
We are now ready to create the Subscription object.
Subscriptions are used to subscribe clusters to a source repository and also define where the application will be deployed. They act as the glue between the deployment manifests (Channel) and the target clusters (PlacementRule). The following shows what our Subscription object looks like:
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: cloud-api-subscription
  namespace: clouds-api-dev
  annotations:
    apps.open-cluster-management.io/git-path: sample-go-app/clouds-api/k8s/ #[1]
  labels:
    app: cloud-api #[2]
spec:
  channel: clouds-api-dev/cloud-api-github #[3]
  placement:
    placementRef: #[4]
      name: cloud-api-placement
      kind: PlacementRule
In the preceding code, we have highlighted some parts with numbers. Let’s take a look:
Finally, we can now create the ACM Application object.
Applications are objects used to describe a group of ACM resources that are needed to deploy an application. The following is the Application object of our sample:
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: cloud-api
  namespace: clouds-api-dev
spec:
  componentKinds:
    - group: apps.open-cluster-management.io
      kind: Subscription
  descriptor: {}
  selector:
    matchExpressions: #[1]
      - key: app
        operator: In
        values:
          - cloud-api
#[1] highlights subscriptions used by this application. In this case, subscriptions that have the app=cloud-api label will be used.
Now that we understand the objects involved in application deployment, let’s create them on ACM.
Deploying the objects is as simple as running an oc apply command from the hub cluster. Run the following commands from the hub cluster to deploy the application:
$ git clone https://github.com/PacktPublishing/OpenShift-Multi-Cluster-Management-Handbook.git
$ cd OpenShift-Multi-Cluster-Management-Handbook/
$ oc apply -k chapter11/acm-model
namespace/clouds-api-dev created
application.app.k8s.io/cloud-api created
channel.apps.open-cluster-management.io/cloud-api-github created
placementrule.apps.open-cluster-management.io/cloud-api-placement created
subscription.apps.open-cluster-management.io/cloud-api-subscription created
You can check the application status by running the following command:
$ oc get application -n clouds-api-dev
NAME        TYPE   VERSION   OWNER   READY   AGE
cloud-api                                    5m48s
You can alternatively deploy the application using the ACM web console. To do so, perform the following process:
Figure 11.33 – Deploying an application using ACM
Figure 11.34 – Filling out the application data
Figure 11.35 – Placement configuration details
Figure 11.36 – Application topology
Now that you know how to deploy an application using the embedded ACM subscription, let’s see how we would do the same using OpenShift GitOps (Argo CD).
As already mentioned, you can alternatively deploy applications using ACM integrated with OpenShift GitOps (Argo CD), through an object called an ApplicationSet. In the Configuring Argo CD for multi-cluster section of Chapter 10, OpenShift GitOps – Argo CD, we saw how to use the argocd command-line interface to add managed clusters to Argo CD. You don't need to do that when you use ACM: since ACM already manages the external clusters, it can add them to Argo CD for you. Instead, with ACM you need to define the following objects in the hub cluster to instruct ACM to configure Argo CD and register the managed clusters for you:
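To give you an idea of how that wiring looks, the central object is GitOpsCluster, which binds a Placement to an Argo CD (OpenShift GitOps) instance. The following is a minimal sketch, assuming the default openshift-gitops namespace; the object names match the resources created later in this section, but treat this as an illustration rather than the exact file from the repository:

apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: argo-acm-clusters
  namespace: openshift-gitops
spec:
  argoServer:
    cluster: local-cluster
    argoNamespace: openshift-gitops   # where OpenShift GitOps (Argo CD) runs
  placementRef:
    kind: Placement
    apiVersion: cluster.open-cluster-management.io/v1beta1
    name: all-clusters                # selects which managed clusters to register in Argo CD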
We have sample YAML for all the previously listed objects in the chapter11/argocd folder of our GitHub repository. Go ahead and use the following commands to apply those objects in your hub cluster:
$ git clone https://github.com/PacktPublishing/OpenShift-Multi-Cluster-Management-Handbook.git
$ cd OpenShift-Multi-Cluster-Management-Handbook/
$ oc apply -k chapter11/argocd
gitopscluster.apps.open-cluster-management.io/argo-acm-clusters created
managedclusterset.cluster.open-cluster-management.io/all-clusters created
managedclustersetbinding.cluster.open-cluster-management.io/all-clusters created
placement.cluster.open-cluster-management.io/all-clusters created
Now, access your ACM and go to Clusters | Cluster sets (tab) | all-clusters | Managed clusters (tab), and then click on the Manage resource assignments button. On this page, select all your clusters and click on the Review button and then Save.
Figure 11.37 – Adding clusters to a cluster set
Finally, we can go ahead and create an ApplicationSet that uses a Placement object to deploy the application in all clusters that have the env=dev label:
$ oc apply -f chapter11/argocd/applicationset.yaml
applicationset.argoproj.io/cloud-api created
placement.cluster.open-cluster-management.io/cloud-api-placement created
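For reference, such an ApplicationSet typically uses Argo CD's clusterDecisionResource generator, which reads the cluster decisions produced by the Placement object. The following is a simplified sketch of what the applied manifest may look like (paths and names follow our sample application, but treat the details as illustrative rather than a verbatim copy of the repository file):

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cloud-api
  namespace: openshift-gitops
spec:
  generators:
    - clusterDecisionResource:
        configMapRef: acm-placement
        labelSelector:
          matchLabels:
            cluster.open-cluster-management.io/placement: cloud-api-placement
        requeueAfterSeconds: 180
  template:
    metadata:
      name: cloud-api-{{name}}        # one Argo CD Application per selected cluster
    spec:
      project: default
      source:
        repoURL: https://github.com/PacktPublishing/OpenShift-Multi-Cluster-Management-Handbook.git
        targetRevision: main
        path: sample-go-app/clouds-api/k8s
      destination:
        server: '{{server}}'          # injected from the placement decision
        namespace: clouds-api-dev
      syncPolicy:
        automated: {}
        syncOptions:
          - CreateNamespace=true

The generator re-evaluates the placement decisions periodically, so clusters that gain or lose the env=dev label will have the application deployed or removed automatically.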
After a couple of minutes, you should see the application deployed from the application Overview/Topology view.
Figure 11.38 – ApplicationSet topology
The Topology view allows you to see your application deployed into multiple clusters from a single pane. This feature is really helpful for applications that are deployed over several clusters, as you can easily see how the application is behaving in all the clusters from a single and simple view.
This concludes our overview of the application life cycle management feature of Red Hat ACM. In this section, you have seen how ACM can help you deploy applications into multiple managed clusters by using either the Application Subscription model or OpenShift GitOps (Argo CD). Next, you are going to see how to use policies on ACM to keep your clusters compliant according to your organization’s business and security needs.
Throughout this book, we have discussed the challenges that large enterprises face in keeping different environments consistent. The ACM governance feature can play a crucial role in your strategy to maintain secure and consistent environments, no matter where they are running. It allows you to define policies for a set of clusters and to inform or enforce when clusters become non-compliant.
To define policies in ACM, you need to create three objects:
You can see an example of a policy that checks etcd encryption in all managed clusters in our GitHub repository. The following diagram shows the interaction between the ACM policy objects:
Figure 11.39 – ACM policy model
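To make this concrete, the following is a simplified sketch of the Policy and PlacementBinding for the etcd encryption check (the names mirror the sample in our repository, but the bodies are condensed for illustration and may differ from the full version on GitHub; the PlacementRule is analogous to the one shown earlier in this chapter):

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-etcdencryption
  namespace: acm-policies-sample
spec:
  remediationAction: inform           # report violations only; "enforce" would remediate
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: enable-etcd-encryption
        spec:
          severity: high
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: config.openshift.io/v1
                kind: APIServer
                metadata:
                  name: cluster
                spec:
                  encryption:
                    type: aescbc      # the state the cluster must match to be compliant
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-etcdencryption
  namespace: acm-policies-sample
placementRef:
  name: placement-policy-etcdencryption
  kind: PlacementRule
  apiGroup: apps.open-cluster-management.io
subjects:
  - name: policy-etcdencryption
    kind: Policy
    apiGroup: policy.open-cluster-management.io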
Run the following command to create the policy:
$ git clone https://github.com/PacktPublishing/OpenShift-Multi-Cluster-Management-Handbook.git
$ cd OpenShift-Multi-Cluster-Management-Handbook/
$ oc apply -k chapter11/governance
namespace/acm-policies-sample created
placementrule.apps.open-cluster-management.io/placement-policy-etcdencryption created
placementbinding.policy.open-cluster-management.io/binding-policy-etcdencryption created
policy.policy.open-cluster-management.io/policy-etcdencryption created
Now, access the Governance feature on the ACM web console to check the policy we just put in place.
Figure 11.40 – ACM governance console
Click on Policies and access policy-etcdencryption to see the details.
Figure 11.41 – ACM governance – violation details
In the Further reading section of this chapter, you will find a link to a repository that contains several reusable policies that you can use as is or as samples to create your own policies.
As you have seen, the ACM governance feature is simple to understand and use. Now think about the policies that you would like to have monitored or enforced in your clusters and start deploying your own policies!
Multicluster observability is an ACM feature that is intended to be a central hub for metrics, alerting, and monitoring systems for all clusters, whether hub clusters or managed clusters.
As this feature handles a large amount of data, it is recommended to back its storage with fast disks. Red Hat has tested the solution in conjunction with Red Hat OpenShift Data Foundation and fully supports that combination.
Although that is the recommendation, the only hard prerequisite is a storage solution that provides object (S3-compatible) storage, such as those commonly found in most cloud providers (for example, Amazon S3).
Since observability is a feature of an ACM operator, there aren’t many prerequisites. The following are the requirements:
Important Note
It is important to configure encryption when you have sensitive data persisted. The Thanos documentation has a definition of supported object stores. Check the link in the Further reading section at the end of this chapter.
Since observability runs on top of ACM, its creation depends on a Custom Resource (CR) that will trigger the creation of the Multicluster Observability instance.
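A minimal sketch of that CR is shown here, assuming the metrics object storage secret created later in this section (thanos-object-storage); treat it as an illustration rather than a complete specification:

apiVersion: observability.open-cluster-management.io/v1beta2
kind: MultiClusterObservability
metadata:
  name: observability
spec:
  observabilityAddonSpec: {}          # default metrics collection settings for managed clusters
  storageConfig:
    metricObjectStorage:
      name: thanos-object-storage     # secret containing the thanos.yaml configuration
      key: thanos.yaml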
The following diagram demonstrates a high-level architecture of the objects involved in the observability solution. It serves as a reference for which objects are created when enabling the observability service:
Figure 11.42 – Multicluster observability architecture
Follow these instructions to enable multicluster observability:
$ LOCATION=eastus #[1]
$ RESOURCEGROUP=aro-rg #[2]
$ CLUSTER=MyHubCluster #[3]
$ STORAGEBLOB=observsto #[4]
$ az storage account create --name $STORAGEBLOB --resource-group $RESOURCEGROUP --location $LOCATION --sku Standard_ZRS --kind StorageV2 #[5]
$ az ad signed-in-user show --query objectId -o tsv | az role assignment create --role "Storage Blob Data Contributor" --assignee @- --scope "/subscriptions/11111111-2222-a1a1-d3d3-12mn12mn12mn/resourceGroups/$RESOURCEGROUP/providers/Microsoft.Storage/storageAccounts/$STORAGEBLOB" #[6]
$ az storage container create --account-name $STORAGEBLOB --name container-observ --auth-mode login #[7]
"created": true
$ az storage account show-connection-string --name $STORAGEBLOB
"connectionString": "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=observsto;AccountKey=sfsfoefoosdfojevntoaa/dsafojosjfsodfsafdsaf==" #[8]
Let’s take a look at what the highlighted numbers mean:
$ oc create namespace open-cluster-management-observability
$ DOCKER_CONFIG_JSON=$(oc extract secret/pull-secret -n openshift-config --to=-) #[1]
$ oc create secret generic multiclusterhub-operator-pull-secret -n open-cluster-management-observability --from-literal=.dockerconfigjson="$DOCKER_CONFIG_JSON" --type=kubernetes.io/dockerconfigjson #[2]
(.. omitted ..)
thanos.yaml: |
type: AZURE
config:
storage_account: observsto #[3]
storage_account_key: sfsfoefoosdfojevntoaa/dsafojosjfsodfsafdsaf== #[3]
container: container-observ #[3]
endpoint: blob.core.windows.net
max_retries: 0
(.. omitted ..)
Let’s take a look at what the highlighted numbers mean:
Configuration file available at https://github.com/PacktPublishing/OpenShift-Multi-Cluster-Management-Handbook/blob/main/chapter11/acm-observability/thanos-object-storage.yaml.
Figure 11.43 – MultiClusterObservability creation
Figure 11.44 – MultiClusterObservability
Figure 11.45 – Configuring an instance for MultiClusterObservability
Figure 11.46 – Configuring an instance for MultiClusterObservability
#[1]: Notice that the Observability option, with a link to Grafana, is now available.
Figure 11.47 – MultiClusterObservability dashboard view sample
Now, you can count on this amazing ACM feature to help you and your organization monitor all your managed Kubernetes clusters from a central pane, regardless of the infrastructure or cloud provider they run on. In the next subsection, we will show you an option that gives you even more control over your clusters.
As we have seen so far, observability can be a great ally for monitoring all your clusters from a central view. Now we will go even further and show you the icing on the cake: alerting, one more feature to help you manage your clusters.
As shown in Figure 11.42, AlertManager is a resource that is part of the observability architecture. We will show a sample now that you can use to enable this feature and get alerts from all managed clusters.
AlertManager is a tool that can send alerts to a set of other systems, such as email, PagerDuty, Opsgenie, WeChat, Telegram, Slack, and custom webhooks. For this example, we will use Slack, a popular messaging tool, as the receiver for all of our alerts.
First, you will need a Slack app to receive the alerts: go to https://api.slack.com/messaging/webhooks and follow the instructions to create the app and configure a channel. When you finish configuring the Slack app, you will get a webhook endpoint similar to the following: https://hooks.slack.com/services/T03ECLDORAS04/B03DVP1Q91D/R4Oabcioek. Save the webhook URL in a safe place, as it will be used in the next steps.
To configure AlertManager, you will need to create a new file named alertmanager.yaml that contains the webhook you saved previously. The complete YAML file is available in our GitHub repository for your reference and use: https://github.com/PacktPublishing/OpenShift-Multi-Cluster-Management-Handbook/blob/main/chapter11/acm-observability/alertmanager.yaml.
global:
  slack_api_url: 'https://hooks.slack.com/services/T03ECLDORAS04/B03DVP1Q91D/R4Oabcioek' #[1]
  resolve_timeout: 1m
route:
  receiver: 'slack-notifications'
  (.. omitted ..)
  routes:
    - receiver: slack-notifications #[2]
      match_re:
        severity: critical|warning #[3]
receivers:
  - name: 'slack-notifications'
    slack_configs:
      - channel: '#alertmanager-service' #[4]
        send_resolved: true
        icon_url: https://avatars3.githubusercontent.com/u/3380462
        title: |-
          [{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] {{ .CommonLabels.alertname }} for {{ .CommonLabels.job }}
(.. omitted ..)
In the preceding code, we have highlighted some parts with numbers. Let’s take a look:
The next step is to apply the new alertmanager.yaml file to the ACM observability namespace:
$ oc -n open-cluster-management-observability create secret generic alertmanager-config --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n open-cluster-management-observability replace secret --filename=-
The alertmanager.yaml file must be in the directory from which you run the command. Wait until the new AlertManager pods are created, and you will start receiving [Firing] or [Resolved] alerts on the configured channel. See an example in the following screenshot:
Figure 11.48 – AlertManager multicluster alerts
That's it; we have AlertManager set up and sending alerts to a Slack channel! In this section, you have seen the observability feature, from installation to configuration and use. It should help you in your multi-cluster journey to monitor all your clusters, no matter which provider they run on.
In this chapter, you have been introduced to Red Hat ACM and have seen an overview of its features and how it can help you manage several clusters. Now you understand that Red Hat ACM provides features to manage multiple clusters, keep them compliant with the policies you define for them, deploy workloads into many of them at once, and also monitor all of them from a central pane.
We also walked through the ACM installation process, provisioned a new cluster on AWS using ACM, saw how to deploy an application by using either the embedded ACM Application Subscription model or integrated with Argo CD, had a brief overview of the ACM governance feature, and, finally, enabled the observability feature to monitor multiple clusters and aggregate metrics on ACM.
In today’s world, handling multiple clusters over multiple providers, either on-premises or in the cloud, is a reality in most companies; therefore, a multi-cluster management tool is a must-have. Red Hat ACM can provide you with the features you need to manage all clusters from a centralized place. We encourage you to explore and start using ACM now to reap all the benefits of this great tool.
Continue to the next chapter to learn how Red Hat Advanced Cluster Security can help you to keep your Kubernetes and OpenShift clusters secure.
Looking for more information? Check out the following references to get more information about Red Hat ACM: