Enterprise Kubernetes Operator

Starting from MongoDB 4.0, the MongoDB Enterprise Operator for Kubernetes enables users to deploy and manage MongoDB clusters directly from the Kubernetes API. This removes the need to connect directly to Cloud Manager or Ops Manager, and simplifies the deployment and management of MongoDB clusters on Kubernetes.

Cloud Manager is, in most aspects, the SaaS equivalent of Ops Manager. 

The Enterprise Kubernetes Operator can be installed using Helm, the package manager for Kubernetes. First, we have to clone MongoDB's GitHub repository: https://github.com/mongodb/mongodb-enterprise-kubernetes.git.
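
As a quick sketch, cloning the repository and moving into the local copy would look similar to the following (the directory name simply follows the repository name):

git clone https://github.com/mongodb/mongodb-enterprise-kubernetes.git
cd mongodb-enterprise-kubernetes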

After we change the directory to our local copy, we can issue the following command:

helm install helm_chart/ --name mongodb-enterprise
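
Note that the --name flag is Helm 2 syntax. If we are using Helm 3 instead, the release name is passed as a positional argument, so the equivalent command would be along the lines of the following:

helm install mongodb-enterprise helm_chart/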

This installs the Operator from our local copy of the Helm chart; the next step is to configure it.

To configure our local installation, we need to apply a Kubernetes ConfigMap file. The configuration settings that we need to copy from Ops Manager or Cloud Manager are as follows:

  • Base URL: The URL of our Ops Manager or Cloud Manager instance. For Cloud Manager, this will be https://cloud.mongodb.com; for Ops Manager, it should be similar to http://<MY_SERVER_NAME>:8080/.
  • Project ID: The ID of the Ops Manager or Cloud Manager project that the Enterprise Kubernetes Operator will deploy into. This should be created within Ops Manager or Cloud Manager, and is a unique ID used to organize MongoDB clusters and provide a security boundary for the project. It is a 24-character hexadecimal string.
  • User: An existing Ops Manager username. This is the email of a user in Ops Manager that we want the Enterprise Kubernetes Operator to use when connecting to Ops Manager.
  • Public API key: This is used by the Enterprise Kubernetes Operator to connect to the Ops Manager REST API endpoint.

The public API key is created by clicking on our username in the Ops Manager console and selecting Account. On the next screen, we can click on Public API Access, then click on the Generate key button and provide a description. The next screen will display the public API key that we need.

This is the only chance we will ever have to view this API key, so we need to write it down; otherwise, we will need to generate a new key.
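
If we want to sanity-check the user and public API key before handing them to the Operator, one option is to call the Ops Manager or Cloud Manager public API directly using HTTP digest authentication. A sketch against Cloud Manager would look similar to the following, substituting our own user, key, and project ID:

curl --user "<<User as above>>:<<Public API key as above>>" --digest "https://cloud.mongodb.com/api/public/v1.0/groups/<<Project ID from above>>"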

Once we have these values, we can create the Kubernetes ConfigMap file with any name we want, as long as it's a .yaml file. In our case, we will name it mongodb-project.yaml.

Its structure will be as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: <<any sample name we choose (1)>>
  namespace: mongodb
data:
  projectId: <<Project ID from above>>
  baseUrl: <<Base URL from above>>

Then we can apply this file to Kubernetes using the following command:

kubectl apply -f mongodb-project.yaml
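
To confirm that the ConfigMap has been created in the mongodb namespace, we can read it back; for example:

kubectl -n mongodb get configmap <<any sample name we choose (1)>> -o yaml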

The last step we need to take is to create the Kubernetes secret. This can be done using the following command:

kubectl -n mongodb create secret generic <<any sample name for credentials we choose>> --from-literal="user=<<User as above>>" --from-literal="publicApiKey=<<our public API key as above>>"

We need to note down the credentials name, as we will need it in the subsequent steps.
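
To make the placeholders concrete, a purely hypothetical invocation (with made-up values for the secret name, user, and API key) could look like this:

kubectl -n mongodb create secret generic my-credentials --from-literal="user=admin@example.com" --from-literal="publicApiKey=1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d"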

Now we are ready to deploy our replica set using Kubernetes! We can create a replica-set.yaml file with the following structure:

apiVersion: mongodb.com/v1
kind: MongoDbReplicaSet
metadata:
  name: <<any replica set name we choose>>
  namespace: mongodb
spec:
  members: 3
  version: 3.6.5
  persistent: false
  project: <<the name value (1) that we chose in metadata.name of the ConfigMap file above>>
  credentials: <<the name of the credentials secret that we chose above>>

We apply the new configuration using kubectl apply:

kubectl apply -f replica-set.yaml
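
The Operator will then provision the underlying Kubernetes objects for the replica set. Assuming it manages the members through a StatefulSet in the mongodb namespace, we can watch the pods come up with the following commands:

kubectl -n mongodb get statefulsets
kubectl -n mongodb get pods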

We will be able to see our new replica set in Ops Manager.

To troubleshoot and identify issues in MongoDB running on Kubernetes, we can use kubectl logs to inspect logs, and kubectl exec to open a shell into one of the containers that is running MongoDB.
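
For example, assuming the usual StatefulSet naming where the first member's pod is called <<any replica set name we choose>>-0, and assuming the container image ships /bin/bash, the following commands would inspect its logs and open a shell inside it:

kubectl -n mongodb logs <<any replica set name we choose>>-0
kubectl -n mongodb exec -it <<any replica set name we choose>>-0 -- /bin/bash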