RPM-based patching with Pulp

In the previous section of this chapter, we created two repositories for our CentOS 7 build—one for the operating system release and another to contain the updates.

At a high level, the process of updating a CentOS 7 build from these repositories is as follows:

  1. Move aside any existing repository definitions in /etc/yum.repos.d to ensure we only load repositories from the Pulp server.
  2. Deploy the appropriate configuration using Ansible.
  3. Employ Ansible to pull the updates (or any required packages) from the Pulp server using the new configuration.

Before we proceed with creating the appropriate playbooks, let's take a look at what the repository definition file would look like on our CentOS 7 machine if we created it by hand. Ideally, we want it to look something like this:

[centos-os]
name=CentOS-os
baseurl=https://pulp.example.com/pulp/repos/centos76-os
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
sslverify=0

[centos-updates]
name=CentOS-updates
baseurl=https://pulp.example.com/pulp/repos/centos7-07aug19
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
sslverify=0

There's nothing particularly unique about this configuration: we are using the relative URLs we defined earlier when we created our repositories with pulp-admin. We are using GPG checking of package integrity, along with the CentOS 7 RPM GPG key, which we know will already be installed on our CentOS 7 machine. The only tweak we've had to make to this otherwise standard configuration is to turn off SSL verification, since our demo Pulp server uses a self-signed certificate. Of course, if we were using an enterprise certificate authority and the CA certificates were installed on each machine, this problem would go away.
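
For completeness, if we were using an internal CA, we could keep SSL verification enabled instead. The following is a minimal sketch, assuming the CA certificate has been deployed to a hypothetical path of /etc/pki/tls/certs/internal-ca.crt:

[centos-os]
name=CentOS-os
baseurl=https://pulp.example.com/pulp/repos/centos76-os
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
sslverify=1
sslcacert=/etc/pki/tls/certs/internal-ca.crt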

Given the power of Ansible, we can be a bit clever about how we do this. There's no point creating and deploying static configuration files when we know that, at some point, we're going to update the repository—meaning, at the very least, that baseurl might change.

Let's start off by creating a role called pulpconfig to deploy the correct configuration. The role's tasks/main.yml should look like this:

---
- name: Create a directory to back up any existing REPO configuration
  file:
    path: /etc/yum.repos.d/originalconfig
    state: directory

- name: Move aside any existing REPO configuration
  shell: mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/originalconfig

- name: Copy across and populate Pulp templated config
  template:
    src: templates/centos-pulp.repo.j2
    dest: /etc/yum.repos.d/centos-pulp.repo
    owner: root
    group: wheel

- name: Clean out yum database
  shell: "yum clean all"

The accompanying templates/centos-pulp.repo.j2 template should look like this:

[centos-os]
name=CentOS-os
baseurl=https://pulp.example.com/pulp/repos/{{ centos_os_relurl }}
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
sslverify=0

[centos-updates]
name=CentOS-updates
baseurl=https://pulp.example.com/pulp/repos/{{ centos_updates_relurl }}
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
sslverify=0

Notice the variable substitutions at the end of each of the baseurl lines—these allow us to keep the same template (which should be common for most purposes) but change the repository URL over time to adapt to updates.

Next, we will define a second role specifically for updating the kernel—this will be very simple for our example and tasks/main.yml will contain the following:

---
- name: Update the kernel
  yum:
    name: kernel
    state: latest

Finally, we will define site.yml at the top level of the playbook structure to pull all of this together. We could, as we discussed previously, define the variables for the relative URLs in a whole host of places, but for the sake of this example, we will put them in the site.yml playbook itself:

---
- name: Install Pulp repos and update kernel
  hosts: all
  become: yes

  vars:
    centos_os_relurl: "centos76-os"
    centos_updates_relurl: "centos7-07aug19"

  roles:
    - pulpconfig
    - updatekernel
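
For reference, the complete structure we have built up should now look something like this (a conventional Ansible roles layout):

site.yml
roles/
  pulpconfig/
    tasks/
      main.yml
    templates/
      centos-pulp.repo.j2
  updatekernel/
    tasks/
      main.yml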

Now, we can run this in the usual manner. Assuming a hypothetical inventory file named hosts that contains our CentOS 7 machine, the invocation would look something like this:
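
# Run the playbook against the hosts in our inventory
$ ansible-playbook -i hosts site.yml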

So far, so good: the changed statuses in the play output tell us that the new configuration was applied successfully.

Those with a keen eye will have observed the warning on the Clean out yum database task: Ansible detects when a raw shell command overlaps in functionality with a module and recommends that you use the module instead, for reasons of repeatability and idempotency, as we discussed earlier. However, as we want to ensure that all traces of any earlier yum databases are removed (stale data here can present problems), I have adopted a brute-force approach to cleaning up the old databases.
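
If you would rather suppress the warning entirely, the shell module (at the time of writing) accepts a warn argument for exactly this purpose. A minimal sketch:

- name: Clean out yum database
  shell: "yum clean all"
  args:
    warn: false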

Now, as I'm sure you will have spotted, the great thing about this approach is that if, say, we want to test our 08aug19 repository snapshot that we created in the previous section, all we have to do is modify the vars: block of site.yml so that it looks like this:

  vars:
    centos_os_relurl: "centos76-os"
    centos_updates_relurl: "centos7-08aug19"

Hence, we can reuse the same playbook, roles, and templates in a variety of scenarios simply by changing one or two variable values. In an environment such as AWX, these variables could even be overridden using the GUI, making the whole process even easier.
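
For example, the same override could be applied without editing site.yml at all by passing the variable at runtime (again assuming our hypothetical hosts inventory):

# -e (short for --extra-vars) takes precedence over the play's vars: block
$ ansible-playbook -i hosts site.yml -e "centos_updates_relurl=centos7-08aug19"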

In this way, combining Ansible with Pulp lends itself to a really stable enterprise framework for managing, distributing, and even testing updates. However, before we look at this process on Ubuntu, a word on rollbacks. In the previous section, we hypothesized an example where our 08aug19 snapshot failed testing and so had to be deleted. As far as CentOS 7 servers are concerned, rollbacks are not as straightforward as simply reinstating the earlier repository definitions and performing an update, since yum will see that newer packages are already installed and take no action.

The Pulp repository does, of course, provide a stable base to roll back to. However, rollbacks are generally quite a manual process, as you must identify the transaction ID in the yum database that you want to roll back to, validate the actions that will be performed, and then perform the rollback itself. This, of course, can be automated, provided you have a reliable way of retrieving the transaction ID.

Yum records every transaction in its history database, which gives us a simple way to identify the transaction ID for the kernel update we just automated and to establish the details of the change that was performed.
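
For example (the transaction ID of 4 here is purely hypothetical; use whatever ID the listing reports on your system):

# List recent transactions involving the kernel package
$ yum history list kernel

# Show exactly what the chosen transaction changed
$ yum history info 4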

Then, we can (if we so choose) roll back the transaction with the yum history undo subcommand.
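
Again using our hypothetical transaction ID:

# Revert the package changes made by the given transaction
$ yum history undo 4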

Using this simple process and the playbooks offered here as a guide, it should be possible to establish a solid, stable, automated update platform for any RPM-based Linux distribution.

In the next section, we will look at the method we can use to perform the same set of tasks, except for DEB-based systems such as Ubuntu.
