Making scalable static configuration changes

It is vital that the configuration changes that we make are version controlled, repeatable, and reliable—thus, let's consider an approach that achieves this aim. Let's start with a simple example by revisiting our SSH daemon configuration. On most servers, this is likely to be static, as requirements such as restricting remote root logins and disabling password-based logins are likely to apply across an entire estate. Equally, the SSH daemon is normally configured through one central file—/etc/ssh/sshd_config.

On an Ubuntu server, the default configuration is very simple, consisting of just six lines if we remove all the whitespace and comments. Let's make some modifications to this file so that remote root logins are denied, X11Forwarding is disabled, and only key-based logins are allowed, as follows:

ChallengeResponseAuthentication no
UsePAM yes
X11Forwarding no
PrintMotd no
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
PasswordAuthentication no
PermitRootLogin no

We will store this file within our roles/ directory structure and deploy it with the following role tasks:

---
- name: Copy SSHd configuration to target host
  copy:
    src: files/sshd_config
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: 0644

- name: Restart SSH daemon
  service:
    name: ssh
    state: restarted

Here, we use the Ansible copy module to copy the sshd_config file we have created and stored within the role itself to our target host and ensure it has the ownership and mode that's suitable for the SSH daemon. Finally, we restart the SSH daemon to pick up the changes (note that this service name is valid on Ubuntu Server and may vary on other Linux distributions). Thus, our completed roles directory structure looks like this:

roles/
└── securesshd
    ├── files
    │   └── sshd_config
    └── tasks
        └── main.yml
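
As an aside, the copy module also supports a validate parameter, which can be used to check the new file for syntax errors before it replaces the live one. A minimal sketch of how the copy task above might use it follows; the path to the sshd binary is an assumption and may vary between distributions:

- name: Copy SSHd configuration to target host
  copy:
    src: files/sshd_config
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: 0644
    # Assumed binary path; sshd -t -f performs a syntax check on the candidate file
    validate: /usr/sbin/sshd -t -f %s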

Now, we can run this to deploy the configuration to our test host, as follows:
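
A minimal playbook to apply the role might look like the following sketch; the play name, target host group, and privilege escalation settings are assumptions that will depend on your own inventory:

---
- name: Apply secure SSH daemon configuration
  hosts: all
  become: yes

  roles:
    - securesshd

This could then be run with a command such as ansible-playbook -i hosts site.yml, assuming an inventory file named hosts and the playbook saved as site.yml.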

Deploying the configuration in this way gives us a number of advantages over the methods we have explored previously, as listed here:

  • The role itself can be committed to a version control system, thus implicitly bringing the configuration file itself (in the files/ directory of the role) under version control.
  • Our role tasks are very simple—it is very easy for someone else to pick up this code and understand what it does, without the need to decipher the regular expressions.
  • It doesn't matter what state the configuration on our target machines is in, especially in terms of whitespace or configuration format. The pitfalls discussed at the end of the previous section are avoided completely because we simply overwrite the file on deployment.
  • All machines have an identical configuration, not just in terms of directives, but in terms of order and formatting, thus ensuring it is easy to audit configuration across an enterprise. 

Thus, this role represents a big step forward in terms of enterprise-scale configuration management. However, let's see what happens if we run the role against the same host a second time. The resulting output can be seen in the following screenshot:

From the preceding screenshot, we can see that Ansible has determined that the SSH configuration file is unmodified since the last run, and hence the ok status is returned. In spite of this, the changed status of the Restart SSH daemon task indicates that the SSH daemon has been restarted, even though no configuration change was made. Restarting system services is normally disruptive, so it should be avoided unless absolutely necessary. In this case, we would not wish to restart the SSH daemon unless a configuration change has been made.

The recommended way to handle this is with a handler. A handler is an Ansible construct that is much like a task, except that it only gets called when a change is made. Also, when multiple changes are made to a configuration, the handler can be notified multiple times (once for each applicable change), and yet the Ansible engine batches up all handler calls and runs the handler once, only after the tasks complete. This ensures that when it is used to restart a service, such as in this example, the service is only restarted once, and only then when a change is made. Let's test this now, as follows:

  1. First of all, remove the service restart task from the role and add a notify clause to notify the handler (we shall create this in a minute). The resulting role tasks should look like this:
---
- name: Copy SSHd configuration to target host
  copy:
    src: files/sshd_config
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: 0644
  notify:
    - Restart SSH daemon
  2. Now, we need to create a handlers/ directory in the role and add our previously removed handler code to it so that it looks like this:
---
- name: Restart SSH daemon
  service:
    name: ssh
    state: restarted
  3. The resulting roles directory structure should now look like this:
roles/
└── securesshd
    ├── files
    │   └── sshd_config
    ├── handlers
    │   └── main.yml
    └── tasks
        └── main.yml
  4. Now, when we run the playbook twice on the same server (having initially reverted the SSH configuration to the original one), we see that the SSH daemon is only restarted in the instance where we have actually changed the configuration, as shown in the following screenshot:

To further demonstrate handlers before we move on, let's consider this enhancement to the role tasks:

---
- name: Copy SSHd configuration to target host
  copy:
    src: files/sshd_config
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: 0644
  notify:
    - Restart SSH daemon

- name: Perform an additional modification
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^# Configured by Ansible'
    line: '# Configured by Ansible on {{ inventory_hostname }}'
    insertbefore: BOF
    state: present
  notify:
    - Restart SSH daemon

Here, we deploy our configuration file and then perform an additional modification: we insert a comment at the head of the file that includes an Ansible variable containing the hostname of the target host.

If we revert to the default SSH daemon configuration and then run our new playbook, both tasks will report a changed status on our target host, and yet we see the following:

Pay careful attention to the preceding output and the sequence in which the tasks are run. You will note that the handler is not run in sequence and is actually run once at the end of the play.

Even though our tasks both changed and hence would have notified the handler twice, the handler was only run at the end of the playbook run, minimizing restarts, just as required.
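
Incidentally, if a later task in the same play depended on the restarted daemon, any notified handlers could be forced to run early using the meta module. The following is a brief sketch rather than something this example requires:

# Force any notified handlers to run now, instead of at the end of the play
- meta: flush_handlers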

In this manner, we can make changes to static configuration files at large scales, across many hundreds—if not thousands—of machines. In the next section, we will build on this to demonstrate ways of managing configuration where dynamic data is required—for example, configuration parameters that might change on a per-host or per-group basis.
