Previous chapters have gone into detail about using roles and modules to interact with parts of the Ansible Automation Platform (AAP). This chapter covers using the principles of Configuration as Code (CaC) and Continuous Integration and Continuous Delivery (CI/CD) to maintain configuration and interact with services in the AAP. Some of this has been touched on in other chapters, such as triggering a project update or a configuration change when a pull request has been completed, or running a regular backup of the installation. The goal of this chapter is to bring those ideas together in a more cohesive format, with examples of how to integrate them into CI/CD pipelines.
In this chapter, we’re going to cover the following main topics:
This chapter will have multiple playbooks. All code referenced in this chapter is available at https://github.com/PacktPublishing/Demystifying-Ansible-Automation-Platform/tree/main/ch12. It is assumed that you have Ansible installed in order to run the code provided.
CI/CD pipelines come in many forms. A few examples are GitHub Actions, GitLab Pipelines, Bitbucket Pipelines, and Azure Pipelines. While each has its own way of doing things, they can all be summed up as code that runs when triggered either by an event in a repository or by an outside request.
Triggers can include merge requests, merges, curl requests, webhooks, and other events. Pipelines can run on generic or purpose-built containers. In many cases, the same container image used for execution environments can be used as a CI/CD runner image.
Refer to the documentation on the Git server that is being used on how to create pipelines specific to that technology. While the actual code implementation will be the same, there can be some key differences. This chapter will go into some detail on GitHub and GitLab implementations.
A webhook is used when an event on the Git server is set to trigger a call out to one of the AAP services, usually the Automation controller. This launches a job and then sends information back. Just like pipelines, webhooks are unique to each Git service. For webhooks specifically, refer to the webhook documentation for the Automation controller at https://docs.ansible.com/automation-controller/latest/html/userguide/webhooks.html.
The limitation is that webhooks can only trigger workflows and job templates. However, that limitation can be worked around by having the triggered playbook use modules and roles to perform whatever actions are needed. For example, if you need to use prompt-on-launch variables that do not map 1:1 to the payload from the Git server, a job can take the payload and then launch another job using the correct prompts.
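As a sketch of that payload-to-prompts pattern, the following hypothetical playbook maps a field from the webhook payload to a prompt-on-launch variable and relaunches the real job. The template name and variable mapping are illustrative assumptions, not part of the book's repository; the `tower_webhook_payload` variable is what the Automation controller injects into webhook-triggered jobs.

```yaml
//hypothetical: relaunch_with_prompts.yml
---
- name: Relaunch a job with prompts mapped from the webhook payload
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Launch the real job with prompt-on-launch variables
      ansible.controller.job_launch:
        job_template: "Deploy Application"   # hypothetical template name
        extra_vars:
          # Map a payload field to the variable the template prompts for
          target_branch: "{{ tower_webhook_payload.ref | default('main') }}"
        wait: true
```

This keeps the webhook itself simple while still letting any job template with arbitrary prompts be driven from Git events.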
Another issue with both the webhooks and CI/CD pipelines is that the Git server must be able to reach port 443 of the Automation controller. In most cases, this is not an issue; however, if you’re using a public repository on https://github.com/, it may not be feasible to reach an internal server.
All of these are tools that can be used to trigger code for a variety of purposes. The following sections go over common uses for pipelines and webhooks, along with useful code for these actions.
Keeping projects and other objects updated in the AAP can be painful and burdensome without using automation. CI/CD automation is a great way to solve these problems. This section will focus on using tasks to solve this problem.
Scattered throughout the previous chapters are configuration files that define the contents of both Automation hub and the Automation controller. When referenced, these files can be used by the redhat_cop.controller_configuration roles to manage the Automation controller. They have been collected in the ch12/controller/configs folder.
There are two main reasons to trigger code for maintaining state: to test a pull request, or to update a project or configuration after a merge has occurred. While the latter maintains state, the former allows for testing and provides checks prior to a merge, as follows:
//controller/github_ci.yml
---
The first step determines when the code runs. In this case, it runs when a push or merge is made to the main branch as follows:
on:
  push:
    branches:
      - main
jobs:
  Controller-configuration:
    name: Deploy configuration to controller
    runs-on: ubuntu-latest
    steps:
This defines a job, which container it runs on, and then what actions are taken. While this is an excerpt, steps to install Python and Galaxy requirements would be run beforehand. This action runs the playbook with extra variables as inputs, as follows:
      - name: "Perform playbook update"
        run: ansible-playbook configure_controller.yml -e controller_hostname=https://controller.node -e controller_password=${{ secrets.controller_password }}
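The omitted setup steps might look like the following sketch; the requirements file names and action versions are illustrative assumptions, not taken from the chapter's pipeline.

```yaml
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Install Python and Galaxy requirements
        run: |
          pip install -r requirements.txt
          ansible-galaxy collection install -r requirements.yml
```

Baking these dependencies into a purpose-built runner image instead would make each pipeline run faster at the cost of maintaining the image.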
The same can be done on GitLab using the following code:
//controller/.gitlab-ci.yml
---
update_on_mr:
  stage: test
  only:
    - merge_requests
  script:
    - ansible-playbook configure_controller.yaml --extra-vars "controller_branch=$CI_COMMIT_BRANCH"
The key concepts are setting up the environment to run the tasks in, deciding when to trigger a task, and finally the task itself. The environment can be set up beforehand by building purpose-built containers, or through tasks that execute every time. The when is determined by events and triggers unique to the Git system, and the tasks are generally the playbooks to run.
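In GitLab, the run-setup-every-time approach can be sketched with a before_script section; the requirements file names below are assumptions rather than files confirmed to exist in the chapter's repository.

```yaml
update_on_mr:
  stage: test
  only:
    - merge_requests
  before_script:
    # Install the Python and Galaxy dependencies on each run
    - pip install -r requirements.txt
    - ansible-galaxy collection install -r requirements.yml
  script:
    - ansible-playbook configure_controller.yaml --extra-vars "controller_branch=$CI_COMMIT_BRANCH"
```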
Another playbook to run is one that updates a project. This is key because when a repository merge is completed, that change can then be pushed to the Automation controller as follows:
//controller/project_update.yml
---
- name: Update Project
  ansible.controller.project_update:
    name: Network Playbooks
    organization: Default
    wait: true
    timeout: 600
    interval: 10
This can also be applied to push updates for collections and execution environments to the Automation hub when changes and releases occur in their respective Git repositories. An example of this is to publish a built collection to an automation hub as follows:
//hub/publish_collection.yml
---
- name: Publish Collection
  redhat_cop.ah_configuration.ah_collection:
    namespace: custom_collection_space
    name: custom_collection
    path: custom_collection_space-custom_collection-1.0.0.tar.gz
    auto_approve: false
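Before publishing, the collection tarball needs to be built. A minimal build task might look like the following sketch; the source directory path is a hypothetical placeholder.

```yaml
- name: Build the collection tarball
  ansible.builtin.command:
    cmd: ansible-galaxy collection build --force
    chdir: custom_collection/   # hypothetical path to the collection source
```

Running the build in the same pipeline job as the publish step ensures the tarball version matches the tagged release being published.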
While these are good examples of what can be done to keep configurations up to date, it is also possible to use the CI/CD jobs to do more than just keep configurations up to date. The following section will look at using pipelines to launch jobs.
Another use case for CI/CD is for launching jobs. Examples of this are scheduled jobs and integration tests. It is good practice to run workflows and jobs with test input and check the results. This can help detect changes that have happened in either the code or the environment.
The playbooks used in this section can be used as jobs in the Automation controller or in their respective Git service pipelines. The idea is to use GitLab/GitHub workflows or a controller job to initiate them. The playbook that follows is a demonstration of what can be done.
The playbook takes several inputs as follows:
In addition, the modules used have several common inputs as follows:
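These common inputs are the connection and authentication options shared by the ansible.controller modules. One way to supply them once for a whole play is through module defaults; the hostname and credential lookup below are placeholders, and this sketch assumes the collection's controller action group.

```yaml
- hosts: localhost
  gather_facts: false
  module_defaults:
    group/ansible.controller.controller:
      controller_host: https://controller.node   # placeholder hostname
      controller_username: admin
      controller_password: "{{ lookup('env', 'CONTROLLER_PASSWORD') }}"
      validate_certs: true
```

Alternatively, the same values can be set through environment variables or passed to each module individually.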
The inputs and variables are used in the workflows/launch_workflows.yaml playbook that is built to launch and interact with the workflow created in Chapter 10, Creating Job Templates and Workflows.
The first task takes the workflow name and extra variables and launches the workflow, registering data about the running job. wait should be false; otherwise, the task will wait until the workflow finishes, as follows:
- name: "Launch Workflow {{ workflow_name }}"
  ansible.controller.workflow_launch:
    workflow_template: "{{ workflow_name }}"
    extra_vars: "{{ workflow_extra_vars_dict }}"
    wait: false
  register: workflow_data
The second task waits for a specific node to finish in order to get results from that node. ignore_errors is important if the job node is designed to fail, because otherwise the task will report as failed when the job fails, halting the playbook.
- name: Wait for a workflow node to finish
  ansible.controller.workflow_node_wait:
    workflow_job_id: "{{ workflow_data.id }}"
    name: "{{ workflow_node_to_check }}"
    timeout: 90
  ignore_errors: true
ignore_errors should be used here only if the job is designed to fail. Refer to Chapter 11, Creating Advanced Workflows and Jobs, to read about where you would use jobs designed to fail.
The third task approves the approval node as follows:
- name: Wait for approval node to activate and approve
  ansible.controller.workflow_approval:
    workflow_job_id: "{{ workflow_data.id }}"
    name: "{{ approval_template_name }}"
    interval: 10
    timeout: 200
    action: approve
The final task waits for the workflow to finish, as follows:
- name: Wait for Workflow to finish
  ansible.controller.job_wait:
    job_id: "{{ workflow_data.id }}"
    job_type: "workflow_jobs"
    interval: 30
    timeout: 1000
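Pulling the four tasks together, the inputs the playbook expects could be supplied in a vars block like the following sketch; every value shown is illustrative, not from the chapter's workflow.

```yaml
- hosts: localhost
  gather_facts: false
  vars:
    workflow_name: "Deploy Network Workflow"         # illustrative
    workflow_extra_vars_dict:
      environment: test                              # illustrative
    workflow_node_to_check: "node_expected_to_fail"  # illustrative
    approval_template_name: "Approve Deploy"         # illustrative
```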
This is a framework for a job that launches and monitors a workflow. It can be expanded to test for expected outputs or results using the assert module. A task that checks specific output against expected output can be formed as follows:
- name: Assert output matches expected
  ansible.builtin.assert:
    that:
      - output_from_workflow == expected_output_from_workflow
The modules in the preceding playbook were made for interacting with the Automation controller to control jobs. They are tools for designing custom integration tests, depending on what the job template or workflow needs. The following section covers another tool: ad hoc commands, which run a specific module against a set of hosts.
On most occasions, there is a case for using an actual playbook, but when things start getting to playbooks within playbooks, or information is needed from a host that is not in the inventory, it is possible to get creative. We have never actually found a reason to use the ad hoc modules and roles; however, it is good to know that they are there. If there ever were a reason to use them, it would be in a CI/CD job outside of the Automation controller.
The modules have several important inputs as follows:
There are other inputs that the module can take, which can be found in the module documentation, but these are the important ones. The playbook is an example of using the ad hoc modules to start the command task, wait for it to finish, and cancel an in-progress ad hoc job with their corresponding modules:
//ad_hoc/launch_ad_hoc.yaml
The first step is to launch the shell command as follows:
- name: Launch an ad hoc command
  ansible.controller.ad_hoc_command:
    module_name: shell
    module_args: ls -a
    inventory: Demo Inventory
    credential: Demo Credential
    wait: false
  register: command
The second step is to wait for it to complete as follows:
- name: Wait for ad hoc command max 120s
  ansible.controller.ad_hoc_command_wait:
    command_id: "{{ command.id }}"
    timeout: 120
The final module allows the command job to be canceled as follows:
- name: Cancel the ad hoc command
  ansible.controller.ad_hoc_command_cancel:
    command_id: "{{ command.id }}"
    timeout: 120
The modules return data about the job, just like the other job launch modules. A more useful use case is using CI/CD to run regular backups, along with a process to restore a backup of the AAP installation, which is covered in the following section.
Backup and restore commands should not be run as jobs or templates on the Automation controller they are being used against. This makes them ideal candidates for CI/CD playbooks. These playbooks take the same configuration files from Chapter 2, Installing Ansible Automation Platform, and use them to back up and restore an AAP installation. For demonstration purposes, these files are available in this chapter's repository.
The playbook takes several inputs as follows:
The playbook will load the variable file from the installation that was created in Chapter 2, Installing Ansible Automation Platform, and then run through the roles to download, prepare, and back up the installation, as follows:
//backup_restore/aap_backup.yaml
---
vars_files:
  - inventory_vars/variables.yml
roles:
  - redhat_cop.aap_utilities.aap_setup_download
  - redhat_cop.aap_utilities.aap_setup_prepare
  - redhat_cop.aap_utilities.aap_backup
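To make backups regular rather than manual, the pipeline running this playbook can be put on a schedule. GitHub Actions, for instance, supports cron triggers; the schedule and step details below are an illustrative sketch, not part of the chapter's repository.

```yaml
on:
  schedule:
    - cron: '0 2 * * *'   # nightly at 02:00 UTC; illustrative schedule
jobs:
  backup:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the backup playbook
        run: ansible-playbook backup_restore/aap_backup.yaml
```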
It is recommended to copy the backup off the server it was created on to another location. This can be done with the following task:
- name: Copy Backup to another directory
  ansible.builtin.copy:
    src: "{{ aap_setup_working_dir }}/backup/automation-platform-backup-latest.tar.gz"
    dest: "automation-platform-backup-{{ ansible_date_time.date }}.tar.gz"
For the restore, the variable to add to the mix is aap_restore_file, which specifies the location of the backup. The only difference is that instead of running the aap_backup role, you use the aap_restore role. The playbook can be found at backup_restore/aap_restore.yaml.
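Following that description, a restore playbook might look like this sketch; the backup file location given for aap_restore_file is an assumption.

```yaml
//sketch of a restore playbook
---
vars_files:
  - inventory_vars/variables.yml
vars:
  aap_restore_file: automation-platform-backup-latest.tar.gz   # assumed location
roles:
  - redhat_cop.aap_utilities.aap_setup_download
  - redhat_cop.aap_utilities.aap_setup_prepare
  - redhat_cop.aap_utilities.aap_restore
```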
It is recommended to spin up temporary servers and restore the backup files periodically. A backup is never a backup unless it’s been tested.
This chapter has gone over several tools and jobs to use with webhooks or CI/CD pipelines. It has covered CI/CD pipelines, workflow tools, ad hoc commands, and backup and restore options.
The following chapter will go into integrating your Automation controller with other logging and monitoring services.
This chapter discussed various CI/CD pipelines. The documentation for some of the most popular pipelines can be found at the following links: