Automating OpenStack installations using Ansible – Playbook configuration

Now that the hosts have been configured and all of the network interfaces are set up correctly, we can begin editing the configuration files that are used when the Ansible Playbooks run. In this recipe, we use Git to check out the OpenStack Ansible Deployment (OSAD) Playbooks, the same Playbooks originally developed by Rackspace and used to deploy OpenStack for its customers. We will be using the latest release at the time of writing: Git tag 11.0.3, which refers to the Kilo release (Kilo begins with K, the 11th letter of the alphabet, hence the 11.x version numbers).

The environment we will configure is shown in the following diagram:

[Diagram: the environment used in this recipe – three Controller nodes, one Storage node, one HA Proxy node, and two Compute nodes]

Getting ready

It is important that the previous recipe, Automating OpenStack installations using Ansible – host configuration, has been followed and that all the configured networks are working as expected.

The environment will consist of three Controller nodes, one Storage node, one HA Proxy node, and two Compute nodes. Identify which of these will be the HA Proxy server and log into it as the root user. For convenience, this server will also be used to drive the OpenStack installation.

How to do it...

In this recipe, we are configuring the YAML configuration files that are used by the Playbooks. There are three files that we will be configuring: openstack_user_config.yml, user_variables.yml, and user_secrets.yml. Together, these three files describe our entire installation, from which server in our datacenter runs which OpenStack function, to the passwords and the features to enable in OpenStack.

  1. We first need to get the Ansible Playbooks from GitHub and place them into /opt/os-ansible-deployment. This is achieved with the following command:
    cd /opt
    git clone -b 11.0.3 \
         https://github.com/stackforge/os-ansible-deployment.git
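
     To confirm that the expected tag was checked out, you can run the following commands; the output should report 11.0.3:
    cd /opt/os-ansible-deployment
    git describe --tags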
    
  2. We then proceed to configure the installation by first copying the example and empty configuration files from the cloned GitHub repository to /etc/openstack_deploy, as shown here:
    cp -R /opt/os-ansible-deployment/etc/openstack_deploy /etc
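
     As a quick sanity check, list the copied files; you should see the configuration files edited in the rest of this recipe:
    ls /etc/openstack_deploy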
    
  3. The first file we configure is a large file located at /etc/openstack_deploy/openstack_user_config.yml, which describes our physical environment. The information here is very specific to our installation, describing network ranges, the interfaces used, and the nodes that run each service. The first section defines the CIDRs used in our environment; each used_ips entry is a start,end range of addresses that are already taken by physical hosts, which the Playbooks must not assign to containers:
    ---
    cidr_networks:
      management: 172.16.0.0/16
      tunnel: 172.29.240.0/22
    
    used_ips:
      - 172.16.0.101,172.16.0.107
      - 172.29.240.101,172.29.240.107
  4. In the same file, we have the global_overrides section, which describes our Load Balancer VIP addresses, our network bridges, and details of the Neutron networking. This is a longer section, shown here as it appears in our environment. Note that we are pre-empting how things will be installed: here we set the IP addresses needed for a Load Balancer that does not yet exist in our environment. We will be using HA Proxy (installed in the next recipe), which will listen on these addresses:
    global_overrides:
      internal_lb_vip_address: 172.16.0.107
      external_lb_vip_address: 192.168.1.107 
      lb_name: haproxy
      tunnel_bridge: "br-vxlan"
      management_bridge: "br-mgmt"
      provider_networks:
        - network:
            group_binds:           
              - all_containers
              - hosts 
            type: "raw" 
            container_bridge: "br-mgmt" 
            container_interface: "eth1" 
            container_type: "veth" 
            ip_from_q: "management" 
            is_container_address: true 
            is_ssh_address: true 
        - network: 
            group_binds: 
              - neutron_linuxbridge_agent 
            container_bridge: "br-vxlan" 
            container_type: "veth" 
            container_interface: "eth10" 
            ip_from_q: "tunnel" 
            type: "vxlan" 
            range: "1:1000" 
            net_name: "vxlan" 
        - network: 
            group_binds: 
              - neutron_linuxbridge_agent 
            container_bridge: "br-vlan" 
            container_type: "veth" 
            container_interface: "eth11" 
            type: "vlan" 
            range: "1:1" 
            net_name: "vlan" 
        - network: 
            group_binds: 
              - neutron_linuxbridge_agent 
            container_bridge: "br-vlan" 
            container_type: "veth" 
            container_interface: "eth12" 
            host_bind_override: "eth12" 
            type: "flat" 
            net_name: "flat"
  5. After this section, we describe the servers that make up our OpenStack installation. In the same file, add the following details, which refer to our infrastructure hosts (the controller nodes). Each section lists the three servers we allocated as controller nodes. This first section covers the shared services, such as MariaDB and RabbitMQ:
    # Shared infrastructure parts 
    shared-infra_hosts: 
      controller-01: 
        ip: 172.16.0.101 
      controller-02: 
        ip: 172.16.0.102 
      controller-03: 
        ip: 172.16.0.103 
  6. This section is where the core OpenStack services, such as the Nova API, will get installed:
    # OpenStack infrastructure parts 
    os-infra_hosts: 
      controller-01: 
        ip: 172.16.0.101 
      controller-02: 
        ip: 172.16.0.102 
      controller-03: 
        ip: 172.16.0.103 
  7. The storage-infra section is where the Cinder storage API will be found:
    # OpenStack Storage infrastructure parts 
    storage-infra_hosts: 
      controller-01: 
        ip: 172.16.0.101 
      controller-02: 
        ip: 172.16.0.102 
      controller-03: 
        ip: 172.16.0.103  
  8. This describes where we will find the Keystone API:
    # Keystone Identity infrastructure parts 
    identity_hosts: 
      controller-01: 
        ip: 172.16.0.101 
      controller-02: 
        ip: 172.16.0.102 
      controller-03: 
        ip: 172.16.0.103  
  9. Next, we describe the servers that will be used for our Compute nodes (the hypervisor nodes). In the same file, add these details, which refer to our Compute hosts:
    # Compute Hosts 
    compute_hosts: 
      compute-01: 
        ip: 172.16.0.104 
      compute-02: 
        ip: 172.16.0.105  
  10. Next, we configure any Cinder storage nodes. Here we describe how the storage is configured, including the backend type (for example, NFS with NetApp, or LVM):
    storage_hosts: 
      storage: 
        ip: 172.16.0.106 
        container_vars: 
          cinder_backends: 
            limit_container_types: cinder_volume 
            lvm: 
              volume_group: cinder-volumes 
              volume_driver: cinder.volume.drivers.lvm.LVMISCSIDriver 
              volume_backend_name: LVM_iSCSI 
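
     As this configuration uses the LVM backend, the cinder-volumes volume group must already exist on the storage node. A minimal sanity check, run as root on storage (172.16.0.106), is the following; vgs fails if the volume group is missing:
    vgs cinder-volumes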
  11. As part of the Playbooks, we can install the Neutron services on a number of infrastructure nodes. We tell Ansible to deploy this software on the hosts at these addresses:
    network_hosts: 
      controller-01: 
        ip: 172.16.0.101 
      controller-02: 
        ip: 172.16.0.102 
      controller-03: 
        ip: 172.16.0.103  
  12. Define the repository hosts that are used for installation of the packages within the environment:
    # User defined Repository Hosts
    repo-infra_hosts: 
      controller-01: 
        ip: 172.16.0.101 
      controller-02: 
        ip: 172.16.0.102 
      controller-03: 
        ip: 172.16.0.103  
  13. Finally, we add the following section so that, when we install HA Proxy (which our cluster will sit behind), the Playbooks know where to install the service:
    haproxy_hosts: 
      haproxy: 
        ip: 172.16.0.107
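
     With openstack_user_config.yml complete, it is worth confirming that the file still parses as valid YAML before any Playbooks are run. A minimal check, assuming Python and PyYAML are available on the deployment host, is:
    python -c "import yaml; yaml.safe_load(open('/etc/openstack_deploy/openstack_user_config.yml'))"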
  14. The next file that we need to edit is /etc/openstack_deploy/user_variables.yml. This much smaller file describes OpenStack configuration options. For example, here we specify which backend store Glance uses, options for Nova, and options for Apache (which sits in front of Keystone):
    ## Glance Options
    # Set default_store to "swift" when using a Cloud Files or
    # Swift backend, or to "file" to use NFS or the local filesystem
    glance_default_store: file 
    glance_notification_driver: noop
    
    ## Nova options
    nova_virt_type: kvm 
    nova_cpu_allocation_ratio: 2.0 
    nova_ram_allocation_ratio: 1.0
    
    ## Apache SSL Settings 
    # These do not need to be configured unless you're creating
    # certificates for services running behind Apache
    # (currently, Horizon and Keystone).
    ssl_protocol: "ALL -SSLv2 -SSLv3" 
    # Cipher suite string from https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/ 
    ssl_cipher_suite: "ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS"
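
     If you modify the cipher suite string, you can verify that OpenSSL accepts it before deploying; openssl lists the matching ciphers, or exits with an error if the string is invalid:
    openssl ciphers -v 'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS'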
  15. The last file that we need to configure is /etc/openstack_deploy/user_secrets.yml, which holds the passphrases that services within OpenStack use. To populate this file securely with randomly generated strings, execute the following command:
    cd /opt/os-ansible-deployment
    scripts/pw-token-gen.py --file \
      /etc/openstack_deploy/user_secrets.yml
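
     To check that every secret was populated, look for any keys left without a value. Assuming the file contains only flat key: value pairs, as the shipped template does, the following command should print nothing:
    grep -E ':[[:space:]]*$' /etc/openstack_deploy/user_secrets.yml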
    

Congratulations! We're now ready to use the Ansible Playbooks to install OpenStack.

How it works...

All configuration management and automated system installation requires a lot of effort in the early stages, but this investment saves a great deal of time later on. Installing something as complex as OpenStack is no different.

After we fetched the Playbooks from GitHub, we configured the following files:

  • /etc/openstack_deploy/openstack_user_config.yml: This file describes our physical environment (which includes networking, the hosts that are being used, and what services those hosts will run)
  • /etc/openstack_deploy/user_variables.yml: This file describes the configuration of OpenStack services, such as the CPU contention ratio for KVM
  • /etc/openstack_deploy/user_secrets.yml: This file holds the service passphrases, such as the MariaDB root passphrase and the passphrase set for the Nova service user when it gets created in Keystone

Once these files have been edited to suit the environment, the Playbooks in the next recipe, Automating OpenStack installations using Ansible – running Playbooks, can be executed. Then, we can run through a hands-free installation of OpenStack.

See also

  • More information on configuring and running the OpenStack Ansible Deployment can be found in the Rackspace documentation at http://docs.rackspace.com/