Now that the hosts have been configured and all of the network interfaces are set up correctly, we can begin editing the configuration files that will be used when the Ansible Playbooks are run. In this recipe, we use Git to check out the OpenStack Ansible Deployment (OSAD) Playbooks, the same ones originally developed by Rackspace and used to deploy OpenStack for its customers. We will be using the latest release at the time of writing: Git tag 11.0.3, which refers to the Kilo release (Kilo begins with K, the 11th letter of the alphabet).
The environment we will configure is shown in the following diagram:
It is important that the previous recipe, Automating OpenStack installations using Ansible – host configuration, has been followed and that all the configured networks are working as expected.
The environment will consist of three Controller nodes, one Storage node, one HA Proxy node, and two Compute nodes. Identify which of these will be the HA Proxy server and log in to it as the root user. For convenience, this server will also be used to install OpenStack.
In this recipe, we configure the YAML files that are used by the Playbooks. There are three files to configure: openstack_user_config.yml, user_variables.yml, and user_secrets.yml. Together, these files describe our entire installation, from which server in our datacenter runs which OpenStack function to the passwords and features to enable in OpenStack.
First, check out the Playbooks into /opt/os-ansible-deployment. This is achieved with the following commands:

```
cd /opt
git clone -b 11.0.3 https://github.com/stackforge/os-ansible-deployment.git
```
Next, copy the example configuration files that ship with the Playbooks to /etc/openstack_deploy, as shown here:

```
cp -R /opt/os-ansible-deployment/etc/openstack_deploy /etc
```
We begin by editing /etc/openstack_deploy/openstack_user_config.yml, which describes our physical environment. The information here is very specific to our installation, describing the network ranges, the interfaces used, and the nodes that run each service. The first section lists the CIDRs used in our environment:

```yaml
---
cidr_networks:
  management: 172.16.0.0/16
  tunnel: 172.29.240.0/22

used_ips:
  - 172.16.0.101,172.16.0.107
  - 172.29.240.101,172.29.240.107
```
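The used_ips entries reserve address ranges (written as start,end pairs) that the Playbooks must not hand out to containers. As a sanity check, not part of OSAD itself, a few lines of Python with the standard ipaddress module can confirm that each reserved range falls inside a declared CIDR:

```python
import ipaddress

# Networks and reserved ranges as declared in openstack_user_config.yml
cidr_networks = {
    "management": "172.16.0.0/16",
    "tunnel": "172.29.240.0/22",
}
used_ips = [
    "172.16.0.101,172.16.0.107",
    "172.29.240.101,172.29.240.107",
]

def range_in_declared_networks(entry):
    """Return True if both ends of a 'start,end' range sit in one declared CIDR."""
    start, end = (ipaddress.ip_address(ip) for ip in entry.split(","))
    return any(
        start in ipaddress.ip_network(cidr) and end in ipaddress.ip_network(cidr)
        for cidr in cidr_networks.values()
    )

for entry in used_ips:
    assert range_in_declared_networks(entry), "range outside declared networks: " + entry
```

A typo in either section (for example, a reserved range in a network that was never declared) shows up immediately rather than partway through a Playbook run.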
This file can be found online at https://github.com/OpenStackCookbook/OpenStackCookbook/blob/master/ansible-openstack/openstack_user_config.yml.
We then move on to the global_overrides section, which describes our load balancer VIP addresses, our network bridges, and the details of the Neutron networking. This is a longer section. Note that we are pre-empting how things will be installed: here, we set the IP addresses needed for a load balancer that does not yet exist in our environment. We will be using HA Proxy (installed in the next recipe), which will use these addresses:

```yaml
global_overrides:
  internal_lb_vip_address: 172.16.0.107
  external_lb_vip_address: 192.168.1.107
  lb_name: haproxy
  tunnel_bridge: "br-vxlan"
  management_bridge: "br-mgmt"
  provider_networks:
    - network:
        group_binds:
          - all_containers
          - hosts
        type: "raw"
        container_bridge: "br-mgmt"
        container_interface: "eth1"
        container_type: "veth"
        ip_from_q: "management"
        is_container_address: true
        is_ssh_address: true
    - network:
        group_binds:
          - neutron_linuxbridge_agent
        container_bridge: "br-vxlan"
        container_type: "veth"
        container_interface: "eth10"
        ip_from_q: "tunnel"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
    - network:
        group_binds:
          - neutron_linuxbridge_agent
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth11"
        type: "vlan"
        range: "1:1"
        net_name: "vlan"
    - network:
        group_binds:
          - neutron_linuxbridge_agent
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth12"
        host_bind_override: "eth12"
        type: "flat"
        net_name: "flat"
```
The remainder of the file lists which hosts run which services. The shared infrastructure parts run on our three controllers:

```yaml
# Shared infrastructure parts
shared-infra_hosts:
  controller-01:
    ip: 172.16.0.101
  controller-02:
    ip: 172.16.0.102
  controller-03:
    ip: 172.16.0.103
```
The OpenStack infrastructure parts run on the same controllers:

```yaml
# OpenStack infrastructure parts
os-infra_hosts:
  controller-01:
    ip: 172.16.0.101
  controller-02:
    ip: 172.16.0.102
  controller-03:
    ip: 172.16.0.103
```
The storage-infra_hosts section is where the Cinder storage API will be found:

```yaml
# OpenStack Storage infrastructure parts
storage-infra_hosts:
  controller-01:
    ip: 172.16.0.101
  controller-02:
    ip: 172.16.0.102
  controller-03:
    ip: 172.16.0.103
```
The Keystone identity services also run on the controllers:

```yaml
# Keystone Identity infrastructure parts
identity_hosts:
  controller-01:
    ip: 172.16.0.101
  controller-02:
    ip: 172.16.0.102
  controller-03:
    ip: 172.16.0.103
```
The Compute hosts are listed next:

```yaml
# Compute Hosts
compute_hosts:
  compute-01:
    ip: 172.16.0.104
  compute-02:
    ip: 172.16.0.105
```
The storage host carries the Cinder volume service, with an LVM backend described under container_vars:

```yaml
storage_hosts:
  storage:
    ip: 172.16.0.106
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        lvm:
          volume_group: cinder-volumes
          volume_driver: cinder.volume.drivers.lvm.LVMISCSIDriver
          volume_backend_name: LVM_iSCSI
```
The Neutron network services run on the controllers:

```yaml
network_hosts:
  controller-01:
    ip: 172.16.0.101
  controller-02:
    ip: 172.16.0.102
  controller-03:
    ip: 172.16.0.103
```
The repository hosts are also the controllers:

```yaml
# User defined Repository Hosts
repo-infra_hosts:
  controller-01:
    ip: 172.16.0.101
  controller-02:
    ip: 172.16.0.102
  controller-03:
    ip: 172.16.0.103
```
Finally, we list the HA Proxy host:

```yaml
haproxy_hosts:
  haproxy:
    ip: 172.16.0.107
```
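Notice that the same three controllers appear in every controller-side section, while the storage, compute, and HA Proxy hosts are kept separate, and no IP address is shared between two different host names. A minimal sketch (our own simplified representation, not the Playbooks' inventory format) that mirrors this layout and checks for accidental IP clashes:

```python
# Host groups mirroring the sections above (simplified representation)
host_groups = {
    "shared-infra_hosts": {"controller-01": "172.16.0.101",
                           "controller-02": "172.16.0.102",
                           "controller-03": "172.16.0.103"},
    "compute_hosts": {"compute-01": "172.16.0.104",
                      "compute-02": "172.16.0.105"},
    "storage_hosts": {"storage": "172.16.0.106"},
    "haproxy_hosts": {"haproxy": "172.16.0.107"},
}

def find_ip_conflicts(groups):
    """Return any IPs assigned to more than one distinct host name."""
    ip_to_hosts = {}
    for hosts in groups.values():
        for name, ip in hosts.items():
            ip_to_hosts.setdefault(ip, set()).add(name)
    return {ip: names for ip, names in ip_to_hosts.items() if len(names) > 1}

print(find_ip_conflicts(host_groups))  # {} — no conflicts in this layout
```

A check like this is useful after hand-editing the file, since a duplicated IP only surfaces much later as a confusing Playbook failure.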
Next, we edit the /etc/openstack_deploy/user_variables.yml file. This much smaller file describes OpenStack configuration options. For example, we specify here which backend filesystem Glance uses, options for Nova, as well as options for Apache (which sits in front of Keystone):

```yaml
## Glance Options
# Set default_store to "swift" if using Cloud Files or a Swift
# backend, or to "file" to use NFS or the local filesystem
glance_default_store: file
glance_notification_driver: noop

## Nova options
nova_virt_type: kvm
nova_cpu_allocation_ratio: 2.0
nova_ram_allocation_ratio: 1.0

## Apache SSL Settings
# These do not need to be configured unless you're creating
# certificates for services running behind Apache (currently,
# Horizon and Keystone).
ssl_protocol: "ALL -SSLv2 -SSLv3"
# Cipher suite string from
# https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
ssl_cipher_suite: "ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS"
```
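The allocation ratios control overcommit: nova_cpu_allocation_ratio: 2.0 lets the Nova scheduler place twice as many vCPUs as a host has physical cores, while a RAM ratio of 1.0 disables memory overcommit. A worked example for a hypothetical 16-core, 64 GB compute node:

```python
def schedulable_capacity(physical_cores, ram_gb, cpu_ratio=2.0, ram_ratio=1.0):
    """Capacity the Nova scheduler sees after applying allocation ratios."""
    return int(physical_cores * cpu_ratio), int(ram_gb * ram_ratio)

# With the ratios set in user_variables.yml above, a 16-core/64 GB node
# can schedule 32 vCPUs but only its real 64 GB of RAM.
vcpus, ram = schedulable_capacity(16, 64)
print(vcpus, ram)  # 32 64
```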
This file can be found online at https://github.com/OpenStackCookbook/OpenStackCookbook/blob/master/ansible-openstack/user_variables.yml.
Finally, we edit the /etc/openstack_deploy/user_secrets.yml file, which holds the passphrases the services use within OpenStack. To configure this securely for our environment, and to populate it with randomly generated strings, execute the following commands:

```
cd /opt/os-ansible-deployment
scripts/pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml
```
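The pw-token-gen.py script walks the YAML file and fills in any empty values with randomly generated strings. A rough, hypothetical illustration of the idea using Python's secrets module (the real script's implementation and the key names shown here will differ):

```python
import secrets

def fill_secrets(settings, length=32):
    """Replace empty values with random hex tokens, leaving set values alone."""
    return {key: (value if value else secrets.token_hex(length // 2))
            for key, value in settings.items()}

# Hypothetical entries standing in for the keys in user_secrets.yml
user_secrets = {"keystone_auth_admin_password": None,
                "galera_root_password": None}

filled = fill_secrets(user_secrets)
assert all(len(value) == 32 for value in filled.values())
```

Generating the values rather than choosing them by hand means every deployment gets unique, strong passphrases with no extra effort.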
Congratulations! We're now ready to use the Ansible Playbooks to install OpenStack.
All configuration management and automated system installations require significant effort in the first few stages, which saves a great deal of time later on. Installing something as complex as OpenStack is no different.
After we fetched the Playbooks from GitHub, we configured the following files:
- /etc/openstack_deploy/openstack_user_config.yml: This file describes our physical environment, including the networking, the hosts being used, and the services those hosts will run.
- /etc/openstack_deploy/user_variables.yml: This file describes the configuration of the OpenStack services, such as the CPU contention ratio for KVM.
- /etc/openstack_deploy/user_secrets.yml: This file holds the service passphrases, such as the MariaDB root passphrase and the Nova service passphrase used when the service gets created in Keystone.

Once these files have been edited to suit the environment, the Playbooks in the next recipe, Automating OpenStack installations using Ansible – running Playbooks, can be executed. Then, we can run through a hands-free installation of OpenStack.