This recipe describes two nodes running both Glance and Keystone, controlled by Pacemaker with Corosync in active/passive mode, which allows for the failure of a single node. In a production environment, it is recommended that a cluster consist of at least three nodes to ensure resiliency and consistency in the case of a single node failure.
For this recipe, we will assume the previous recipe, Installing and configuring Pacemaker with Corosync, has been followed to give us two controllers, called controller1 and controller2, with a FloatingIP address of 172.16.0.253 provided by Corosync.
To increase the resilience of OpenStack services, carry out the following steps:
With Keystone running on controller1, we should be able to query Keystone using both its own IP address (172.16.0.111) and the FloatingIP (172.16.0.253) from a client that has access to the OpenStack environment, using the following code:

# Assigned IP (172.16.0.111)
export OS_TENANT_NAME=cookbook
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=https://172.16.0.111:5000/v2.0/
export OS_KEY=/vagrant/cakey-controller1.pem
export OS_CACERT=/vagrant/ca-controller1.pem
keystone user-list

# FloatingIP (provided by Corosync)
export OS_AUTH_URL=https://172.16.0.253:5000/v2.0/
keystone user-list
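Switching between the node's own endpoint and the FloatingIP only requires changing OS_AUTH_URL, so the exports above can be wrapped in a small helper. The following is a minimal sketch: keystone_env is a name of our own, and the IPs and credentials simply mirror the recipe's examples.

```shell
# Sketch: repoint the OpenStack client environment at a chosen
# Keystone endpoint (a node's own IP or the FloatingIP).
keystone_env() {
    endpoint=$1
    export OS_TENANT_NAME=cookbook
    export OS_USERNAME=admin
    export OS_PASSWORD=openstack
    export OS_AUTH_URL="https://${endpoint}:5000/v2.0/"
}

# Point at the first node, then swap to the FloatingIP without
# re-exporting the rest of the credentials.
keystone_env 172.16.0.111
echo "$OS_AUTH_URL"    # https://172.16.0.111:5000/v2.0/
keystone_env 172.16.0.253
echo "$OS_AUTH_URL"    # https://172.16.0.253:5000/v2.0/
```

After calling the helper, keystone user-list runs against whichever endpoint was last selected.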
Copy the /etc/keystone/keystone.conf file from the first host, put it in place on the second node, and then restart the keystone service. There is no further work required, as the database was already populated with the endpoints and users when the install was completed on the first node. Restart the service to connect to the database, as follows:

sudo stop keystone
sudo start keystone
We can now query the second node's keystone service on its own IP address:

# Second Node
export OS_AUTH_URL=http://172.16.0.112:5000/v2.0/
keystone user-list
For Glance to be able to run across multiple nodes, it must be configured with a shared storage backend (such as Swift) and be backed by a database backend (such as MySQL). On the first host, install and configure Glance, as described in Chapter 2, Glance – OpenStack Image Service. After that, follow these steps:
On the second host, install the Glance packages:

sudo apt-get install glance python-swift
Copy the contents of the /etc/glance directory from the first host to the second host, and then start the glance-api and glance-registry services on both nodes:

sudo start glance-api
sudo start glance-registry
We can now interrogate Glance on either node's own IP address, as well as on the FloatingIP address that is assigned to our first node, by using this code:

# First node
glance -I admin -K openstack -T cookbook -N http://172.16.0.111:5000/v2.0 index

# Second node
glance -I admin -K openstack -T cookbook -N http://172.16.0.112:5000/v2.0 index

# FloatingIP
glance -I admin -K openstack -T cookbook -N http://172.16.0.253:5000/v2.0 index
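The three checks above can be collapsed into one loop. The following sketch dry-runs by default: GLANCE_CMD is a parameter of our own that defaults to echo (so the loop just prints the commands); set GLANCE_CMD=glance on a machine with the client installed to run the real checks.

```shell
# Sketch: iterate the same image-listing check over each endpoint.
# GLANCE_CMD defaults to "echo" for a dry run; set GLANCE_CMD=glance
# to query the real service.
GLANCE_CMD=${GLANCE_CMD:-echo}
for ep in 172.16.0.111 172.16.0.112 172.16.0.253; do
    echo "== Glance via ${ep} =="
    $GLANCE_CMD -I admin -K openstack -T cookbook \
        -N "http://${ep}:5000/v2.0" index
done
```

All three invocations should return the same image list, since both nodes share the same storage and database backends.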
With Keystone and Glance running on both nodes, we can now configure Pacemaker to take control of this service so that we can ensure Keystone and Glance are running on the appropriate node when the other node fails. The steps are as follows:
On both nodes, stop Upstart from starting these services automatically by creating the files /etc/init/keystone.override, /etc/init/glance-api.override, and /etc/init/glance-registry.override, each containing just the keyword manual. Then download the OCF resource agents that allow Pacemaker to control the services, and put them in place:

wget https://raw.github.com/madkiss/keystone/ha/tools/ocf/keystone
wget https://raw.github.com/madkiss/glance/ha/tools/ocf/glance-api
wget https://raw.github.com/madkiss/glance/ha/tools/ocf/glance-registry
sudo mkdir -p /usr/lib/ocf/resource.d/openstack
sudo cp keystone glance-api glance-registry /usr/lib/ocf/resource.d/openstack
sudo chmod 755 /usr/lib/ocf/resource.d/openstack/*
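Creating the three override files is repetitive, so it can be scripted. This is a hypothetical helper of our own (write_overrides is not part of any tool); on a controller you would run it as root against /etc/init, and here it is demonstrated against a scratch directory.

```shell
# Sketch: write the "manual" Upstart override files so the services
# are left for Pacemaker to manage rather than started at boot.
write_overrides() {
    dir=$1
    for svc in keystone glance-api glance-registry; do
        printf 'manual\n' > "${dir}/${svc}.override"
    done
}

# Demonstrate against a scratch directory rather than /etc/init.
demo=$(mktemp -d)
write_overrides "$demo"
ls "$demo"
```

On a real node, the call would be made with root privileges and /etc/init as the target directory.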
Verify that Pacemaker can see the new agents by listing the OCF resource agents in the openstack namespace:

sudo crm ra list ocf openstack
sudo crm cib new conf-keystone
sudo crm configure property stonith-enabled=false
sudo crm configure property no-quorum-policy=ignore
sudo crm configure primitive p_keystone ocf:openstack:keystone \
    params config="/etc/keystone/keystone.conf" \
    os_auth_url="http://localhost:5000/v2.0/" \
    os_password="openstack" \
    os_tenant_name="cookbook" \
    os_username="admin" \
    user="keystone" \
    client_binary="/usr/bin/keystone" \
    op monitor interval="5s" timeout="5s"
sudo crm cib use live
sudo crm cib commit conf-keystone
sudo crm cib new conf-glance-api
sudo crm configure property stonith-enabled=false
sudo crm configure property no-quorum-policy=ignore
sudo crm configure primitive p_glance_api ocf:openstack:glance-api \
    params config="/etc/glance/glance-api.conf" \
    os_auth_url="http://localhost:5000/v2.0/" \
    os_password="openstack" \
    os_tenant_name="cookbook" \
    os_username="admin" \
    user="glance" \
    client_binary="/usr/bin/glance" \
    op monitor interval="5s" timeout="5s"
sudo crm cib use live
sudo crm cib commit conf-glance-api

sudo crm cib new conf-glance-registry
sudo crm configure property stonith-enabled=false
sudo crm configure property no-quorum-policy=ignore
sudo crm configure primitive p_glance_registry ocf:openstack:glance-registry \
    params config="/etc/glance/glance-registry.conf" \
    os_auth_url="http://localhost:5000/v2.0/" \
    os_password="openstack" \
    os_tenant_name="cookbook" \
    os_username="admin" \
    user="glance" \
    op monitor interval="5s" timeout="5s"
sudo crm cib use live
sudo crm cib commit conf-glance-registry
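Because crm accepts configuration on standard input, the primitives above can also be kept in a single script and loaded in one shot with crm configure load update -. The following is a sketch: gen_crm_config is a name of our own, and the parameter values simply mirror the recipe.

```shell
# Sketch: emit the Pacemaker configuration as text; pipe it into
#   gen_crm_config | sudo crm configure load update -
# on a cluster node to apply it in a single step.
gen_crm_config() {
    cat <<'EOF'
property stonith-enabled=false
property no-quorum-policy=ignore
primitive p_keystone ocf:openstack:keystone \
    params config="/etc/keystone/keystone.conf" \
        os_auth_url="http://localhost:5000/v2.0/" \
        os_password="openstack" os_tenant_name="cookbook" \
        os_username="admin" user="keystone" \
        client_binary="/usr/bin/keystone" \
    op monitor interval="5s" timeout="5s"
primitive p_glance_api ocf:openstack:glance-api \
    params config="/etc/glance/glance-api.conf" \
        os_auth_url="http://localhost:5000/v2.0/" \
        os_password="openstack" os_tenant_name="cookbook" \
        os_username="admin" user="glance" \
        client_binary="/usr/bin/glance" \
    op monitor interval="5s" timeout="5s"
EOF
}

gen_crm_config
```

Keeping the configuration in a script like this makes it easy to version-control and to rebuild the cluster configuration from scratch.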
Check the status of the cluster using the crm_mon tool:

sudo crm_mon -1

This brings back something similar to the following output:
Pacemaker is now controlling the FloatingIP address 172.16.0.253 for both the Glance and Keystone services. With this in place, we can bring down the interface on our first node and still have our Keystone and Glance services available on this FloatingIP address. We now have Keystone and Glance running on two separate nodes, where a node can fail and services will still be available.
The configuration of Pacemaker is predominantly done with the crm tool, which allows us to script the configuration. Invoked on its own, crm presents an interactive shell that we can use to edit, add, and remove services, as well as query the status of the cluster. It is a very powerful tool for controlling an equally powerful cluster manager.
With both nodes running Keystone and Glance, and with Pacemaker and Corosync running and accessible on the FloatingIP provided by Corosync, we configure Pacemaker to control the running of the Keystone and Glance services by using an Open Cluster Framework (OCF) agent written specifically for this purpose. The OCF agent uses a number of parameters that will be familiar to us: it requires the same username, password, tenant, and endpoint URL that we would use in a client to access that service.
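To see how Pacemaker learns whether a service is healthy, the following is a stripped-down illustration of the shape of an OCF agent's monitor action. The real madkiss keystone and glance agents used above also validate their parameters and query the service's API; this pidfile check is only the skeleton of the idea, with names of our own.

```shell
# Sketch: an OCF-style monitor action. Pacemaker calls monitor
# periodically and interprets the exit code: 0 (OCF_SUCCESS) means
# the resource is running, 7 (OCF_NOT_RUNNING) means it is not.
OCF_SUCCESS=0
OCF_NOT_RUNNING=7

monitor() {
    pidfile=$1
    # No pidfile at all: the daemon was never started here.
    [ -f "$pidfile" ] || return $OCF_NOT_RUNNING
    # Signal 0 probes whether the recorded PID still exists.
    if kill -0 "$(cat "$pidfile")" 2>/dev/null; then
        return $OCF_SUCCESS
    fi
    # Stale pidfile: the process has died.
    return $OCF_NOT_RUNNING
}
```

When monitor reports OCF_NOT_RUNNING on the active node, Pacemaker restarts the resource or moves it (and the FloatingIP) to the surviving node.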
A monitor interval and timeout of 5 seconds were set for each agent, so that a failure is detected quickly and the FloatingIP address moves to another host with minimal delay.
After this configuration, we have a Keystone and Glance active/passive configuration, as shown in the diagram: