Configuring HA Proxy for high availability

The steps in the preceding recipe configure a two-node HA Proxy setup that we can use as a MariaDB endpoint to place in our OpenStack configuration files. Having a single HA Proxy acting as a Load Balancer in front of a highly available multimaster cluster is not recommended, as the Load Balancer then becomes our single point of failure. To overcome this, we can install and configure keepalived, which gives us the ability to share a FloatingIP address between our HA Proxy servers. This allows us to use the FloatingIP address as the database endpoint for our OpenStack services.

Getting ready

Log in to the two HA Proxy servers created in the previous recipe as root.

How to do it...

As we have two identical HA Proxy servers running, one at address 172.16.0.248 and another at 172.16.0.249, we will assign a floating "virtual IP" address of 172.16.0.251, which attaches itself to one of the servers and switches over to the other in the event of a failure. To do this, follow these steps:

  1. Having a single HA Proxy server sitting in front of our multimaster MariaDB cluster makes the HA Proxy server our single point of failure. To overcome this, we use a simple solution provided by keepalived for Virtual Router Redundancy Protocol (VRRP) management. To do this, we need to install keepalived on both of our HA Proxy servers. As we did before, we will configure one server, and then repeat the steps for our second server. We do this as follows:
    sudo apt-get update
    sudo apt-get install keepalived
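
    If you want to confirm the package installed correctly before continuing, an optional check (a simple dpkg query, not part of the original recipe) can be run on each server:
    dpkg -s keepalived | grep Version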
    
  2. To allow running software to bind to an address that does not physically exist on our server, we add an option to sysctl.conf. Add the following line to /etc/sysctl.conf:
    net.ipv4.ip_nonlocal_bind=1
  3. To pick up the change, issue the following command:
    sudo sysctl -p
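
    To verify that the setting is now active, you can query it back (this check is an optional addition); it should report a value of 1:
    sysctl net.ipv4.ip_nonlocal_bind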
    
  4. We can now configure keepalived. To do this, we create a /etc/keepalived/keepalived.conf file with the following contents:
    vrrp_script chk_haproxy {
      script "killall -0 haproxy" # verify the pid exists ornot
      interval 2        # check every 2 seconds
      weight 2          # add 2 points if OK
    }
    
    vrrp_instance VI_1 {
      interface eth1    # interface to monitor
      state MASTER
      virtual_router_id 51  # Assign one ID for this router
      priority 101          # 101 on master, 100 on backup
      virtual_ipaddress {
        172.16.0.251   # the virtual IP
      }
      track_script {
        chk_haproxy
      }
    }
  5. We can now start up keepalived on this server by issuing the following command:
    sudo service keepalived start
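
    Because this first server is our MASTER node, it should claim the FloatingIP almost immediately. As an optional check (assuming eth1 is the monitored interface, as in the configuration above), confirm that 172.16.0.251 is now attached:
    ip addr show eth1 | grep 172.16.0.251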
    
  6. With keepalived now running on our first HA Proxy server, which we have designated as the master node, we repeat the previous steps on our second HA Proxy server. Only two changes are needed in the keepalived.conf file (state should be set to BACKUP and priority should be set to 100), giving the complete file on our second host the following contents:
    vrrp_script chk_haproxy {
      script "killall -0 haproxy" # verify the pid exists or not
      interval 2       # check every 2 seconds
      weight 2         # add 2 points if OK
    }
    
    vrrp_instance VI_1 {
      interface eth1   # interface to monitor
      state BACKUP
      virtual_router_id 51  # Assign one ID for this router
      priority 100          # 101 on master, 100 on backup
      virtual_ipaddress {
        172.16.0.251  # the virtual IP
      }
      track_script {
        chk_haproxy
      }
    }
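
    If you want to double-check that state and priority are the only lines that differ between the two servers, an optional comparison such as the following can be run from a machine with root SSH access to both HA Proxy servers (the addresses are our two servers from this recipe):
    diff <(ssh root@172.16.0.248 cat /etc/keepalived/keepalived.conf) \
         <(ssh root@172.16.0.249 cat /etc/keepalived/keepalived.conf)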
  7. Start up keepalived on this second node, and the two servers will act in coordination with each other. If you power off the first HA Proxy server, the second server picks up the FloatingIP address 172.16.0.251 within a couple of seconds, and new connections can be made to our MariaDB cluster without disruption. We can test whether the HA Proxy and MariaDB with Galera setup is working by connecting to the database cluster with the following command:
    mysql -uroot -popenstack -h 172.16.0.251
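
    As an optional way to watch failover happen (a hypothetical test loop, not part of the original recipe), you can repeatedly query the cluster through the FloatingIP while powering off the MASTER node; the reported backend hostname changes, but connections keep succeeding after a brief pause:
    while true; do
      mysql -uroot -popenstack -h 172.16.0.251 -e "SELECT @@hostname;"
      sleep 1
    done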
    
  8. To check whether keepalived is working correctly, view the messages in /var/log/syslog on each of our nodes. Execute the following command:
    sudo grep VRRP /var/log/syslog
    

    On the node that currently has the FloatingIP address, the VRRP messages show the VI_1 instance transitioning to the MASTER state.

    On the node that doesn't have the FloatingIP assigned, the VRRP messages show the instance entering the BACKUP state.

OpenStack backend configuration using FloatingIP address

With both HA Proxy servers running the same HA Proxy configuration and both running keepalived, we can use the configured virtual_ipaddress (our FloatingIP address) as the database address in our configuration files. In OpenStack, we would identify each of the configuration files that refer to our database and change the following configuration to use our FloatingIP address of 172.16.0.251 where appropriate:

  1. First, we must ensure that our new Galera cluster has all the usernames and passwords that we need for our OpenStack environment. In the test vagrant environment accompanying the book at https://github.com/OpenStackCookbook/OpenStackCookbook.git, we configure our database usernames to be the same as the service name, for example, neutron, and the password to be openstack. To replicate this, execute the following commands to create all users and passwords:
    USERS="nova
    neutron
    keystone
    glance
    cinder
    heat"
    
    HAPROXIES="172.16.0.248
    172.16.0.249"
    
    for U in ${USERS}
    do
      for H in ${HAPROXIES}
      do
        mysql -u root -h localhost -e "GRANT ALL ON *.* TO '${U}'@'${H}' IDENTIFIED BY 'openstack';"
      done
    done
    

    Tip

    It is recommended that you use stronger, random passwords in production.
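
    To verify that the grants work through the load-balanced endpoint, an optional check such as the following (using the neutron user as an example) can be run from any host that can reach the FloatingIP:
    mysql -uneutron -popenstack -h 172.16.0.251 -e "SHOW DATABASES;"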

  2. We can now use these details to replace the SQL connection lines in our configuration files used in OpenStack. Some examples are as follows:
    # Nova
    # /etc/nova/nova.conf
    sql_connection = mysql://nova:openstack@172.16.0.251/nova

    # Keystone
    # /etc/keystone/keystone.conf
    connection = mysql://keystone:openstack@172.16.0.251/keystone

    # Glance
    # /etc/glance/glance-registry.conf
    connection = mysql://glance:openstack@172.16.0.251/glance

    # Neutron
    # /etc/neutron/neutron.conf
    [DATABASE]
    connection = mysql://neutron:openstack@172.16.0.251/neutron

    # Cinder
    # /etc/cinder/cinder.conf
    connection = mysql://cinder:openstack@172.16.0.251/cinder
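
    After updating each file, restart the corresponding service so that it picks up the new database endpoint, for example (assuming the standard Ubuntu service name for the Nova API service):
    sudo service nova-api restart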

How it works...

We install and configure keepalived, a service that gives us the ability to have an IP address float between our HA Proxy servers. In the event of a failure, the remaining running server takes over the IP address.

We configure keepalived by editing the /etc/keepalived/keepalived.conf file. This file looks very similar on both nodes, with one difference: we specify which node is the MASTER and which is the BACKUP.

For the MASTER node (which can be any nominated server), we chose the first HA Proxy server. Its configuration is illustrated in the following code:

vrrp_instance VI_1 {
  interface eth1    # interface to monitor
  state MASTER
  virtual_router_id 51  # Assign one ID for this router
  priority 101          # 101 on master, 100 on backup
  virtual_ipaddress {
    172.16.0.251   # the virtual IP
  }
}

On the BACKUP node, the code is as follows:

vrrp_instance VI_1 {
  interface eth1    # interface to monitor
  state BACKUP
  virtual_router_id 51  # Assign one ID for this router
  priority 100          # 101 on master, 100 on backup
  virtual_ipaddress {
    172.16.0.251   # the virtual IP
  }
}

In our example, the IP address that floats between our servers is 172.16.0.251. This is configured as shown in the preceding virtual_ipaddress code snippet.

When we start keepalived on both servers, the MASTER node gets the 172.16.0.251 IP address. If we power this host off, or it fails unexpectedly, the other HA Proxy server inherits this IP address. This is what makes our HA Proxy layer itself highly available.

With this in place, we then ensure that our new database cluster has all the relevant usernames and passwords configured, and we replace the database references in our OpenStack configuration files with the FloatingIP address of our new MariaDB cluster.
