Using Container Synchronization between two Swift Clusters

Replicating container content from one Swift cluster to another in a remote location is a useful feature for disaster recovery and for running active/active datacenters. With this feature, a user uploads objects to a particular container as normal, and those objects are automatically replicated to a nominated container in a remote cluster.

Getting ready

Ensure you are logged in to both swift proxy servers that will be used for the replication. An example of this feature can be found with the Swift Vagrant environment at https://github.com/OpenStackCookbook/VagrantSwift. If you created these nodes with this environment, ensure that both swift and swift2 are running and that you have a shell on each by executing the following commands:

vagrant ssh swift
vagrant ssh swift2 

How to do it...

To set up Container Sync replication, carry out the following steps:

  1. On both Proxy Servers, edit /etc/swift/proxy-server.conf to add container_sync to the pipeline and define its filter section:
    [pipeline:main]
    # Order of execution of modules defined below
    pipeline = catch_errors healthcheck cache container_sync authtoken keystone proxy-server
    [filter:container_sync]
    use = egg:swift#container_sync
  2. On each Proxy Server, create /etc/swift/container-sync-realms.conf with the following contents:
    [realm1]
    key = realm1key
    cluster_swift = http://swift:8080/v1/
    cluster_swift2 = http://swift2:8080/v1/
  3. On each Proxy Server, issue the following command to pick up the changes:
    swift-init proxy-server restart
    
  4. On the first Swift cluster (swift), identify the account on the second cluster (swift2) to which the first cluster will sync:
    swift --insecure -V2.0 -A https://swift2:5000/v2.0 -U cookbook:admin -K openstack stat
    

    The preceding command shows an output similar to the following (note the Account: line):

    Account: AUTH_d81683a9a2dd46cf9cac88c5b8eaca1a
    (remaining stat output truncated)

    Tip

    Note that we're using the --insecure flag on this command because swift2 is running a self-signed certificate and we don't have access to the CA file generated on our swift node. If you copy this file across so that it is accessible, you can omit the flag.

  5. Set up a container called container1 on the first swift cluster that synchronizes content to a container called container2 on the second cluster, swift2:
    swift -V2.0 -A https://controller:5000/v2.0 \
        -U cookbook:admin -K openstack post \
        -t '//realm1/swift2/AUTH_d81683a9a2dd46cf9cac88c5b8eaca1a/container2' \
        -k 'myKey' container1
    
  6. Set up the container2 container referenced in the previous step on the second cluster; it can also synchronize content back to container1 on the first cluster (two-way sync). Note that we're running this command from the node called swift and remotely creating the container on swift2:
    swift --insecure -V2.0 -A https://swift2:5000/v2.0 \
        -U cookbook:admin \
        -K openstack \
        post container2
    
  7. Upload a file to container1 on the first cluster, swift:
    swift -V2.0 -A https://controller:5000/v2.0 \
        -U cookbook:admin -K openstack \
        upload container1 my_example_file
    
  8. You can now view the contents of container2 on swift2; it will list the same files as container1 on swift.

    Tip

    If the file hasn't yet appeared in container2 on the second swift cluster, trigger a sync run manually:

    swift-init container-sync once

How it works...

Container Synchronization is an excellent feature when we run multiple datacenters and our disaster recovery plan requires data to be kept consistent in each of them. Container sync operates at the container level, so we can control precisely which data is synced, and where.

To enable this feature, we modify the pipeline in the /etc/swift/proxy-server.conf file to notify Swift to run Container Sync jobs.
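Ordering matters here: container_sync should appear before the auth middleware in the pipeline, since sync requests authenticate with the realms' shared keys rather than a normal token. A quick sanity check of a pipeline line can be sketched as follows (illustrative Python using plain string splitting, not Swift's own config loader):

```python
def container_sync_before_auth(pipeline_line):
    """Return True if container_sync precedes any auth middleware
    in a proxy-server.conf pipeline value."""
    # e.g. "pipeline = catch_errors healthcheck cache container_sync authtoken keystone proxy-server"
    modules = pipeline_line.split("=", 1)[1].split()
    auth_filters = {"authtoken", "keystone", "tempauth"}
    auth_positions = [i for i, m in enumerate(modules) if m in auth_filters]
    return ("container_sync" in modules
            and bool(auth_positions)
            and modules.index("container_sync") < min(auth_positions))

print(container_sync_before_auth(
    "pipeline = catch_errors healthcheck cache container_sync "
    "authtoken keystone proxy-server"))  # True
```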

Once configured, we create a file called /etc/swift/container-sync-realms.conf that has the following structure:

[realm_name]
key = realm_name_key
cluster_name_of_cluster = http://swift1_proxy_server:8080/v1/
cluster_name_of_cluster2 = http://swift2_proxy_server:8080/v1/
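The file is standard INI: each section is a realm, key holds that realm's shared key, and each cluster_<name> option maps a cluster name to its proxy endpoint. Its structure can be sketched with Python's configparser (illustrative only; Swift reads this file with its own helper):

```python
import configparser

conf = """
[realm1]
key = realm1key
cluster_swift = http://swift:8080/v1/
cluster_swift2 = http://swift2:8080/v1/
"""

parser = configparser.ConfigParser()
parser.read_string(conf)

# Build a realm -> (key, {cluster name: endpoint}) view of the file.
for realm in parser.sections():
    key = parser[realm]["key"]
    clusters = {opt[len("cluster_"):]: parser[realm][opt]
                for opt in parser[realm] if opt.startswith("cluster_")}
    print(realm, key, clusters)
```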

This structure is important and is referenced when we create the synchronization on the containers shown in the following syntax:

swift post \
    -t '//realm_name/name_of_cluster2/AUTH_UUID/container_name' \
    -k 'mykey' container_name_to_be_syncd
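The -t value is simply a path assembled from the realm name, the remote cluster's name within that realm, the remote account, and the target container. A helper to build it can be sketched as follows (sync_to is a hypothetical helper for illustration, not part of the swift client):

```python
def sync_to(realm, cluster, account, container):
    """Build the //realm/cluster/account/container value passed to -t."""
    return "//%s/%s/%s/%s" % (realm, cluster, account, container)

print(sync_to("realm1", "swift2",
              "AUTH_d81683a9a2dd46cf9cac88c5b8eaca1a", "container2"))
# //realm1/swift2/AUTH_d81683a9a2dd46cf9cac88c5b8eaca1a/container2
```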

The AUTH_UUID comes from the following command, which gives us the Swift account associated with the user on the remote (receiving) cluster:

swift -V2.0 -A https://cluster2:5000/v2.0 \
    -U tenant:user -K password \
    stat

The key supplied with -k, together with the realm key in the /etc/swift/container-sync-realms.conf file, forms the shared secret used to authenticate the synchronization requests between the containers.
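In practice, the sending cluster signs each sync request with an HMAC derived from both keys, and the receiving cluster recomputes the signature and accepts the request only on a match. The following is a simplified sketch of that idea (illustrative only; it approximates rather than reproduces Swift's exact signature, which also covers the request method, path, timestamp, and a nonce):

```python
import hmac
from hashlib import sha1

def sign(realm_key, user_key, method, path, timestamp, nonce):
    # Simplified container-sync-style signature: HMAC-SHA1 keyed by the
    # realm key, over the request details plus the per-container key.
    msg = "\n".join([method, path, timestamp, nonce, user_key]).encode()
    return hmac.new(realm_key.encode(), msg, sha1).hexdigest()

# Both clusters hold both keys, so their signatures agree; a party
# without the keys cannot forge a valid request.
sent = sign("realm1key", "myKey", "PUT",
            "/v1/AUTH_abc/container2/obj", "12345", "nonce1")
expected = sign("realm1key", "myKey", "PUT",
                "/v1/AUTH_abc/container2/obj", "12345", "nonce1")
print(sent == expected)  # True
```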

As a result of this configuration, when we upload a file to a container on our first cluster that has been set to sync with the second, the file is automatically copied to the remote container in the background.

There's more…

Container Synchronization is one approach that allows different Swift clusters to replicate data between them. Another approach is using Global Clusters. For more information, visit https://swiftstack.com/blog/2013/07/02/swift-1-9-0-release/.
