Configuring a master zone

All RADOS Gateways in a multi-site v2 configuration get their configuration from a radosgw daemon on a node within the master zone group and master zone. To configure your RADOS Gateways for multi-site v2, you need to choose one radosgw instance on which to configure the master zone group and master zone. In this recipe, you will use the us-east-1 RGW instance to configure the master zone:

  1. Create an RGW keyring in the /etc/ceph path and verify that you can access the cluster with the RGW Cephx user:
        # cp /var/lib/ceph/radosgw/ceph-rgw.us-east-1/keyring \
        /etc/ceph/ceph.client.rgw.us-east-1.keyring
        # cat /etc/ceph/ceph.client.rgw.us-east-1.keyring
        # ceph -s --id rgw.us-east-1

Now you should be able to use this RGW Cephx user to run radosgw-admin commands in cluster 1.

  2. Create the RGW multi-site v2 realm. Run the following command on the us-east-1 RGW node to create a realm:
        # radosgw-admin realm create --rgw-realm=cookbookv2 \
        --default --id rgw.us-east-1

You can ignore the error message printed by the preceding command; it will be fixed in a future release of Jewel. It is a known issue: not a real error, but an informational message logged at the error level. It will not cause any issues when configuring RGW multi-site v2.
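If you want to confirm that the realm was created and set as the default, you can query it; this verification command is optional and not part of the original recipe:

        # radosgw-admin realm get --rgw-realm=cookbookv2 --id rgw.us-east-1

The output should show the realm's ID and the name cookbookv2.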

  3. Create a master zone group. An RGW realm must have at least one RGW zone group, which will serve as the master zone group for the realm.

Run the following command on the us-east-1 RGW node to create a master zone group:

        # radosgw-admin zonegroup create --rgw-zonegroup=us \
        --endpoints=http://us-east-1.cephcookbook.com:8080 \
        --rgw-realm=cookbookv2 --master --default \
        --id rgw.us-east-1
  4. Create a master zone. An RGW zone group must have at least one RGW zone. Run the following command on the us-east-1 RGW node to create a master zone:
        # radosgw-admin zone create --rgw-zonegroup=us \
        --rgw-zone=us-east-1 --master --default \
        --endpoints=http://us-east-1.cephcookbook.com:8080 \
        --id rgw.us-east-1
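Before removing the defaults in the next step, you can optionally inspect the new zone group and zone to confirm they were created as expected (these verification commands are not part of the original steps):

        # radosgw-admin zonegroup get --rgw-zonegroup=us --id rgw.us-east-1
        # radosgw-admin zone get --rgw-zone=us-east-1 --id rgw.us-east-1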
  5. Remove the default zone group and zone from cluster 1:
        # radosgw-admin zonegroup remove --rgw-zonegroup=default \
        --rgw-zone=default --id rgw.us-east-1
        # radosgw-admin zone delete --rgw-zone=default \
        --id rgw.us-east-1
        # radosgw-admin zonegroup delete --rgw-zonegroup=default \
        --id rgw.us-east-1

Finally, update the period with the new us zone group and the us-east-1 zone, which will be used for multi-site v2:

        # radosgw-admin period update --commit --id rgw.us-east-1
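Optionally, you can verify the committed period; it should now list the us zone group with us-east-1 as its master zone (this check is an addition to the original recipe):

        # radosgw-admin period get --id rgw.us-east-1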
  6. Remove the RGW default pools:
        # for i in `ceph osd pool ls --id rgw.us-east-1 | grep default.rgw`; \
        do ceph osd pool delete $i $i \
        --yes-i-really-really-mean-it --id rgw.us-east-1; done
  7. Create an RGW multi-site v2 system user. In the master zone, create a system user that will be used to establish authentication between the multi-site radosgw daemons:
        # radosgw-admin user create --uid="replication-user" \
        --display-name="Multisite v2 replication user" \
        --system --id rgw.us-east-1
Make a note of the access key and secret key of the replication-user system user, because you will need to use the same keys when configuring the secondary zone.
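If you did not record the keys when the user was created, you can retrieve them again at any time with the following optional command:

        # radosgw-admin user info --uid=replication-user --id rgw.us-east-1

The access_key and secret_key fields appear under the keys section of the output.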
  8. Finally, update the master zone and the period with this system user's keys:
        # radosgw-admin zone modify --rgw-zone=us-east-1 \
        --access-key=ZYCDNTEASHKREV4X9BUJ \
        --secret=4JbC4OC4vC6fy6EY6Pfp8rPZMrpDnYmETZxNyyu9 \
        --id rgw.us-east-1
        # radosgw-admin period update --commit --id rgw.us-east-1

  9. You also need to update the [client.rgw.us-east-1] section of ceph.conf with the rgw_zone=us-east-1 option.
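For reference, the relevant ceph.conf section should look similar to the following sketch; any options you already have in this section stay as they are, and only the rgw_zone line is added:

        [client.rgw.us-east-1]
        rgw_zone = us-east-1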
  10. Restart the us-east-1 RGW daemon:
        # systemctl restart ceph-radosgw.target