How to do it...

In this recipe, we are going to configure OpenStack as a Ceph client, which will later be used to configure Cinder, Glance, and Nova:

  1. Install ceph-common, the Ceph client-side package, on the OpenStack node, and then copy ceph.conf from ceph-node1 to the OpenStack node, os-node1.
  2. Create an SSH tunnel between the monitor node ceph-node1 and the OpenStack node os-node1:
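    One way to set this up, assuming root access and that password authentication is enabled on os-node1 (the exact method is environment-specific), is to generate a key pair on ceph-node1 and copy the public key over:
        # ssh-keygen
        # ssh-copy-id root@os-node1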
  3. Copy the Ceph repository file from ceph-node1 to os-node1:
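    Assuming the repository definition lives at the usual /etc/yum.repos.d/ceph.repo path (adjust if your environment differs), scp can do the copy:
        # scp /etc/yum.repos.d/ceph.repo os-node1:/etc/yum.repos.d/ceph.repo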
  4. Install the ceph-common package on os-node1:
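    On a yum-based distribution, which these nodes are assumed to be, the installation can be run remotely from ceph-node1:
        # ssh os-node1 yum install -y ceph-common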
  5. Once it completes, yum reports a successful installation.
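    To confirm that the package is in place (a quick check, assuming the rpm tool as on CentOS), you can query it from ceph-node1:
        # ssh os-node1 rpm -q ceph-common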
  6. Copy ceph.conf from ceph-node1 to os-node1:
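    Assuming the default /etc/ceph/ceph.conf location, scp handles the copy:
        # scp /etc/ceph/ceph.conf os-node1:/etc/ceph/ceph.conf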
  7. Create Ceph pools for Cinder, Glance, and Nova from the monitor node ceph-node1. You may use any available pool, but it's recommended that you create separate pools for the OpenStack components:
        # ceph osd pool create images 128
        # ceph osd pool create volumes 128
        # ceph osd pool create vms 128
We have used 128 as the PG number for each of these three pools. To calculate the PG count for your own pools, you can use the Ceph PGcalc tool at http://ceph.com/pgcalc/.
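To confirm that the pools were created, you can list them from the monitor node with the standard ceph osd lspools command:
        # ceph osd lspools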
  8. Set up client authentication by creating new users for Cinder and Glance:
        # ceph auth get-or-create client.cinder mon 'allow r' \
          osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
        # ceph auth get-or-create client.glance mon 'allow r' \
          osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
  9. Add the keyrings to os-node1 and change their ownership:
        # ceph auth get-or-create client.glance | \
          ssh os-node1 sudo tee /etc/ceph/ceph.client.glance.keyring
        # ssh os-node1 sudo chown glance:glance \
          /etc/ceph/ceph.client.glance.keyring
        # ceph auth get-or-create client.cinder | \
          ssh os-node1 sudo tee /etc/ceph/ceph.client.cinder.keyring
        # ssh os-node1 sudo chown cinder:cinder \
          /etc/ceph/ceph.client.cinder.keyring
  10. The libvirt process requires access to the Ceph cluster while attaching or detaching a block device from Cinder. We should create a temporary copy of the client.cinder key, which will be needed for the Cinder and Nova configuration later in this chapter:
        # ceph auth get-key client.cinder | \
          ssh os-node1 tee /etc/ceph/temp.client.cinder.key
  11. At this point, you can test the previous configuration by accessing the Ceph cluster from os-node1 using the client.glance and client.cinder Ceph users.
    Log in to os-node1 and run the following commands:
        $ vagrant ssh openstack-node1
        $ sudo su -
        # ceph -s --id glance
        # ceph -s --id cinder
  12. Finally, generate a UUID, then create, define, and set the secret key for libvirt, and remove the temporary keys:
    1. Generate a UUID by using the following command:
                # cd /etc/ceph
                # uuidgen
    2. Create a secret file and set this UUID in it:
                cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>e279566e-bc97-46d0-bd90-68080a2a0ad8</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

Make sure that you use your own UUID generated for your environment.

    3. Define the secret and keep the generated secret value safe. We will require this secret value in the next steps:
                # virsh secret-define --file secret.xml
    4. Set the secret value that was generated in the last step to virsh and delete the temporary files. Deleting the temporary files is optional; it's done just to keep the system clean:
                # virsh secret-set-value \
                  --secret e279566e-bc97-46d0-bd90-68080a2a0ad8 \
                  --base64 $(cat temp.client.cinder.key) && \
                  rm temp.client.cinder.key secret.xml
                # virsh secret-list