How to do it...

To create the Ceph cluster from the VSM dashboard, navigate to Cluster Management | Create Cluster, and then click on the Create Cluster button.

VSM Dashboard create cluster section

As shown in the preceding screenshot, version 2.2.0 has an Import Cluster tab. As 2.2.0 is still in beta, we need to use a couple of hacks:

  1. Disable MDSs and RGWs and restart vsm-api.

Open file /usr/lib/python2.7/site-packages/vsm/api/v1/clusters.py.

The file can be found at https://github.com/01org/virtual-storage-manager/blob/master/source/vsm/vsm/api/v1/clusters.py.

RGW is already disabled in this file; we need to do the same for MDS, and then restart the vsm-api service:

        /etc/init.d/vsm-api restart
  2. Disable MDSs and RGWs and restart vsm-scheduler:

Open file /usr/lib/python2.7/site-packages/vsm/scheduler/manager.py.

        /etc/init.d/vsm-scheduler restart
  3. Select all the nodes by clicking on the checkbox next to the ID, and finally, click on the Create Cluster button:
VSM Dashboard create cluster section after clicking on create cluster tab

The Ceph cluster's creation will take a few minutes. VSM briefly displays what it is doing in the background under the status field of the dashboard, as shown next:

After cleaning the disks, it will mount them, as shown under the status field of the dashboard in the following screenshot:

Once the Ceph cluster deployment is complete, VSM will display the node status as Active. However, only the monitor daemons will be up; the OSDs will not be started. VSM creates the OSD data path as /var/lib/ceph/osd/osd$id, but the Ceph Jewel release expects the OSD data path to be /var/lib/ceph/osd/$cluster-$id:

VSM Dashboard after cluster got created with status as active
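The mismatch is easier to see side by side. A small illustration of the two path conventions described above (the helper names are ours, not VSM's or Ceph's):

```python
# Illustration of the two OSD data-path conventions.

def vsm_osd_path(osd_id):
    # Path layout that VSM 2.2.0 creates
    return "/var/lib/ceph/osd/osd{}".format(osd_id)

def jewel_osd_path(osd_id, cluster="ceph"):
    # Path layout that the Ceph Jewel prestart script expects
    return "/var/lib/ceph/osd/{}-{}".format(cluster, osd_id)

print(vsm_osd_path(0))    # /var/lib/ceph/osd/osd0
print(jewel_osd_path(0))  # /var/lib/ceph/osd/ceph-0
```

Because the prestart script looks for the second form, every OSD fails its prestart check until the script is patched, as the next step shows.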
  4. We need to apply the following patch to /usr/lib/ceph/ceph-osd-prestart.sh in all three VMs. Replace this line:

        data="/var/lib/ceph/osd/${cluster:-ceph}-$id"

     With the following:

        data="/var/lib/ceph/osd/osd$id"
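The patch above can also be applied with a sed one-liner. The sketch below demonstrates it on a local copy of the relevant line; on the real nodes, run the sed command against /usr/lib/ceph/ceph-osd-prestart.sh (after taking a backup):

```shell
# Demo copy containing the line the Jewel prestart script ships with;
# on a real node, skip this and target /usr/lib/ceph/ceph-osd-prestart.sh.
printf '%s\n' 'data="/var/lib/ceph/osd/${cluster:-ceph}-$id"' > ceph-osd-prestart.sh

# Rewrite the data path to the layout VSM actually created.
sed -i 's|${cluster:-ceph}-$id|osd$id|' ceph-osd-prestart.sh

cat ceph-osd-prestart.sh   # data="/var/lib/ceph/osd/osd$id"
```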
  5. Start the OSDs in all three VMs one by one with the following command:
        $ systemctl start ceph-osd@$id

This will bring all the OSDs up and in.
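Rather than typing the command per OSD, the IDs can be derived from the data directories VSM created (/var/lib/ceph/osd/osd$id). A sketch, using a scratch directory and an echo for illustration; on the real nodes, point osd_root at /var/lib/ceph/osd and drop the echo:

```shell
# Scratch directory standing in for /var/lib/ceph/osd on a node.
osd_root=./osd-demo
mkdir -p "$osd_root/osd0" "$osd_root/osd1" "$osd_root/osd2"

# Strip everything up to the trailing 'osd' to recover each ID,
# then start (here: print) the matching systemd unit.
for dir in "$osd_root"/osd*; do
    id="${dir##*osd}"
    echo systemctl start "ceph-osd@$id"   # drop 'echo' to actually start
done
```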

  6. If PGs are stuck in the creating state, you might want to remove the default pool and recreate it:
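A hedged sketch of the delete-and-recreate step: 'rbd' is the default pool name in Jewel, and 64 placement groups is an assumed value for a small three-OSD cluster. The commands are echoed for illustration; remove the echo to run them from a monitor node:

```shell
# Delete the default pool (Ceph requires the name twice plus the
# safety flag), then recreate it with pg_num and pgp_num of 64.
echo ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
echo ceph osd pool create rbd 64 64
```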

Installed Ceph Jewel version:

  7. Finally, check the cluster status from Dashboard | Cluster Status:

IOPS, Latency, Bandwidth, and CPU details are also available in the dashboard:
