Adding the Ceph OSD

Adding an OSD node to the Ceph cluster is an online process. To demonstrate this, we require a new virtual machine named ceph-node4 with three disks that will act as OSDs. This new node will then be added to our existing Ceph cluster.

Unless otherwise specified, run the following commands from ceph-node1:

  1. Create a new node, ceph-node4, with three disks (OSDs). You can follow the process for creating a new virtual machine with disks and configuring the OS, as described in the Setting up a virtual infrastructure recipe in Chapter 1, Ceph – Introduction and Beyond; make sure that ceph-node1 can SSH into ceph-node4.
    Before adding the new node to the Ceph cluster, let's check the current OSD tree. As the following command shows, the cluster has three nodes and a total of nine OSDs:
# ceph osd tree
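The output will resemble the following; the IDs, weights, and disk sizes shown here are only illustrative and depend on how your cluster was built:

ID WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.13500 root default
-2 0.04500     host ceph-node1
 0 0.01500         osd.0             up  1.00000          1.00000
 1 0.01500         osd.1             up  1.00000          1.00000
 2 0.01500         osd.2             up  1.00000          1.00000
-3 0.04500     host ceph-node2
 3 0.01500         osd.3             up  1.00000          1.00000
 4 0.01500         osd.4             up  1.00000          1.00000
 5 0.01500         osd.5             up  1.00000          1.00000
-4 0.04500     host ceph-node3
 6 0.01500         osd.6             up  1.00000          1.00000
 7 0.01500         osd.7             up  1.00000          1.00000
 8 0.01500         osd.8             up  1.00000          1.00000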
  2. Update the /etc/ansible/hosts file by adding ceph-node4 under the [osds] section:
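After the edit, the inventory should contain an entry for the new node; assuming the three-node layout used in the earlier recipes, the [osds] section will look roughly like this:

[osds]
ceph-node1
ceph-node2
ceph-node3
ceph-node4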
  3. Verify that Ansible can reach the newly added ceph-node4 mentioned in /etc/ansible/hosts:
root@ceph-node1 # ansible all -m ping
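If SSH access and the inventory are set up correctly, every host in /etc/ansible/hosts answers with a pong; the reply for the new node looks similar to this:

ceph-node4 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}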
  4. List the available devices on ceph-node4 that will be used as OSDs (sdb, sdc, and sdd):
root@ceph-node4 # lsblk
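The three spare disks should show up as raw, unpartitioned, unmounted block devices. The sizes and system-disk layout below are only an example:

NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda      8:0    0  20G  0 disk
├─sda1   8:1    0   1G  0 part /boot
└─sda2   8:2    0  19G  0 part /
sdb      8:16   0  20G  0 disk
sdc      8:32   0  20G  0 disk
sdd      8:48   0  20G  0 disk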
  5. Review the osds.yml file on ceph-node1 and validate that the devices it lists correspond to the storage devices on the OSD node ceph-node4 and that journal_collocation is set to true:
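In this setup the file lives under /usr/share/ceph-ansible/group_vars/. The relevant part should look roughly like the following sketch; exact option names can vary between ceph-ansible releases:

devices:
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd

journal_collocation: true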
  6. From the /usr/share/ceph-ansible directory, run the Ansible playbook to deploy the OSD node ceph-node4 with its three OSDs:
root@ceph-node1 ceph-ansible # ansible-playbook site.yml
  7. As soon as you add new OSDs to the Ceph cluster, it starts rebalancing existing data onto them. You can monitor the rebalancing using the following command; after a while, you will notice that your Ceph cluster becomes stable:
# watch ceph -s
  8. Once the OSDs for ceph-node4 have been added successfully, you can check the cluster's new storage capacity:
# rados df
# ceph df
  9. Check the OSD tree; it will give you a better understanding of your cluster. You should notice the recently added OSDs under ceph-node4:
# ceph osd tree
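The tree should now show a fourth host bucket holding three additional OSDs; again, the IDs and weights are illustrative, and the entries for ceph-node2 and ceph-node3 are omitted here:

ID WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.18000 root default
-2 0.04500     host ceph-node1
 0 0.01500         osd.0             up  1.00000          1.00000
...
-5 0.04500     host ceph-node4
 9 0.01500         osd.9             up  1.00000          1.00000
10 0.01500         osd.10            up  1.00000          1.00000
11 0.01500         osd.11            up  1.00000          1.00000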

    This command outputs some valuable information, such as the OSD weight, any reweight that has been set, the primary affinity, which Ceph node hosts which OSD, and the UP/DOWN status of each OSD.

We just learned how to add a new node to an existing Ceph cluster. This is a good time to understand that, as the number of OSDs increases, choosing the right PG count becomes more important, because it has a significant influence on the behavior of the cluster. Increasing the PG count on a large cluster can be an expensive operation. I encourage you to take a look at http://docs.ceph.com/docs/master/rados/operations/placement-groups/#choosing-the-number-of-placement-groups for up-to-date guidance on choosing the number of Placement Groups (PGs).
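As a rough sketch of the rule of thumb described on that page, the total number of PGs is commonly estimated as (number of OSDs x 100) / replica count, rounded up to the nearest power of two. For the 12-OSD cluster we have just built, assuming the default replica count of 3, that gives:

(12 x 100) / 3 = 400  ->  next power of two = 512

A pool's PG count could then be raised with the following commands, where <pool-name> is a placeholder for your pool; keep in mind that increasing pg_num itself triggers data movement, so plan it for a quiet period:

# ceph osd pool set <pool-name> pg_num 512
# ceph osd pool set <pool-name> pgp_num 512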
