How to do it...

Once you have installed the software mentioned previously, we will proceed with the virtual machine creation:

  1. Clone the Ceph-Designing-and-Implementing-Scalable-Storage-Systems repository to your VirtualBox host machine:
      $ git clone https://github.com/PacktPublishing/Ceph-Designing-and-Implementing-Scalable-Storage-Systems
  2. Under this directory, you will find the Vagrantfile, which is our Vagrant configuration file; it instructs VirtualBox to launch the VMs that we require at different stages of this book. Vagrant automates the VMs' creation, installation, and configuration for you, which makes the initial environment easy to set up (a quick way to inspect the node definitions is sketched after this step):
      $ cd Ceph-Designing-and-Implementing-Scalable-Storage-Systems ; ls -l
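The Vagrantfile defines the ceph-node VMs used throughout this book. If you would like to confirm which node definitions it contains before launching anything, a simple grep on the host is one option; this assumes, as in this repository, that the node names appear literally in the Vagrantfile:
      $ grep -n ceph-node Vagrantfile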
  3. Next, we will launch three VMs using Vagrant; they are required throughout this chapter:
      $ vagrant up ceph-node1 ceph-node2 ceph-node3
If the default Vagrant provider is not set to VirtualBox, set it as shown next; to make the setting permanent, add the export line to your user's .bashrc file:
      $ export VAGRANT_DEFAULT_PROVIDER=virtualbox
      $ echo $VAGRANT_DEFAULT_PROVIDER
Then run vagrant up ceph-node1 ceph-node2 ceph-node3 again.
  4. Check the status of your virtual machines:
      $ vagrant status ceph-node1 ceph-node2 ceph-node3
The username and password that Vagrant uses to configure the virtual machines is vagrant, and the vagrant user has sudo rights. The default password for the root user is vagrant.
  5. Vagrant will, by default, set up hostnames as ceph-node<node_number> and IP addresses in the 192.168.1.X subnet, and will create three additional disks that will be used as OSDs by the Ceph cluster. Log in to each of these machines one by one and check whether the hostname, networking, and additional disks have been set up correctly by Vagrant (a one-liner that checks all three nodes is sketched after this step):
      $ vagrant ssh ceph-node1
      $ ip addr show
      $ sudo fdisk -l
      $ exit
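If you prefer not to log in to each VM interactively, the same checks can be run from the host with vagrant ssh -c. This is a minimal sketch; it assumes only the node names defined above:
      $ for n in 1 2 3; do vagrant ssh ceph-node$n -c "hostname; ip addr show; sudo fdisk -l"; done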

  6. Vagrant is configured to update the hosts file on the VMs. For convenience, update the /etc/hosts file on your host machine as well with the following content (a sketch for appending these entries is shown after this step):
      192.168.1.101 ceph-node1
      192.168.1.102 ceph-node2
      192.168.1.103 ceph-node3
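On a Linux host, one way to append these entries, assuming you have sudo rights, is with tee:
      $ echo "192.168.1.101 ceph-node1" | sudo tee -a /etc/hosts
      $ echo "192.168.1.102 ceph-node2" | sudo tee -a /etc/hosts
      $ echo "192.168.1.103 ceph-node3" | sudo tee -a /etc/hosts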
  7. Update all three VMs to the latest CentOS release and reboot into the latest kernel (a sketch of the commands is shown after this step).
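Assuming the VMs run CentOS with yum, as provisioned by this repository's Vagrantfile, the update and reboot can be performed on each node as root roughly as follows:
      # yum update -y
      # reboot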
  8. Generate root SSH keys on ceph-node1 and copy the keys to ceph-node2 and ceph-node3. The password for the root user on these VMs is vagrant. Enter the root user's password when asked by the ssh-copy-id command and proceed with the default settings (a non-interactive variant of the key generation is sketched after this step):
      $ vagrant ssh ceph-node1
      $ sudo su -
      # ssh-keygen
      # ssh-copy-id root@ceph-node1
      # ssh-copy-id root@ceph-node2
      # ssh-copy-id root@ceph-node3
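If you want to script this step, ssh-keygen can be run non-interactively; this sketch accepts the default key type and location and sets an empty passphrase (ssh-copy-id will still prompt for each node's root password):
      # ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa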
  9. Once the SSH keys are copied to ceph-node2 and ceph-node3, the root user on ceph-node1 can log in to these VMs over SSH without entering a password:
      # ssh ceph-node2 hostname
      # ssh ceph-node3 hostname
  10. Enable the ports that are required by the Ceph MON, OSD, and MDS on the operating system's firewall. Execute the following commands on all VMs (a sketch for running them on all nodes from ceph-node1 is shown after this step):
      # firewall-cmd --zone=public --add-port=6789/tcp --permanent
      # firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent
      # firewall-cmd --reload
      # firewall-cmd --zone=public --list-all
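Because passwordless root SSH from ceph-node1 was set up in the previous steps, one way to apply the firewall rules to all three nodes at once, run as root on ceph-node1, is a loop like this:
      # for node in ceph-node1 ceph-node2 ceph-node3; do ssh $node "firewall-cmd --zone=public --add-port=6789/tcp --permanent && firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent && firewall-cmd --reload"; done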
  11. Install and configure NTP on all VMs (an optional verification is sketched after this step):
      # yum install ntp ntpdate -y
      # ntpdate pool.ntp.org
      # systemctl restart ntpdate.service
      # systemctl restart ntpd.service
      # systemctl enable ntpd.service
      # systemctl enable ntpdate.service
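As an optional check that is not part of the original recipe, you can verify on each node that ntpd is running and has reachable time sources:
      # systemctl status ntpd
      # ntpq -p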