How it works...

Since ceph-medic performs checks against the entire cluster, it needs to know which nodes exist in your cluster and have password-less SSH access to them. If your cluster was deployed via ceph-ansible, your nodes are already configured and this is not required; otherwise, you will need to point ceph-medic to an inventory file and an SSH config file.

The syntax for the ceph-medic command is as follows:

        # ceph-medic --inventory /path/to/hosts --ssh-config /path/to/ssh_config check
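
For example, with both files created in the current working directory, the full invocation would look like this (the hosts name is required, as explained below; ssh_config is just an example name for the SSH config file):

        # ceph-medic --inventory hosts --ssh-config ssh_config check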

The inventory file is a typical Ansible inventory file and can be created in the current working directory where the ceph-medic check is run. The file must be called hosts and the following standard host groups are supported: mons, osds, rgws, mdss, mgrs, and clients. An example hosts file would look as follows:

[mons]
ceph-node1
ceph-node2
ceph-node3

[osds]
ceph-node1
ceph-node2
ceph-node3

[mdss]
ceph-node2

The SSH config file allows non-interactive SSH access to accounts on each node that can use sudo without a password prompt. Like the inventory, this file can be created in the working directory where the ceph-medic check is run. An example SSH config file for a cluster of Vagrant VMs would look as follows:

Host ceph-node1
  HostName 127.0.0.1
  User vagrant
  Port 2200
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/andrewschoen/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host ceph-node2
  HostName 127.0.0.1
  User vagrant
  Port 2201
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/andrewschoen/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL
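
Before running the check, it is worth confirming that this configuration really provides non-interactive logins and password-less sudo; a minimal sketch, assuming the file above is saved as ssh_config in the working directory (BatchMode makes SSH fail rather than prompt for a password):

        # ssh -F ssh_config -o BatchMode=yes ceph-node1 sudo true && echo "access OK"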