OSD CRUSH map

The CRUSH map holds a variety of information that Ceph needs to distribute and place objects. Sometimes it is useful to check whether this distribution is misbehaving because of an incorrect value in the CRUSH map. The CRUSH map can be downloaded as a binary file and decompiled locally for viewing. It is also possible to dump the CRUSH map and its rules directly from the command line, which is convenient for a quick check.
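
If you prefer working with the decompiled text form, the map can be fetched and converted with crushtool; the file names used below are arbitrary:

root@ceph-client0:~# ceph osd getcrushmap -o crushmap.bin
root@ceph-client0:~# crushtool -d crushmap.bin -o crushmap.txt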

We can use osd crush dump to display the entire CRUSH map.

root@ceph-client0:~# ceph osd crush dump
{
    "devices": [
        {
            "id": 0,
            "name": "osd.0"
        },
...
    "rules": [
        {
            "rule_id": 0,
            "rule_name": "replicated_ruleset",
            "ruleset": 0,
...
    "tunables": {
        "choose_local_tries": 0,
        "choose_local_fallback_tries": 0,
        "choose_total_tries": 50,
        "chooseleaf_descend_once": 1,
...
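
If only the tunables section is of interest, it can be printed on its own with osd crush show-tunables; the full JSON from osd crush dump can also be filtered with a tool such as jq, if it is installed:

root@ceph-client0:~# ceph osd crush show-tunables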

Dumping the entire map might not always be useful, depending on the information sought. A better way to inspect individual rules is with the osd crush rule dump subcommand.

We first list the CRUSH rules currently defined within our cluster:

root@ceph-client0:~# ceph osd crush rule ls
[
    "replicated_ruleset"
]

Here we only have one rule, and it is named replicated_ruleset. Let's see what it does.

root@ceph-client0:~# ceph osd crush rule dump replicated_ruleset
{
    "rule_id": 0,
    "rule_name": "replicated_ruleset",
    "ruleset": 0,
    "type": 1,
    "min_size": 1,
    "max_size": 10,
    "steps": [
        {
            "op": "take",
            "item": -1,
            "item_name": "default"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}

This rule applies to replicated pools: starting from the default root, it selects as many distinct hosts as the pool has replicas and places one copy of each object on a leaf (an OSD) under each of those hosts, so no two copies of an object end up on the same host.
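
For reference, the same rule in the decompiled text form of the CRUSH map (the crushmap.txt produced by crushtool -d earlier) would look roughly like this; the exact whitespace may vary:

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

Depending on the Ceph release, a rule of this shape can also be created without hand-editing the map, for example with ceph osd crush rule create-simple.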
