How it works...

The syntax for ceph-objectstore-tool is:

ceph-objectstore-tool <options>

The values for <options> can be as follows (example invocations follow the list):

  • --data-path: The path to the OSD
  • --journal-path: The path to the journal
  • --op: The operation
  • --pgid: The placement group ID
  • --skip-journal-replay: Use this when the journal is corrupted
  • --skip-mount-omap: Use this when the LevelDB data store is corrupted and cannot be mounted
  • --file: The path to a file, used with the import/export operations
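
As a quick sketch of how these options combine, the following invocation lists the placement groups stored on an OSD. The data path and journal path shown here are common defaults and may differ on your deployment, and the OSD daemon must be stopped before running the tool:

ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 --journal-path /var/lib/ceph/osd/ceph-2/journal --op list-pgs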

To understand this tool better, let's take an example: a pool keeps two copies of each object, and its PGs are located on osd.1 and osd.2. At this point, if a failure happens, the following sequence will occur:

  1. osd.1 goes down.
  2. osd.2 handles all the write operations in a degraded state.
  3. osd.1 comes up and peers with osd.2 for data replication.
  4. Suddenly, osd.2 goes down before replicating all the objects to osd.1.
  5. At this point, you have data on osd.1, but it's stale.

After troubleshooting, you will find that you can read the osd.2 data from the filesystem, but its OSD service will not start. In such a situation, you should use ceph-objectstore-tool to export/retrieve data from the failed OSD. The ceph-objectstore-tool provides enough capability to examine, modify, and retrieve object data and metadata.
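
A sketch of that recovery workflow is shown next; the data paths, the PG ID 1.2, and the export file path are placeholder values for illustration, and both OSD daemons must be stopped while the tool runs. You would export the up-to-date PG from osd.2 and then import it into osd.1:

ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 --journal-path /var/lib/ceph/osd/ceph-2/journal --pgid 1.2 --op export --file /tmp/pg.1.2.export
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-1 --journal-path /var/lib/ceph/osd/ceph-1/journal --pgid 1.2 --op import --file /tmp/pg.1.2.export

In practice, the stale copy of the PG is usually removed from osd.1 (the tool has an --op remove operation for this) before the import; check the documentation for your Ceph release, as the exact procedure can vary.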

You should avoid using Linux tools such as cp and rsync for recovering data from a failed OSD, as these tools do not take all the necessary metadata into account, and the recovered object might be unusable!