Get to grips with the unified, highly scalable distributed storage system and learn how to design and implement it.

Key Features

  • Explore Ceph's architecture in detail
  • Implement a Ceph cluster successfully and gain deep insights into its best practices
  • Leverage the advanced features of Ceph, including erasure coding, tiering, and BlueStore

Book Description

This Learning Path takes you from the basics of Ceph all the way to an in-depth understanding of its advanced features. You'll gain the skills to plan, deploy, and manage your Ceph cluster. After an introduction to the Ceph architecture and its core projects, you'll set up a Ceph cluster and learn how to monitor its health, improve its performance, and troubleshoot any issues. By following the step-by-step approach of this Learning Path, you'll learn how Ceph integrates with OpenStack and its Glance, Manila, Swift, and Cinder services. With knowledge of federated architecture and CephFS, you'll use Calamari and VSM to monitor the Ceph environment. Later chapters examine the key areas of Ceph, including BlueStore, erasure coding, and cache tiering, and what each can do for your storage system. In the concluding chapters, you will develop applications with Librados, perform distributed computations with shared object classes, and see how Ceph and its supporting infrastructure can be optimized.

By the end of this Learning Path, you'll have the practical knowledge needed to operate Ceph in a production environment.

This Learning Path includes content from the following Packt products:

  • Ceph Cookbook by Michael Hackett, Vikhyat Umrao and Karan Singh
  • Mastering Ceph by Nick Fisk
  • Learning Ceph, Second Edition by Anthony D'Atri, Vaibhav Bhembre and Karan Singh

What you will learn

  • Understand the benefits of using Ceph as a storage solution
  • Combine Ceph with the OpenStack Cinder, Glance, and Nova components
  • Set up a test cluster with Ansible and virtual machines with VirtualBox
  • Develop solutions with Librados and shared object classes
  • Configure BlueStore and see its interaction with other configurations
  • Tune, monitor, and recover storage systems effectively
  • Build an erasure-coded pool by selecting intelligent parameters

Who this book is for

If you are a developer, system administrator, storage professional, or cloud engineer who wants to understand how to deploy a Ceph cluster, this Learning Path is ideal for you. It will help you discover ways in which Ceph features can solve your data storage problems. Basic knowledge of storage systems and GNU/Linux will be beneficial.

Table of Contents

  1. Title Page
  2. Copyright
    1. Ceph: Designing and Implementing Scalable Storage Systems
  3. About Packt
    1. Why Subscribe?
    2. Packt.com
  4. Contributors
    1. About the Authors
    2. Packt Is Searching for Authors Like You
  5. Preface
    1. Who This Book Is For
    2. What This Book Covers
    3. To Get the Most out of This Book
      1. Download the Example Code Files
      2. Conventions Used
    4. Get in Touch
      1. Reviews
  6. Ceph - Introduction and Beyond
    1. Introduction
    2. Ceph – the beginning of a new era
      1. Software-defined storage – SDS
      2. Cloud storage
      3. Unified next-generation storage architecture
    3. RAID – the end of an era
      1. RAID rebuilds are painful
    2. RAID spare disks increase TCO
      3. RAID can be expensive and hardware dependent
      4. The growing RAID group is a challenge
      5. The RAID reliability model is no longer promising
    4. Ceph – the architectural overview
    5. Planning a Ceph deployment
    6. Setting up a virtual infrastructure
      1. Getting ready
      2. How to do it...
    7. Installing and configuring Ceph
      1. Creating the Ceph cluster on ceph-node1
      2. How to do it...
    8. Scaling up your Ceph cluster
      1. How to do it…
    9. Using the Ceph cluster with a hands-on approach
      1. How to do it...
  7. Working with Ceph Block Device
    1. Introduction
    2. Configuring Ceph client
      1. How to do it...
    3. Creating Ceph Block Device
      1. How to do it...
    4. Mapping Ceph Block Device
      1. How to do it...
    5. Resizing Ceph RBD
      1. How to do it...
    6. Working with RBD snapshots
      1. How to do it...
    7. Working with RBD clones
      1. How to do it...
    8. Disaster recovery replication using RBD mirroring
      1. How to do it...
    9. Configuring pools for RBD mirroring with one-way replication
      1. How to do it...
    10. Configuring image mirroring
      1. How to do it...
    11. Configuring two-way mirroring
      1. How to do it...
      2. See also
    12. Recovering from a disaster!
      1. How to do it...
  8. Working with Ceph and OpenStack
    1. Introduction
    2. Ceph – the best match for OpenStack
    3. Setting up OpenStack
      1. How to do it...
    4. Configuring OpenStack as Ceph clients
      1. How to do it...
    5. Configuring Glance for Ceph backend
      1. How to do it…
    6. Configuring Cinder for Ceph backend
      1. How to do it...
    7. Configuring Nova to boot instances from Ceph RBD
      1. How to do it…
    8. Configuring Nova to attach Ceph RBD
      1. How to do it...
  9. Working with Ceph Object Storage
    1. Introduction
    2. Understanding Ceph object storage
    3. RADOS Gateway standard setup, installation, and configuration
      1. Setting up the RADOS Gateway node
      2. How to do it…
      3. Installing and configuring the RADOS Gateway
      4. How to do it…
    4. Creating the radosgw user
      1. How to do it…
      2. See also
    5. Accessing the Ceph object storage using S3 API
      1. How to do it…
      2. Configuring DNS
      3. Configuring the s3cmd client
        1. Configure the S3 client (s3cmd) on client-node1
    6. Accessing the Ceph object storage using the Swift API
      1. How to do it...
    7. Integrating RADOS Gateway with OpenStack Keystone
      1. How to do it...
    8. Integrating RADOS Gateway with the Hadoop S3A plugin
      1. How to do it...
  10. Working with Ceph Object Storage Multi-Site v2
    1. Introduction
    2. Functional changes from Hammer federated configuration
    3. RGW multi-site v2 requirements
    4. Installing the Ceph RGW multi-site v2 environment 
      1. How to do it...
    5. Configuring Ceph RGW multi-site v2
      1. How to do it...
        1. Configuring a master zone
        2. Configuring a secondary zone
        3. Checking the synchronization status 
    6. Testing user, bucket, and object sync between master and secondary sites
      1. How to do it...
  11. Working with the Ceph Filesystem
    1. Introduction
    2. Understanding the Ceph Filesystem and MDS
    3. Deploying Ceph MDS
      1. How to do it...
    4. Accessing Ceph FS through kernel driver
      1. How to do it...
    5. Accessing Ceph FS through FUSE client
      1. How to do it...
    6. Exporting the Ceph Filesystem as NFS
      1. How to do it...
    7. Ceph FS – a drop-in replacement for HDFS
  12. Operating and Managing a Ceph Cluster
    1. Introduction
    2. Understanding Ceph service management
    3. Managing the cluster configuration file
      1. How to do it...
        1. Adding monitor nodes to the Ceph configuration file
        2. Adding an MDS node to the Ceph configuration file
        3. Adding OSD nodes to the Ceph configuration file
    4. Running Ceph with systemd
      1. How to do it...
        1. Starting and stopping all daemons
        2. Querying systemd units on a node
        3. Starting and stopping all daemons by type
        4. Starting and stopping a specific daemon
    5. Scale-up versus scale-out
    6. Scaling out your Ceph cluster
      1. How to do it...
        1. Adding the Ceph OSD
        2. Adding the Ceph MON
      2. There's more...
    7. Scaling down your Ceph cluster
      1. How to do it...
        1. Removing the Ceph OSD
        2. Removing the Ceph MON
    8. Replacing a failed disk in the Ceph cluster
      1. How to do it...
    9. Upgrading your Ceph cluster
      1. How to do it...
    10. Maintaining a Ceph cluster
      1. How to do it...
      2. How it works...
        1. Throttle the backfill and recovery
  13. Ceph under the Hood
    1. Introduction
    2. Ceph scalability and high availability
    3. Understanding the CRUSH mechanism
    4. CRUSH map internals
      1. How to do it...
      2. How it works...
    5. CRUSH tunables
      1. The evolution of CRUSH tunables
        1. Argonaut – legacy
        2. Bobtail – CRUSH_TUNABLES2
        3. Firefly – CRUSH_TUNABLES3
        4. Hammer – CRUSH_V4
        5. Jewel – CRUSH_TUNABLES5
      2. Ceph and kernel versions that support given tunables
      3. Warning when tunables are non-optimal
      4. A few important points
    6. Ceph cluster map
    7. High availability monitors
    8. Ceph authentication and authorization
      1. Ceph authentication
      2. Ceph authorization
      3. How to do it…
    9. I/O path from a Ceph client to a Ceph cluster
    10. Ceph Placement Group
      1. How to do it…
    11. Placement Group states
    12. Creating Ceph pools on specific OSDs
      1. How to do it...
  14. The Virtual Storage Manager for Ceph
    1. Introduction
    2. Understanding the VSM architecture
      1. The VSM controller
      2. The VSM agent
    3. Setting up the VSM environment
      1. How to do it...
    4. Getting ready for VSM
      1. How to do it...
    5. Installing VSM
      1. How to do it...
    6. Creating a Ceph cluster using VSM
      1. How to do it...
    7. Exploring the VSM dashboard
    8. Upgrading the Ceph cluster using VSM
    9. VSM roadmap
    10. VSM resources
  15. More on Ceph
    1. Introduction
    2. Disk performance baseline
      1. Single disk write performance
      2. How to do it...
      3. Multiple disk write performance
      4. How to do it...
      5. Single disk read performance
      6. How to do it...
      7. Multiple disk read performance
      8. How to do it...
      9. Results
    3. Baseline network performance
      1. How to do it...
    4. Ceph rados bench
      1. How to do it...
      2. How it works...
    5. RADOS load-gen
      1. How to do it...
      2. How it works...
      3. There's more...
    6. Benchmarking the Ceph Block Device
      1. How to do it...
      2. How it works...
    7. Benchmarking Ceph RBD using FIO
      1. How to do it...
    8. Ceph admin socket
      1. How to do it...
    9. Using the ceph tell command
      1. How to do it...
    10. Ceph REST API
      1. How to do it...
    11. Profiling Ceph memory
      1. How to do it...
    12. The ceph-objectstore-tool
      1. How to do it...
      2. How it works...
    13. Using ceph-medic
      1. How to do it...
      2. How it works...
      3. See also
    14. Deploying the experimental Ceph BlueStore
      1. How to do it...
      2. See also
  16. Deploying Ceph
    1. Preparing your environment with Vagrant and VirtualBox
      1. System requirements
      2. Obtaining and installing VirtualBox
      3. Setting up Vagrant
      4. The ceph-deploy tool
    2. Orchestration
    3. Ansible
      1. Installing Ansible
      2. Creating your inventory file
      3. Variables
      4. Testing
    4. A very simple playbook
    5. Adding the Ceph Ansible modules
      1. Deploying a test cluster with Ansible
    6. Change and configuration management
    7. Summary
  17. BlueStore
    1. What is BlueStore?
    2. Why was it needed?
      1. Ceph's requirements
        1. Filestore limitations
      2. Why is BlueStore the solution?
    3. How BlueStore works
      1. RocksDB
      2. Deferred writes
      3. BlueFS
    4. How to use BlueStore
      1. Upgrading an OSD in your test cluster
    5. Summary
  18. Erasure Coding for Better Storage Efficiency
    1. What is erasure coding?
      1. K+M
    2. How does erasure coding work in Ceph?
    3. Algorithms and profiles
      1. Jerasure
      2. ISA
      3. LRC
      4. SHEC
    4. Where can I use erasure coding?
    5. Creating an erasure-coded pool
      1. Overwrites on erasure code pools with Kraken
      2. Demonstration
      3. Troubleshooting the 2147483647 error
        1. Reproducing the problem
    6. Summary
  19. Developing with Librados
    1. What is librados?
    2. How to use librados?
    3. Example librados application
      1. Example of the librados application with atomic operations
      2. Example of the librados application that uses watchers and notifiers
    4. Summary
  20. Distributed Computation with Ceph RADOS Classes
    1. Example applications and the benefits of using RADOS classes
    2. Writing a simple RADOS class in Lua
    3. Writing a RADOS class that simulates distributed computing
      1. Preparing the build environment
      2. RADOS class
      3. Client librados applications
        1. Calculating MD5 on the client
        2. Calculating MD5 on the OSD via RADOS class
      4. Testing
    4. RADOS class caveats
    5. Summary
  21. Tiering with Ceph
    1. Tiering versus caching
      1. How Ceph's tiering functionality works
    2. What is a bloom filter?
    3. Tiering modes
      1. Writeback
      2. Forward
        1. Read-forward
      3. Proxy
        1. Read-proxy
    4. Use cases
    5. Creating tiers in Ceph
    6. Tuning tiering
      1. Flushing and eviction
        1. Promotions
    7. Promotion throttling
      1. Monitoring parameters
      2. Tiering with erasure-coded pools
      3. Alternative caching mechanisms
    8. Summary
  22. Troubleshooting
    1. Repairing inconsistent objects
    2. Full OSDs
    3. Ceph logging
    4. Slow performance
      1. Causes
        1. Increased client workload
        2. Down OSDs
        3. Recovery and backfilling
        4. Scrubbing
        5. Snaptrimming
        6. Hardware or driver issues
      2. Monitoring
        1. iostat
        2. htop
        3. atop
      3. Diagnostics
    5. Extremely slow performance or no IO
      1. Flapping OSDs
      2. Jumbo frames
      3. Failing disks
      4. Slow OSDs
    6. Investigating PGs in a down state
    7. Large monitor databases
    8. Summary
  23. Disaster Recovery
    1. What is a disaster?
    2. Avoiding data loss
    3. What can cause an outage or data loss?
    4. RBD mirroring
      1. The journal
      2. The rbd-mirror daemon
      3. Configuring RBD mirroring
      4. Performing RBD failover
    5. RBD recovery
    6. Lost objects and inactive PGs
    7. Recovering from a complete monitor failure
    8. Using Ceph's object store tool
    9. Investigating asserts
      1. Example assert
    10. Summary
  24. Operations and Maintenance
    1. Topology
      1. The 40,000-foot view
      2. Drilling down
        1. OSD dump
        2. OSD list
        3. OSD find
        4. CRUSH dump
      3. Pools
      4. Monitors
      5. CephFS
    2. Configuration
      1. Cluster naming and configuration
      2. The Ceph configuration file
      3. Admin sockets
      4. Injection
      5. Configuration management
    3. Scrubs
    4. Logs
      1. MON logs
      2. OSD logs
      3. Debug levels
    5. Common tasks
      1. Installation
        1. Ceph-deploy
      2. Flags
      3. Service management
        1. Systemd: the wave (tsunami?) of the future
        2. Upstart
        3. sysvinit
      4. Component failures
      5. Expansion
      6. Balancing
      7. Upgrades
    6. Working with remote hands
    7. Summary
  25. Monitoring Ceph
    1. Monitoring Ceph clusters
      1. Ceph cluster health
      2. Watching cluster events
      3. Utilizing your cluster
      4. OSD variance and fillage
      5. Cluster status
      6. Cluster authentication
    2. Monitoring Ceph MONs
      1. MON status
      2. MON quorum status
    3. Monitoring Ceph OSDs
      1. OSD tree lookup
      2. OSD statistics
      3. OSD CRUSH map
    4. Monitoring Ceph placement groups
      1. PG states
    5. Monitoring Ceph MDS
    6. Open source dashboards and tools
      1. Kraken
      2. Ceph-dash
      3. Decapod
      4. Rook
      5. Calamari
      6. Ceph-mgr
      7. Prometheus and Grafana
    7. Summary
  26. Performance and Stability Tuning
    1. Ceph performance overview
    2. Kernel settings
      1. pid_max
      2. kernel.threads-max, vm.max_map_count
      3. XFS filesystem settings
      4. Virtual memory settings
    3. Network settings
      1. Jumbo frames
      2. TCP and network core
      3. iptables and nf_conntrack
    4. Ceph settings
      1. max_open_files
      2. Recovery
      3. OSD and FileStore settings
      4. MON settings
    5. Client settings
    6. Benchmarking
      1. RADOS bench
      2. CBT
      3. FIO
        1. Fill volume, then random 1M writes for 96 hours, no read verification:
        2. Fill volume, then small block writes for 96 hours, no read verification:
        3. Fill volume, then 4k random writes for 96 hours, occasional read verification:
    7. Summary
  27. Other Books You May Enjoy
    1. Leave a review - let other readers know what you think