"Ceph is a unified, distributed storage system designed for excellent performance, reliability and scalability." - https://ceph.com

Glossary

http://docs.ceph.com/docs/master/glossary/

  • Ceph OSD: The Ceph OSD software, which interacts with a logical disk (OSD).
  • CephFS: The POSIX filesystem components of Ceph.
  • MDS: (Ceph Metadata Server) The Ceph metadata software.
  • MGR: (Ceph Manager) The Ceph manager software, which collects all the state from the whole cluster in one place.
  • MON: (Ceph Monitor) The Ceph monitor software.
  • OSD: (Object Storage Device) A physical or logical storage unit.
  • RADOS: Reliable Autonomic Distributed Object Store.
  • RBD: The block storage component of Ceph.
  • RGW: The S3/Swift gateway component of Ceph.
  • PG: Placement Group. http://docs.ceph.com/docs/master/rados/operations/placement-groups/
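
Several of these components show up together in the cluster status output, which is a quick way to see the mons, mgr, osds, and pgs in one place:

ceph -s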

Examples

Activate all OSDs

sudo ceph-disk activate-all

Starting with Ceph 13 (Mimic), where ceph-disk is deprecated in favor of ceph-volume, use:

ceph-volume lvm activate --all
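
If you're not sure which release you're on, ceph --version reports the locally installed version, and ceph versions (Luminous and later) reports what the running daemons are on:

ceph --version
ceph versions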

Start all ceph services

sudo systemctl start ceph.target

Stop all ceph services

Unfortunately there's not a single service or target to stop, so you have to use a systemd unit glob (quoted so the shell doesn't expand it):

sudo systemctl stop '*ceph*'
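
To preview which units that glob will match before stopping anything:

systemctl list-units '*ceph*'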

Show the status of all osds in the cluster

ceph osd status

Alternatively, ceph osd tree shows the same up/down status laid out as the CRUSH hierarchy, with hosts and weights:

ceph osd tree

Show metadata about all osds in the cluster

This produces a JSON list with a dict for each OSD.

ceph osd metadata
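
This pairs well with jq for ad-hoc queries, e.g. pulling out each OSD's id and hostname (assuming jq is installed; these field names are present in recent releases):

ceph osd metadata | jq '.[] | {id, hostname}'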

Show all pools

ceph osd lspools
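
For capacity and usage broken down per pool:

ceph df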

List all RBD images in a pool

pool_name="kube"
rbd list "$pool_name"
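
To see one image's size, object count, and features:

rbd_image_name="testimage"
rbd info "$pool_name/$rbd_image_name"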

Show rbd usage stats

This shows each image's name, provisioned size, and actual usage, with a sum at the bottom; sizes default to human-readable units. Use --format json to get raw byte counts.

rbd disk-usage --pool "$pool_name" $optional_rbd_name
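
For example, to get the raw JSON for further processing (assuming jq is installed):

rbd disk-usage --pool "$pool_name" --format json | jq .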

Map an RBD image to a system device

pool_name="kube"
rbd_image_name="testimage"
rbd map "$pool_name/$rbd_image_name"

Then you can mount the resulting device. -o X-mount.mkdir automatically creates the destination mount point, but it's a util-linux extension that may not be available on all systems.

mount -o X-mount.mkdir /dev/rbd8 /mnt/rbd8
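
When you're done, unmount and unmap so the kernel releases the device:

umount /mnt/rbd8
rbd unmap /dev/rbd8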

List snapshots for an RBD image

pool_name="kube"
rbd_image_name="testimage"
rbd snap list "$pool_name/$rbd_image_name"
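
Snapshots are created with rbd snap create; a timestamped name like the one below matches the naming used in the next example:

rbd snap create "$pool_name/$rbd_image_name@snap-$(date +%m%d%y-%H%M%S)"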

Clone a snapshot into a new image so it can be mounted

pool_name="kube"
rbd_image_name="testimage"
snap_name="snap-072519-213210"
rbd clone "$pool_name/$rbd_image_name@$snap_name" "$pool_name/image-$snap_name"

After this you can map the new image and mount it as described above.
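
Note: on clusters without RBD clone v2 (the default before Mimic), the parent snapshot must be protected before it can be cloned:

rbd snap protect "$pool_name/$rbd_image_name@$snap_name"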

Monitor in-flight operations

ceph daemon "mon.$MON_HOSTNAME" ops
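
ceph daemon talks to the local admin socket, so run this on the host where that monitor daemon lives. The same subcommand works for other daemon types, e.g. an OSD ($OSD_ID here is a placeholder for the numeric osd id):

ceph daemon "osd.$OSD_ID" ops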

Links