「CEPH」- Cluster Management and Operations

Checking the running status of the cluster

Viewing an overall summary of the cluster

# ceph -w              # print the status once, then stream cluster log messages
# ceph -s              # print a one-time status summary
# watch -n 1 ceph -s   # re-run the status summary every second

# ceph status
  cluster:
    id:     cf79beac-61eb-11ed-a2e0-080027d3c643
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-node-01,ceph-node-02,ceph-node-03 (age 22h)
    mgr: ceph-node-02.gktsek(active, since 22h), standbys: ceph-node-01.ymsncp
    mds: 1/1 daemons up, 2 standby
    osd: 6 osds: 6 up (since 22h), 6 in (since 22h)
 
  data:
    volumes: 1/1 healthy
    pools:   5 pools, 113 pgs
    objects: 67 objects, 107 MiB
    usage:   20 GiB used, 160 GiB / 180 GiB avail
    pgs:     113 active+clean

# ceph health
HEALTH_OK
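
Because ceph health prints a single summary line, it is also easy to use from a simple monitoring script. A minimal sketch, assuming only standard shell tools (the echo is just a placeholder for whatever alerting you actually use):

# ceph health | grep -q '^HEALTH_OK' || echo "Ceph cluster is not healthy, check 'ceph health detail'"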

// -------------------------------------------------------- // If the cluster has problems, detailed information will be displayed

# ceph health detail
HEALTH_OK
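
In recent Ceph releases these status commands also accept the global --format (-f) option, which is handy when the output is consumed by other tooling rather than read by a human (output omitted here):

# ceph status --format json-pretty
# ceph health detail --format json-pretty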

Viewing storage space usage

# ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
ssd    3.5 TiB  2.0 TiB  1.5 TiB   1.5 TiB      43.86
TOTAL  3.5 TiB  2.0 TiB  1.5 TiB   1.5 TiB      43.86
 
--- POOLS ---
POOL                     ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
device_health_metrics     1    1  101 MiB       31  303 MiB   0.04    223 GiB
vm-over-cephfs_data      14   32      0 B        0      0 B      0    223 GiB
vm-over-cephfs_metadata  15   32  122 KiB       22  446 KiB      0    223 GiB
benchmark-rados-bench    16   32   43 GiB   17.14k   86 GiB  11.43    335 GiB
cop_synology-nas         22  512  716 GiB  233.24k  1.4 TiB  68.12    335 GiB
cop_guest-storage        26   32   16 GiB    4.16k   33 GiB   4.64    335 GiB
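
For a more detailed breakdown, a few related commands are worth knowing (outputs omitted here): ceph df detail extends the pool table with quota and compression columns in recent releases, ceph osd df tree lists usage per OSD along the CRUSH tree (useful for spotting unbalanced OSDs), and rados df reports per-pool usage from the RADOS side.

# ceph df detail
# ceph osd df tree
# rados df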