「Ceph」- Maintenance and Administration: Pools

Creating Pools (Create)

Create an Erasure-Coded Pool

root@pc-amd64-100247:~# ceph osd dump | grep erasure        // check whether any pool already uses erasure coding
root@pc-amd64-100247:~# ceph osd erasure-code-profile set my-ec-testing crush-failure-domain=osd k=3 m=2
root@pc-amd64-100247:~# ceph osd erasure-code-profile ls
default
my-ec-testing
root@pc-amd64-100247:~# ceph osd pool create synology-nas-testing 16 16 erasure my-ec-testing
pool 'synology-nas-testing' created
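
To double-check the profile just created (a sketch, not part of the original transcript; the exact output fields vary by Ceph release):

# ceph osd erasure-code-profile get my-ec-testing           // lists k, m, plugin, crush-failure-domain, ...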

Deleting Pools (Delete)

Allow deletion:

# ceph config set global mon_allow_pool_delete true
# ceph osd pool delete 'pool name' 'pool name' --yes-i-really-really-mean-it

// -------------------------------------------------------- // Alternatively, modify only a specific pool and enable its delete-protection flag

# ceph osd pool get rbd nodelete
nodelete: false
# ceph osd pool set rbd nodelete 1
set pool 11 nodelete to 1

# ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must unset nodelete flag for the pool first
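
Before such a pool can actually be deleted, the flag has to be cleared again (a sketch, reusing the rbd pool from above):

# ceph osd pool set rbd nodelete 0                          // clear the protection flag first
# ceph osd pool delete rbd rbd --yes-i-really-really-mean-it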

Modifying Pools (Update)

Change the Number of PGs

# ceph osd pool set cephfs_data pg_num 32
set pool 1 pg_num to 32
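
On releases where the PG autoscaler is not managing the pool, raise pgp_num to match (a sketch):

# ceph osd pool set cephfs_data pgp_num 32                  // keep pgp_num in step with pg_num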

Reduce the Replica Count

[SOLVED] – change num-replicas on ceph pool online? | Proxmox Support Forum

ceph osd pool set POOL_NAME size 3
ceph osd pool set POOL_NAME min_size 2

// Shrink to a single replica

# ceph config set global mon_allow_pool_size_one true
# ceph osd pool set data_pool min_size 1
# ceph osd pool set data_pool size 1 --yes-i-really-mean-it
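
To confirm the change took effect (data_pool is the example pool from above):

# ceph osd pool get data_pool size
size: 1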

Change a Pool's CRUSH Rule

# ceph osd pool get rbd crush_rule
crush_rule: replicated_rule
# ceph osd map rbd testfile
osdmap e614 pool 'rbd' (11) object 'testfile' -> pg 11.551a2b36 (11.6) -> up ([1,3], p1) acting ([1,3,2], p1)

# ceph osd pool set rbd crush_rule ssd-first
set pool 11 crush_rule to ssd-first
# ceph osd map rbd testfile                                 // when queried again, the OSD set of the object's PG has changed
osdmap e625 pool 'rbd' (11) object 'testfile' -> pg 11.551a2b36 (11.36) -> up ([2,1,3], p2) acting ([2,1,3], p2)
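
For reference, a rule such as ssd-first could be created roughly like this (an assumption, not part of the original transcript; it requires OSDs carrying an ssd device class):

# ceph osd crush rule create-replicated ssd-first default host ssd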

Querying Pools (Read)

List Pools

# ceph osd lspools 
1 device_health_metrics
14 vm-over-cephfs_data
15 vm-over-cephfs_metadata
16 benchmark-rados-bench
22 cop_synology-nas
26 cop_guest-storage
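
ceph osd pool ls detail prints the same list with per-pool settings attached (command only; output omitted):

# ceph osd pool ls detail                                   // like lspools, plus size, pg_num, flags, application, ...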

View a Pool's Configuration Parameters

# ceph osd dump | grep cop_guest-storage
pool 26 'cop_guest-storage' replicated size 2 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 11618 lfor 0/8490/8488 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd

// -------------------------------------------------------- // Query individual pool parameters

# ceph osd pool get cop_guest-storage pg_num
pg_num: 32

# ceph osd pool get cop_guest-storage pgp_num
pgp_num: 32

# ceph osd pool get cop_guest-storage nodelete
nodelete: false
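
All gettable parameters of a pool can also be dumped in one go (command only; output omitted):

# ceph osd pool get cop_guest-storage all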

View Pool Statistics

# ceph osd pool stats cop_guest-storage
pool cop_guest-storage id 26
  client io 53 KiB/s wr, 0 op/s rd, 8 op/s wr
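
For capacity rather than I/O statistics (commands only; output depends on the cluster):

# ceph df                                                   // cluster-wide and per-pool usage
# rados df                                                  // per-pool object and space statistics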

Handling Common Issues

[Sol.] application not enabled on pool ‘xxx’

Problem: ceph status reports the error application not enabled on pool ‘xxx’;

Cause: the pool has not been associated with an application (see ASSOCIATE POOL TO APPLICATION in the Ceph documentation);

Solution: ceph osd pool application enable {pool-name} {application-name}
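
For example, for one of the pools listed above (assuming that pool is indeed used by RBD):

# ceph osd pool application enable cop_synology-nas rbd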

[Sol.] x pool(s) have non-power-of-two pg_num

Change the pg_num of the affected pools to a power of two;
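
A sketch (the pool name and target value are placeholders):

# ceph health detail | grep pg_num                          // shows which pools trigger the warning
# ceph osd pool set <pool-name> pg_num 32                   // pick a nearby power of two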