Problem Description
These notes cover how to configure the second multi-site replication scenario, along with solutions to the related problems.
Solution
# 11/22/2022 This document is deprecated. It was compiled while studying the Ceph Cookbook, Chinese edition (ISBN 978-7-121-29016-9), following the book's content. That book targets Ceph 10, which implements replication via radosgw-agent. The current release is Ceph 17, where object storage replication is implemented differently (Multisite Sync Policy), and the radosgw-agent project on GitHub has been archived. Since we do not currently use Ceph multi-zone replication, verification and testing of this topic is shelved for now.
Environment Overview
Master Region: US, with two zones:
1) us-east: master zone; RGW instance: us-east-1;
2) us-west: secondary zone; RGW instance: us-west-1;
-----------------------                -----------------------
| Ceph RGW: us-east-1 |   -- sync --   | Ceph RGW: us-west-1 |
-----------------------                -----------------------
           |                                      |
----------------------------------------------------------
|                      Ceph Cluster                      |
----------------------------------------------------------
Ceph Cluster: ceph-node-01 172.31.252.201; ceph-node-02 172.31.252.202; ceph-node-03 172.31.252.203;
Ceph RGW: us-east-1 172.31.252.207; us-west-1 172.31.252.208;
Ceph Client: ubuntu-developing 172.31.252.100;
Step 1: Configure the Ceph Cluster
Create the storage pools used by the object store
These pools hold the object data and the key metadata of the object store.
for ceph-node-01
ceph osd pool create .us.east.rgw.root 32 32
ceph osd pool create .us.east.rgw.control 32 32
ceph osd pool create .us.east.rgw.gc 32 32
ceph osd pool create .us.east.rgw.buckets 32 32
ceph osd pool create .us.east.rgw.buckets.index 32 32
ceph osd pool create .us.east.rgw.buckets.extra 32 32
ceph osd pool create .us.east.log 32 32
ceph osd pool create .us.east.intent-log 32 32
ceph osd pool create .us.east.usage 32 32
ceph osd pool create .us.east.users 32 32
ceph osd pool create .us.east.users.email 32 32
ceph osd pool create .us.east.users.swift 32 32
ceph osd pool create .us.east.users.uid 32 32
ceph osd pool create .us.west.rgw.root 32 32
ceph osd pool create .us.west.rgw.control 32 32
ceph osd pool create .us.west.rgw.gc 32 32
ceph osd pool create .us.west.rgw.buckets 32 32
ceph osd pool create .us.west.rgw.buckets.index 32 32
ceph osd pool create .us.west.rgw.buckets.extra 32 32
ceph osd pool create .us.west.log 32 32
ceph osd pool create .us.west.intent-log 32 32
ceph osd pool create .us.west.usage 32 32
ceph osd pool create .us.west.users 32 32
ceph osd pool create .us.west.users.email 32 32
ceph osd pool create .us.west.users.swift 32 32
ceph osd pool create .us.west.users.uid 32 32
ceph osd lspools
Create a keyring for RGW to access the Ceph cluster
for ceph-node-01
# create the keyring and generate keys for both RGW instances
ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
ceph-authtool /etc/ceph/ceph.client.radosgw.keyring \
    -n client.radosgw.us-east-1 --gen-key \
    --cap osd 'allow rwx' --cap mon 'allow rwx'
ceph-authtool /etc/ceph/ceph.client.radosgw.keyring \
    -n client.radosgw.us-west-1 --gen-key \
    --cap osd 'allow rwx' --cap mon 'allow rwx'

# add the keyring entries to the cluster
ceph -i /etc/ceph/ceph.client.radosgw.keyring \
    auth add client.radosgw.us-east-1
ceph -i /etc/ceph/ceph.client.radosgw.keyring \
    auth add client.radosgw.us-west-1

# distribute the keyring to the RGW nodes
scp /etc/ceph/ceph.client.radosgw.keyring 172.31.252.207:/etc/ceph/
scp /etc/ceph/ceph.client.radosgw.keyring 172.31.252.208:/etc/ceph/
Step 2: Configure the RGW Nodes
Create the RGW instances
on ceph-node-01
cat >> /etc/ceph/ceph.conf <<EOF
[client.radosgw.us-east-1]
host = us-east-1
rgw region = us
rgw region root pool = .us.rgw.root
rgw zone = us-east
rgw zone root pool = .us-east.rgw.root
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/client.radosgw.us-east-1.sock
log file = /var/log/ceph/client.radosgw.us-east-1.log
rgw dns name = ceph-rgw-01

[client.radosgw.us-west-1]
host = us-west-1
rgw region = us
rgw region root pool = .us.rgw.root
rgw zone = us-west
rgw zone root pool = .us-west.rgw.root
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/client.radosgw.us-west-1.sock
log file = /var/log/ceph/client.radosgw.us-west-1.log
rgw dns name = ceph-rgw-01
EOF

# copy ceph.conf to the RGW nodes
scp /etc/ceph/ceph.conf 172.31.252.207:/etc/ceph/
scp /etc/ceph/ceph.conf 172.31.252.208:/etc/ceph/
for us-east-1 and us-west-1
cephadm install ceph-common
cephadm install radosgw
cephadm install radosgw-agent
Create the region
Create the region from us.json:
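The us.json file itself is not included in this note. A minimal sketch of what it likely contained, following the Ceph 10 federated region format; the endpoint URLs are assumptions based on the hostnames used elsewhere in this note:

```json
{
  "name": "us",
  "api_name": "us",
  "is_master": "true",
  "endpoints": ["http://us-east-1.cephcookbook.com:7480/"],
  "master_zone": "us-east",
  "zones": [
    { "name": "us-east",
      "endpoints": ["http://us-east-1.cephcookbook.com:7480/"],
      "log_meta": "true", "log_data": "true" },
    { "name": "us-west",
      "endpoints": ["http://us-west-1.cephcookbook.com:7480/"],
      "log_meta": "true", "log_data": "true" }
  ],
  "placement_targets": [ { "name": "default-placement", "tags": [] } ],
  "default_placement": "default-placement"
}
```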
radosgw-admin region set --infile ./us.json --name client.radosgw.us-east-1

# delete the default region (if it exists)
rados -p .us.rgw.root rm region_info.default --name client.radosgw.us-east-1

# set the us region as default
radosgw-admin region default --rgw-region=us --name client.radosgw.us-east-1

# update the region map
radosgw-admin regionmap update --name client.radosgw.us-east-1
Create the zones
us-east.json
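The contents of us-east.json were lost from this note. A minimal sketch of a Ceph 10 zone definition pointing at the .us.east.* pools created in step 1; the domain_root pool name and the empty system keys are assumptions (the system user's access/secret keys are filled in here once they exist):

```json
{
  "domain_root": ".us.east.domain.rgw",
  "control_pool": ".us.east.rgw.control",
  "gc_pool": ".us.east.rgw.gc",
  "log_pool": ".us.east.log",
  "intent_log_pool": ".us.east.intent-log",
  "usage_log_pool": ".us.east.usage",
  "user_keys_pool": ".us.east.users",
  "user_email_pool": ".us.east.users.email",
  "user_swift_pool": ".us.east.users.swift",
  "user_uid_pool": ".us.east.users.uid",
  "system_key": { "access_key": "", "secret_key": "" }
}
```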
us-east-1# radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.us-east-1
us-east-1# radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.us-west-1
us-west.json
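Likewise, the contents of us-west.json were lost from this note. A minimal sketch mirroring the us-east zone definition, with the .us.west.* pools created in step 1; the domain_root pool name and empty system keys are assumptions:

```json
{
  "domain_root": ".us.west.domain.rgw",
  "control_pool": ".us.west.rgw.control",
  "gc_pool": ".us.west.rgw.gc",
  "log_pool": ".us.west.log",
  "intent_log_pool": ".us.west.intent-log",
  "usage_log_pool": ".us.west.usage",
  "user_keys_pool": ".us.west.users",
  "user_email_pool": ".us.west.users.email",
  "user_swift_pool": ".us.west.users.swift",
  "user_uid_pool": ".us.west.users.uid",
  "system_key": { "access_key": "", "secret_key": "" }
}
```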
us-east-1# radosgw-admin zone set --rgw-zone=us-west --infile us-west.json --name client.radosgw.us-east-1
us-east-1# radosgw-admin zone set --rgw-zone=us-west --infile us-west.json --name client.radosgw.us-west-1
Delete the default zone:
rados -p .rgw.root rm zone_info.default --name client.radosgw.us-east-1
radosgw-admin regionmap update --name client.radosgw.us-east-1
Create RGW users for accessing RGW
on us-east-1 and us-west-1
# us-east on us-east-1
radosgw-admin user create --uid="us-east" --display-name="US East" --name client.radosgw.us-east-1 \
    --access_key="" --secret="" --system

# us-west on us-west-1
radosgw-admin user create --uid="us-west" --display-name="US West" --name client.radosgw.us-west-1 \
    --access_key="" --secret="" --system

# us-east on us-west-1
radosgw-admin user create --uid="us-east" --display-name="US East" --name client.radosgw.us-west-1 \
    --access_key="" --secret="" --system

# us-west on us-east-1
radosgw-admin user create --uid="us-west" --display-name="US West" --name client.radosgw.us-east-1 \
    --access_key="" --secret="" --system
About the access_key / secret parameters:
1) The us-west user here must keep the same access_key as the one created earlier;
2) Likewise, the us-east user's access_key/secret must match the values used earlier;
Start the RGW services
systemctl start radosgw@radosgw.us-west-1.service
systemctl start radosgw@radosgw.us-east-1.service
on us-east-1
radosgw-admin regions list --name client.radosgw.us-east-1
radosgw-admin regions list --name client.radosgw.us-west-1
radosgw-admin zone list --name client.radosgw.us-east-1
radosgw-admin zone list --name client.radosgw.us-west-1
curl http://us-east-1.cephcookbook.com:7480
curl http://us-west-1.cephcookbook.com:7480
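If both gateways are up, the curl checks above should return an anonymous S3 bucket listing rather than a connection error. The response typically resembles the following (exact formatting and namespace may vary by RGW version, so treat this only as a rough shape to expect):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>anonymous</ID>
    <DisplayName></DisplayName>
  </Owner>
  <Buckets></Buckets>
</ListAllMyBucketsResult>
```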
Step 3: Start the Sync Agent
cluster-data-sync.conf
src_zone: us-east
source: http://us-east-1.cephcookbook.com:7480
src_access_key: XNK0ST8NXTMNZGN29NF9
src_secret_key: 7VUJm8uAp71xKQZkjoP2ZmHu4sACA1SY8jTjay9dP5
dest_zone: us-west
destination: http://us-west-1.cephcookbook.com:7480
dest_access_key: ARAK0ST8NXTMNZGN29NF9
dest_secret_key: ARAJm8uAp71xKQZkjoP2ZmHu4sACA1SY8jTjay9dP5
log_file: /var/log/radosgw/radosgw-sync-us-east-west.log
radosgw-agent -c cluster-data-sync.conf
Verify the sync replication
create two subusers on us-east
# radosgw-admin subuser create --uid="us-east" --subuser="us-east:swift" --access=full --name client.radosgw.us-east-1 --key-type swift --secret="7VUJm8uAp71xKQZkjoP2ZmHu4sACA1SY8jTjay9dP5"
# radosgw-admin subuser create --uid="us-east" --subuser="us-east:swift" --access=full --name client.radosgw.us-west-1 --key-type swift --secret="7VUJm8uAp71xKQZkjoP2ZmHu4sACA1SY8jTjay9dP5"
create two subusers on us-west
# radosgw-admin subuser create --uid="us-west" --subuser="us-west:swift" --access=full --name client.radosgw.us-east-1 --key-type swift --secret="ARAJm8uAp71xKQZkjoP2ZmHu4sACA1SY8jTjay9dP5"
# radosgw-admin subuser create --uid="us-west" --subuser="us-west:swift" --access=full --name client.radosgw.us-west-1 --key-type swift --secret="ARAJm8uAp71xKQZkjoP2ZmHu4sACA1SY8jTjay9dP5"
upload from us-east-1
# export ST_AUTH="http://us-east-1.cephcookbook.com:7480/auth/1.0"
# export ST_KEY=7VUJm8uAp71xKQZkjoP2ZmHu4sACA1SY8jTjay9dP5
# export ST_USER=us-east:swift

List and create some objects from the us-east-1 node:

# swift list
# swift upload container-1 us.json
# swift list
# swift list container-1
check on us-west-1
# export ST_AUTH="http://us-west-1.cephcookbook.com:7480/auth/1.0"
# export ST_KEY=7VUJm8uAp71xKQZkjoP2ZmHu4sACA1SY8jTjay9dP5
# export ST_USER=us-east:swift
# swift list