Ceph Object Gateway Multisite Deployment

Concepts

realm

A realm represents a globally unique namespace made up of one or more zonegroups. Exactly one zonegroup must be designated the master zonegroup.

zonegroup

A zonegroup contains one or more zones, one of which must be designated the master zone. RGW active-active operation takes place between the zones of a single zonegroup: data is kept fully consistent across them, so users can read and write the same data through any zone. Metadata operations, however, such as creating buckets or users, can only be performed in the master zone, whereas data operations, such as creating objects in a bucket or reading objects, can be handled by any zone. A slave zone still accepts bucket and user requests, but it redirects them to the master zone; if the master zone fails, a slave zone can be promoted to master.

zone

A zone defines a logical group of one or more Ceph Object Gateway instances. Each zone is backed by its own Ceph cluster, and multiple zones within one zonegroup provide disaster-recovery capability.

period

A period represents the state of each zonegroup and the configuration of its zones. Every realm has a corresponding period (it represents the current validity window of the realm's configuration; whenever a zonegroup or zone is modified, the period must be updated and committed).

epoch

An epoch is effectively the version of a period: each period has a unique ID and an epoch, and every commit increments the epoch.
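For reference, all of these objects can be inspected at any time with radosgw-admin; for example:

# List the configured entities and show the current period (including its epoch)
radosgw-admin realm list
radosgw-admin zonegroup list
radosgw-admin zone list
radosgw-admin period get   # prints the current period, including its "epoch" field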

Active-Active Site Architecture

(Architecture diagram)

Deployment

Environment Preparation

First, deploy two Ceph clusters with ceph-deploy, each running 3 Ceph RGW instances (one per node).

# On all nodes
vim /etc/hosts
10.0.0.11 ceph-1
10.0.0.12 ceph-2
10.0.0.13 ceph-3
10.0.0.14 ceph-4
10.0.0.15 ceph-5
10.0.0.16 ceph-6

Note: the steps below assume every host is reachable by its ceph-* hostname. If that is not the case in your environment, use reachable IP addresses in the endpoints settings below.
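As a quick sanity check (assuming curl is installed), the RGW endpoint on every node can be probed from any host:

# Print the HTTP status code returned by each RGW endpoint
for h in ceph-1 ceph-2 ceph-3 ceph-4 ceph-5 ceph-6; do
    curl -s -o /dev/null -w "$h: %{http_code}\n" "http://$h:7480"
done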

The first cluster consists of ceph-1, ceph-2, and ceph-3:

root@ceph-1:/opt/ceph-cluster# ceph -s
  cluster:
    id:     f240c0f6-a1d0-46f5-94e2-c02bf47af456
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-2,ceph-1,ceph-3 (age 2h)
    mgr: ceph-1(active, since 117m), standbys: ceph-2, ceph-3
    osd: 3 osds: 3 up (since 118m), 3 in (since 118m)
    rgw: 3 daemons active (ceph-1, ceph-2, ceph-3)

  data:
    pools:   4 pools, 128 pgs
    objects: 219 objects, 1.2 KiB
    usage:   3.0 GiB used, 297 GiB / 300 GiB avail
    pgs:     128 active+clean

The second cluster consists of ceph-4, ceph-5, and ceph-6:

root@ceph-4:/opt/ceph-cluster# ceph -s
  cluster:
    id:     3e74f9b3-f098-4ae6-bc90-cf1f473b3fba
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-6,ceph-4,ceph-5 (age 2h)
    mgr: ceph-5(active, since 119m), standbys: ceph-6, ceph-4
    osd: 3 osds: 3 up (since 119m), 3 in (since 119m)
    rgw: 3 daemons active (ceph-4, ceph-5, ceph-6)

  data:
    pools:   4 pools, 128 pgs
    objects: 219 objects, 1.2 KiB
    usage:   3.0 GiB used, 297 GiB / 300 GiB avail
    pgs:     128 active+clean

Create the realm

# Run on ceph-1
radosgw-admin realm create --rgw-realm=mye --default
{
    "id": "f7d5d1c3-ebbf-4bf4-86d6-98e4d9a666bd",
    "name": "mye",
    "current_period": "dbb84feb-b945-434c-a6cd-a5525e0bd472",
    "epoch": 1
}

Create the master zonegroup

For backward compatibility, Ceph ships with a default zonegroup, which must first be deleted manually:

# Run on ceph-1
radosgw-admin zonegroup delete --rgw-zonegroup=default
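Depending on the Ceph version, a default zone may also have been created automatically; if present, it can be removed the same way (a hedged sketch; the pool removal is optional cleanup and requires mon_allow_pool_delete to be enabled):

# Run on ceph-1, only if a default zone exists
radosgw-admin zone delete --rgw-zone=default
# Optionally remove the auto-created default pools, e.g.:
# ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it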

Then create the master zonegroup:

# Run on ceph-1
radosgw-admin zonegroup create --rgw-zonegroup=wuhan --endpoints="http://ceph-1:7480,http://ceph-2:7480,http://ceph-3:7480" --master --default
{
    "id": "286fc3b8-a688-4114-87c9-fe9f8c1b4ec2",
    "name": "wuhan",
    "api_name": "wuhan",
    "is_master": "true",
    "endpoints": [
        "http://ceph-1:7480",
        "http://ceph-2:7480",
        "http://ceph-3:7480"
    ],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "",
    "zones": [],
    "placement_targets": [],
    "default_placement": "",
    "realm_id": "f7d5d1c3-ebbf-4bf4-86d6-98e4d9a666bd"
}

Create the master zone

# Run on ceph-1
# Generate a random access key and secret key; all later commands use this same key pair
SYSTEM_ACCESS_KEY=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1)
SYSTEM_SECRET_KEY=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 40 | head -n 1)
# Create the master zone
radosgw-admin zone create --rgw-zonegroup=wuhan --rgw-zone=wuhan1 --endpoints="http://ceph-1:7480,http://ceph-2:7480,http://ceph-3:7480" --access-key=$SYSTEM_ACCESS_KEY --secret=$SYSTEM_SECRET_KEY --default --master
{
    "id": "54cc0049-ad29-4fe2-88a2-e0b0b9cf3a3f",
    "name": "wuhan1",
    "domain_root": "wuhan1.rgw.meta:root",
    "control_pool": "wuhan1.rgw.control",
    "gc_pool": "wuhan1.rgw.log:gc",
    "lc_pool": "wuhan1.rgw.log:lc",
    "log_pool": "wuhan1.rgw.log",
    "intent_log_pool": "wuhan1.rgw.log:intent",
    "usage_log_pool": "wuhan1.rgw.log:usage",
    "reshard_pool": "wuhan1.rgw.log:reshard",
    "user_keys_pool": "wuhan1.rgw.meta:users.keys",
    "user_email_pool": "wuhan1.rgw.meta:users.email",
    "user_swift_pool": "wuhan1.rgw.meta:users.swift",
    "user_uid_pool": "wuhan1.rgw.meta:users.uid",
    "otp_pool": "wuhan1.rgw.otp",
    "system_key": {
        "access_key": "0dU7afcy3SN5bjAr6ame",
        "secret_key": "G5DOC3wmmYD38SDgVtqm4ow2nObORvlKoWyGzFwj"
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "wuhan1.rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "wuhan1.rgw.buckets.data"
                    }
                },
                "data_extra_pool": "wuhan1.rgw.buckets.non-ec",
                "index_type": 0
            }
        }
    ],
    "metadata_heap": "",
    "realm_id": "f7d5d1c3-ebbf-4bf4-86d6-98e4d9a666bd"
}
# Create the system user used to synchronize data between zones
radosgw-admin user create --uid=zone.user --display-name="Zone Synchronization User" --access-key=$SYSTEM_ACCESS_KEY --secret=$SYSTEM_SECRET_KEY --system
{
    "user_id": "zone.user",
    "display_name": "Zone Synchronization User",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "zone.user",
            "access_key": "0dU7afcy3SN5bjAr6ame",
            "secret_key": "G5DOC3wmmYD38SDgVtqm4ow2nObORvlKoWyGzFwj"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "system": "true",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

Commit the period

# Run on ceph-1
radosgw-admin period update --commit
{
    "id": "28d97ad2-af4c-4d08-8572-a7b9a01c6be7",
    "epoch": 1,
    "predecessor_uuid": "dbb84feb-b945-434c-a6cd-a5525e0bd472",
    "sync_status": [],
    "period_map": {
        "id": "28d97ad2-af4c-4d08-8572-a7b9a01c6be7",
        "zonegroups": [
            {
                "id": "286fc3b8-a688-4114-87c9-fe9f8c1b4ec2",
                "name": "wuhan",
                "api_name": "wuhan",
                "is_master": "true",
                "endpoints": [
                    "http://ceph-1:7480",
                    "http://ceph-2:7480",
                    "http://ceph-3:7480"
                ],
                "hostnames": [],
                "hostnames_s3website": [],
                "master_zone": "54cc0049-ad29-4fe2-88a2-e0b0b9cf3a3f",
                "zones": [
                    {
                        "id": "54cc0049-ad29-4fe2-88a2-e0b0b9cf3a3f",
                        "name": "wuhan1",
                        "endpoints": [
                            "http://ceph-1:7480",
                            "http://ceph-2:7480",
                            "http://ceph-3:7480"
                        ],
                        "log_meta": "false",
                        "log_data": "false",
                        "bucket_index_max_shards": 0,
                        "read_only": "false",
                        "tier_type": "",
                        "sync_from_all": "true",
                        "sync_from": [],
                        "redirect_zone": ""
                    }
                ],
                "placement_targets": [
                    {
                        "name": "default-placement",
                        "tags": [],
                        "storage_classes": [
                            "STANDARD"
                        ]
                    }
                ],
                "default_placement": "default-placement",
                "realm_id": "f7d5d1c3-ebbf-4bf4-86d6-98e4d9a666bd"
            }
        ],
        "short_zone_ids": [
            {
                "key": "54cc0049-ad29-4fe2-88a2-e0b0b9cf3a3f",
                "val": 1541315104
            }
        ]
    },
    "master_zonegroup": "286fc3b8-a688-4114-87c9-fe9f8c1b4ec2",
    "master_zone": "54cc0049-ad29-4fe2-88a2-e0b0b9cf3a3f",
    "period_config": {
        "bucket_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        }
    },
    "realm_id": "f7d5d1c3-ebbf-4bf4-86d6-98e4d9a666bd",
    "realm_name": "mye",
    "realm_epoch": 2
}

Modify the Ceph configuration and restart RGW

# Run on ceph-1, ceph-2, and ceph-3
vim /etc/ceph/ceph.conf
[client.rgw.ceph-1]
rgw_frontends="civetweb port=7480"
rgw_zone=wuhan1
[client.rgw.ceph-2]
rgw_frontends="civetweb port=7480"
rgw_zone=wuhan1
[client.rgw.ceph-3]
rgw_frontends="civetweb port=7480"
rgw_zone=wuhan1
systemctl restart ceph-radosgw@rgw.`hostname -s`
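Once the gateways are back up, a quick check (again assuming curl) that each instance answers; an anonymous request against RGW returns a ListAllMyBucketsResult XML document:

# Repeat for ceph-2 and ceph-3
curl http://ceph-1:7480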

Configure the second Ceph cluster

# Run on ceph-4
# Pull the realm from the master zone
SYSTEM_ACCESS_KEY=0dU7afcy3SN5bjAr6ame
SYSTEM_SECRET_KEY=G5DOC3wmmYD38SDgVtqm4ow2nObORvlKoWyGzFwj
radosgw-admin realm pull --url=http://ceph-1:7480 --access-key=$SYSTEM_ACCESS_KEY --secret=$SYSTEM_SECRET_KEY
# Pull the period
radosgw-admin period pull --url=http://ceph-1:7480 --access-key=$SYSTEM_ACCESS_KEY --secret=$SYSTEM_SECRET_KEY
# Create the slave zone
radosgw-admin zone create --rgw-zonegroup=wuhan --rgw-zone=wuhan2 --access-key=$SYSTEM_ACCESS_KEY --secret=$SYSTEM_SECRET_KEY --endpoints="http://ceph-4:7480,http://ceph-5:7480,http://ceph-6:7480" --default
# Commit the period; if the commit fails, try restarting the RGW instances on ceph-4, ceph-5, and ceph-6
radosgw-admin period update --commit --rgw-zone=wuhan2
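Before editing the configuration files, it is worth confirming that the pulled realm and the committed period are visible on this cluster; one possible check:

radosgw-admin realm list                            # should now include "mye"
radosgw-admin zonegroup get --rgw-zonegroup=wuhan   # should list both wuhan1 and wuhan2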

# Run on ceph-4, ceph-5, and ceph-6
vim /etc/ceph/ceph.conf
[client.rgw.ceph-4]
rgw_frontends="civetweb port=7480"
rgw_zone=wuhan2
[client.rgw.ceph-5]
rgw_frontends="civetweb port=7480"
rgw_zone=wuhan2
[client.rgw.ceph-6]
rgw_frontends="civetweb port=7480"
rgw_zone=wuhan2
systemctl restart ceph-radosgw@rgw.`hostname -s`

# Check the synchronization status on each node
radosgw-admin sync status
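Beyond the overall summary, recent radosgw-admin versions also expose per-type status; two optional checks (most meaningful on the slave cluster):

radosgw-admin metadata sync status
radosgw-admin data sync status --source-zone=wuhan1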

Verification

Create a user on ceph-1:

radosgw-admin user create --uid=test --display-name="test"
radosgw-admin caps add --uid=test --caps="users=read,write;usage=read,write;buckets=read,write;metadata=read,write"

Check on ceph-4 whether the user exists:

radosgw-admin user list
[
    "zone.user",
    "test"
]
radosgw-admin user info --uid=test

The user exists, which shows that the metadata was successfully synchronized from the master zone to the slave zone.

Testing confirms that users can only be created in the master zone, not in a slave zone.

Buckets can be created in either the master zone or the slave zone; no matter where a bucket is created, its data is synchronized between the zones.
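A minimal end-to-end check of this, sketched with the AWS CLI (assumed installed; the bucket name sync-test is arbitrary, and the credentials are the ones printed by radosgw-admin user info --uid=test):

export AWS_ACCESS_KEY_ID=<access key of the test user>
export AWS_SECRET_ACCESS_KEY=<secret key of the test user>
# Create a bucket and upload an object through the slave zone
aws --endpoint-url http://ceph-4:7480 s3 mb s3://sync-test
echo hello > /tmp/hello.txt
aws --endpoint-url http://ceph-4:7480 s3 cp /tmp/hello.txt s3://sync-test/
# After a short replication delay, the object should be readable through the master zone
aws --endpoint-url http://ceph-1:7480 s3 ls s3://sync-test/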

Management Commands

# List zones
radosgw-admin zone list
# Delete a zone
radosgw-admin zone delete --rgw-zone=wuhan1
# List zonegroups
radosgw-admin zonegroup list
# Delete a zonegroup
radosgw-admin zonegroup delete --rgw-zonegroup=wuhan
# List realms
radosgw-admin realm list
# Delete a realm
radosgw-admin realm delete --rgw-realm=mye
# Check synchronization status
radosgw-admin sync status
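The corresponding get subcommands dump the full JSON of a single entity, which is useful when debugging:

# Inspect a single entity in detail
radosgw-admin zone get --rgw-zone=wuhan1
radosgw-admin zonegroup get --rgw-zonegroup=wuhan
radosgw-admin period get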

