Ceph Object Gateway Multisite Deployment

Concepts

realm

A realm represents a globally unique namespace made up of one or more zonegroups; exactly one zonegroup must be designated as the master zonegroup.

zonegroup

A zonegroup contains one or more zones, and one zone must be designated as the master zone. RGW active-active operation takes place between the zones of a single zonegroup: the data in all zones of a zonegroup is kept fully consistent, so users can read and write the same data through any zone. Metadata operations, however, such as creating buckets or users, can only be performed on the master zone. Data operations, such as creating objects in a bucket or accessing objects, can be handled by any zone. A slave zone can accept bucket and user operation requests and redirect them to the master zone; if the master zone fails, a slave zone can be promoted to master.

zone

A zone defines a logical group made up of one or more Ceph Object Gateway instances. Each zone is backed by its own Ceph cluster, and multiple zones within a zonegroup provide disaster-recovery capability.

period

A period represents the state of each zonegroup and the configuration of its zones. Every realm has an associated period (it represents the current configuration generation of the realm; whenever a zonegroup or zone is modified, the period must be updated and committed).

epoch

An epoch is effectively the version number of a period: each period carries a unique id and an epoch, and every commit operation increments the epoch.
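
Both values can be inspected at any time with standard radosgw-admin subcommands, for example:

# Show the current period of the default realm, including its id and epoch
radosgw-admin period get
# List the ids of all periods known to the cluster
radosgw-admin period list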

Active-Active Site Architecture

(architecture diagram not reproduced here)

Deployment

Environment Preparation

First, use ceph-deploy to set up two Ceph clusters, each running 3 Ceph RGW instances.

# On all nodes
vim /etc/hosts
10.0.0.11 ceph-1
10.0.0.12 ceph-2
10.0.0.13 ceph-3
10.0.0.14 ceph-4
10.0.0.15 ceph-5
10.0.0.16 ceph-6

Note: the steps below assume every host is reachable by its ceph-N hostname. If that is not the case in your environment, substitute reachable IP addresses in the endpoints settings below.
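
Before continuing, it is worth confirming that every RGW endpoint is reachable; an anonymous request to the RGW root typically returns a short ListAllMyBucketsResult XML document. A minimal check, assuming curl is available:

for h in ceph-1 ceph-2 ceph-3 ceph-4 ceph-5 ceph-6; do
    echo "== $h =="
    curl -s "http://$h:7480" | head -c 200; echo
done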

The first cluster consists of ceph-1, ceph-2, and ceph-3:

root@ceph-1:/opt/ceph-cluster# ceph -s
  cluster:
    id:     f240c0f6-a1d0-46f5-94e2-c02bf47af456
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-2,ceph-1,ceph-3 (age 2h)
    mgr: ceph-1(active, since 117m), standbys: ceph-2, ceph-3
    osd: 3 osds: 3 up (since 118m), 3 in (since 118m)
    rgw: 3 daemons active (ceph-1, ceph-2, ceph-3)

  data:
    pools:   4 pools, 128 pgs
    objects: 219 objects, 1.2 KiB
    usage:   3.0 GiB used, 297 GiB / 300 GiB avail
    pgs:     128 active+clean

The second cluster consists of ceph-4, ceph-5, and ceph-6:

root@ceph-4:/opt/ceph-cluster# ceph -s
  cluster:
    id:     3e74f9b3-f098-4ae6-bc90-cf1f473b3fba
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-6,ceph-4,ceph-5 (age 2h)
    mgr: ceph-5(active, since 119m), standbys: ceph-6, ceph-4
    osd: 3 osds: 3 up (since 119m), 3 in (since 119m)
    rgw: 3 daemons active (ceph-4, ceph-5, ceph-6)

  data:
    pools:   4 pools, 128 pgs
    objects: 219 objects, 1.2 KiB
    usage:   3.0 GiB used, 297 GiB / 300 GiB avail
    pgs:     128 active+clean

Create the realm

# Run on ceph-1
radosgw-admin realm create --rgw-realm=mye --default
{
    "id": "f7d5d1c3-ebbf-4bf4-86d6-98e4d9a666bd",
    "name": "mye",
    "current_period": "dbb84feb-b945-434c-a6cd-a5525e0bd472",
    "epoch": 1
}

Create the master zonegroup

For backward compatibility, Ceph creates a default zonegroup, which has to be deleted manually first:

# Run on ceph-1
radosgw-admin zonegroup delete --rgw-zonegroup=default
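
Depending on the Ceph release, a default zone may exist alongside the default zonegroup. If radosgw-admin zone list shows one, it should be removed the same way; a hedged sketch:

# Run on ceph-1, only needed if "radosgw-admin zone list" shows a zone named default
radosgw-admin zone delete --rgw-zone=default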

Now create the master zonegroup:

# Run on ceph-1
radosgw-admin zonegroup create --rgw-zonegroup=wuhan --endpoints="http://ceph-1:7480,http://ceph-2:7480,http://ceph-3:7480" --master --default
{
    "id": "286fc3b8-a688-4114-87c9-fe9f8c1b4ec2",
    "name": "wuhan",
    "api_name": "wuhan",
    "is_master": "true",
    "endpoints": [
        "http://ceph-1:7480",
        "http://ceph-2:7480",
        "http://ceph-3:7480"
    ],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "",
    "zones": [],
    "placement_targets": [],
    "default_placement": "",
    "realm_id": "f7d5d1c3-ebbf-4bf4-86d6-98e4d9a666bd"
}

Create the master zone

# Run on ceph-1
# Generate a random access key and secret key; all subsequent commands use these two keys
SYSTEM_ACCESS_KEY=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1)
SYSTEM_SECRET_KEY=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 40 | head -n 1)
# Create the master zone
radosgw-admin zone create --rgw-zonegroup=wuhan --rgw-zone=wuhan1 --endpoints="http://ceph-1:7480,http://ceph-2:7480,http://ceph-3:7480" --access-key=$SYSTEM_ACCESS_KEY --secret=$SYSTEM_SECRET_KEY --default --master
{
    "id": "54cc0049-ad29-4fe2-88a2-e0b0b9cf3a3f",
    "name": "wuhan1",
    "domain_root": "wuhan1.rgw.meta:root",
    "control_pool": "wuhan1.rgw.control",
    "gc_pool": "wuhan1.rgw.log:gc",
    "lc_pool": "wuhan1.rgw.log:lc",
    "log_pool": "wuhan1.rgw.log",
    "intent_log_pool": "wuhan1.rgw.log:intent",
    "usage_log_pool": "wuhan1.rgw.log:usage",
    "reshard_pool": "wuhan1.rgw.log:reshard",
    "user_keys_pool": "wuhan1.rgw.meta:users.keys",
    "user_email_pool": "wuhan1.rgw.meta:users.email",
    "user_swift_pool": "wuhan1.rgw.meta:users.swift",
    "user_uid_pool": "wuhan1.rgw.meta:users.uid",
    "otp_pool": "wuhan1.rgw.otp",
    "system_key": {
        "access_key": "0dU7afcy3SN5bjAr6ame",
        "secret_key": "G5DOC3wmmYD38SDgVtqm4ow2nObORvlKoWyGzFwj"
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "wuhan1.rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "wuhan1.rgw.buckets.data"
                    }
                },
                "data_extra_pool": "wuhan1.rgw.buckets.non-ec",
                "index_type": 0
            }
        }
    ],
    "metadata_heap": "",
    "realm_id": "f7d5d1c3-ebbf-4bf4-86d6-98e4d9a666bd"
}
# Create the system user used to synchronize data between zones
radosgw-admin user create --uid=zone.user --display-name="Zone Synchronization User" --access-key=$SYSTEM_ACCESS_KEY --secret=$SYSTEM_SECRET_KEY --system
{
    "user_id": "zone.user",
    "display_name": "Zone Synchronization User",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "zone.user",
            "access_key": "0dU7afcy3SN5bjAr6ame",
            "secret_key": "G5DOC3wmmYD38SDgVtqm4ow2nObORvlKoWyGzFwj"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "system": "true",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
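
The keys attached to the zone must match this system user's keys. If they ever diverge (for example after rotating the user's keys), they can be re-attached to the zone and the period recommitted; a hedged sketch:

# Run on ceph-1
radosgw-admin zone modify --rgw-zone=wuhan1 --access-key=$SYSTEM_ACCESS_KEY --secret=$SYSTEM_SECRET_KEY
radosgw-admin period update --commit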

Commit the period

# Run on ceph-1
radosgw-admin period update --commit
{
    "id": "28d97ad2-af4c-4d08-8572-a7b9a01c6be7",
    "epoch": 1,
    "predecessor_uuid": "dbb84feb-b945-434c-a6cd-a5525e0bd472",
    "sync_status": [],
    "period_map": {
        "id": "28d97ad2-af4c-4d08-8572-a7b9a01c6be7",
        "zonegroups": [
            {
                "id": "286fc3b8-a688-4114-87c9-fe9f8c1b4ec2",
                "name": "wuhan",
                "api_name": "wuhan",
                "is_master": "true",
                "endpoints": [
                    "http://ceph-1:7480",
                    "http://ceph-2:7480",
                    "http://ceph-3:7480"
                ],
                "hostnames": [],
                "hostnames_s3website": [],
                "master_zone": "54cc0049-ad29-4fe2-88a2-e0b0b9cf3a3f",
                "zones": [
                    {
                        "id": "54cc0049-ad29-4fe2-88a2-e0b0b9cf3a3f",
                        "name": "wuhan1",
                        "endpoints": [
                            "http://ceph-1:7480",
                            "http://ceph-2:7480",
                            "http://ceph-3:7480"
                        ],
                        "log_meta": "false",
                        "log_data": "false",
                        "bucket_index_max_shards": 0,
                        "read_only": "false",
                        "tier_type": "",
                        "sync_from_all": "true",
                        "sync_from": [],
                        "redirect_zone": ""
                    }
                ],
                "placement_targets": [
                    {
                        "name": "default-placement",
                        "tags": [],
                        "storage_classes": [
                            "STANDARD"
                        ]
                    }
                ],
                "default_placement": "default-placement",
                "realm_id": "f7d5d1c3-ebbf-4bf4-86d6-98e4d9a666bd"
            }
        ],
        "short_zone_ids": [
            {
                "key": "54cc0049-ad29-4fe2-88a2-e0b0b9cf3a3f",
                "val": 1541315104
            }
        ]
    },
    "master_zonegroup": "286fc3b8-a688-4114-87c9-fe9f8c1b4ec2",
    "master_zone": "54cc0049-ad29-4fe2-88a2-e0b0b9cf3a3f",
    "period_config": {
        "bucket_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        }
    },
    "realm_id": "f7d5d1c3-ebbf-4bf4-86d6-98e4d9a666bd",
    "realm_name": "mye",
    "realm_epoch": 2
}

Modify the Ceph configuration files and restart the RGW instances

# Run on ceph-1, ceph-2, and ceph-3
vim /etc/ceph/ceph.conf
[client.rgw.ceph-1]
rgw_frontends="civetweb port=7480"
rgw_zone=wuhan1
[client.rgw.ceph-2]
rgw_frontends="civetweb port=7480"
rgw_zone=wuhan1
[client.rgw.ceph-3]
rgw_frontends="civetweb port=7480"
rgw_zone=wuhan1
systemctl restart ceph-radosgw@rgw.`hostname -s`
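
The restart has to happen on every node of the cluster. With passwordless SSH from the admin node (an assumption about your environment), a small loop avoids repeating the command by hand:

for h in ceph-1 ceph-2 ceph-3; do
    ssh "$h" 'systemctl restart ceph-radosgw@rgw.$(hostname -s)'
done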

Configure the second Ceph cluster

# Run on ceph-4
# Pull the realm from the master zone
SYSTEM_ACCESS_KEY=0dU7afcy3SN5bjAr6ame
SYSTEM_SECRET_KEY=G5DOC3wmmYD38SDgVtqm4ow2nObORvlKoWyGzFwj
radosgw-admin realm pull --url=http://ceph-1:7480 --access-key=$SYSTEM_ACCESS_KEY --secret=$SYSTEM_SECRET_KEY
# Pull the period
radosgw-admin period pull --url=http://ceph-1:7480 --access-key=$SYSTEM_ACCESS_KEY --secret=$SYSTEM_SECRET_KEY
# Create the slave zone
radosgw-admin zone create --rgw-zonegroup=wuhan --rgw-zone=wuhan2 --access-key=$SYSTEM_ACCESS_KEY --secret=$SYSTEM_SECRET_KEY --endpoints="http://ceph-4:7480,http://ceph-5:7480,http://ceph-6:7480" --default
# Commit the period; if the commit fails, try restarting the RGW instances on ceph-4, ceph-5, and ceph-6
radosgw-admin period update --commit --rgw-zone=wuhan2
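
Optionally, if the second site should only serve as a disaster-recovery copy rather than accept client writes, the Ceph multisite documentation describes marking the slave zone read-only; a sketch:

radosgw-admin zone modify --rgw-zone=wuhan2 --read-only
radosgw-admin period update --commit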

# Run on ceph-4, ceph-5, and ceph-6
vim /etc/ceph/ceph.conf
[client.rgw.ceph-4]
rgw_frontends="civetweb port=7480"
rgw_zone=wuhan2
[client.rgw.ceph-5]
rgw_frontends="civetweb port=7480"
rgw_zone=wuhan2
[client.rgw.ceph-6]
rgw_frontends="civetweb port=7480"
rgw_zone=wuhan2
systemctl restart ceph-radosgw@rgw.`hostname -s`

# Check the synchronization status on each node
radosgw-admin sync status
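
A healthy deployment typically reports lines such as "metadata is caught up with master" (on the slave site) and "data is caught up with source". To compare the two sites in one go, a small loop works, again assuming SSH access from the admin node:

for h in ceph-1 ceph-4; do
    echo "== $h =="
    ssh "$h" radosgw-admin sync status
done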

Verification

Create a user on ceph-1:

radosgw-admin user create --uid=test --display-name="test"
radosgw-admin caps add --uid=test --caps="users=read,write;usage=read,write;buckets=read,write;metadata=read,write"

Check on ceph-4 whether the user exists:

radosgw-admin user list
[
    "zone.user",
    "test"
]
radosgw-admin user info --uid=test

The user exists, showing that it was successfully synchronized from the master zone to the slave zone.

Testing shows that users can only be created on the master zone, not on the slave zone.

Buckets can be created on either the master zone or the slave zone; wherever they are created, the data is synchronized, as the sketch below demonstrates.
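
To exercise data synchronization end to end, create a bucket and an object through the slave zone and read them back through the master zone. A sketch using the AWS CLI (an assumption; any S3 client works), with the bucket name sync-test chosen for illustration and credentials taken from radosgw-admin user info --uid=test:

# Credentials of the "test" user, as printed by: radosgw-admin user info --uid=test
export AWS_ACCESS_KEY_ID=<access_key>
export AWS_SECRET_ACCESS_KEY=<secret_key>

# Create a bucket and upload an object through the slave zone (wuhan2)
aws --endpoint-url http://ceph-4:7480 s3 mb s3://sync-test
echo hello > /tmp/hello.txt
aws --endpoint-url http://ceph-4:7480 s3 cp /tmp/hello.txt s3://sync-test/

# Once replication catches up, the object should be visible through the master zone (wuhan1)
aws --endpoint-url http://ceph-1:7480 s3 ls s3://sync-test/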

Management Commands

# List zones
radosgw-admin zone list
# Delete a zone
radosgw-admin zone delete --rgw-zone=wuhan1
# List zonegroups
radosgw-admin zonegroup list
# Delete a zonegroup
radosgw-admin zonegroup delete --rgw-zonegroup=wuhan
# List realms
radosgw-admin realm list
# Delete a realm
radosgw-admin realm delete --rgw-realm=mye
# Check synchronization status
radosgw-admin sync status

