Deploying a Highly Available RocketMQ Cluster with KubeSphere

Author: Lao Z (老Z), cloud-native enthusiast, currently focused on cloud-native operations, and KubeSphere Ambassador.

RocketMQ, part of the Spring Cloud Alibaba family, is a typical message-oriented middleware for distributed architectures, using asynchronous communication and a publish/subscribe message delivery model.

Many projects built on Spring Cloud use RocketMQ as their message middleware.

RocketMQ's common deployment modes are:

  • Single Master mode
  • Multi Master mode without Slaves
  • Multi Master Multi Slave mode - asynchronous replication
  • Multi Master Multi Slave mode - synchronous dual write

For more details on the deployment options, refer to the official documentation.

This article focuses on deploying the single-Master mode and the multi-Master multi-Slave (asynchronous replication) mode on a K8s cluster.

Single Master Mode

This deployment carries significant risk: only one NameServer and one Broker are deployed, so once the Broker restarts or goes down, the entire service becomes unavailable. It is not recommended for production and should only be used for development and testing.

The deployment follows the images, startup method, and customized configuration of the containerized deployment used in the official rocketmq-docker project.

Multi Master Multi Slave - Asynchronous Replication Mode

Each Master is paired with one Slave, with multiple Master-Slave pairs in the cluster. HA uses asynchronous replication, with a brief (millisecond-level) message lag between master and standby. The pros and cons of this mode are:

  • Pros: even if a disk fails, very few messages are lost, and message real-time delivery is unaffected. When a Master goes down, consumers can still consume from the Slave, transparently to the application and without manual intervention; performance is nearly identical to the multi-Master mode.
  • Cons: if a Master goes down and its disk is damaged, a small number of messages may be lost.

The multi-Master multi-Slave asynchronous replication mode is suitable for production; the deployment uses the official RocketMQ Operator.

Building Offline Images

This step is optional; it applies to offline intranet environments. If you do not set up intranet images, keep the default values for the container image parameters in the resource manifests that follow.

This article covers how to build the offline images used to deploy RocketMQ in both the single-Master mode and the multi-Master multi-Slave (asynchronous replication) mode.

  • The single-Master mode directly uses the images from the containerized deployment described in the official RocketMQ documentation.

  • The offline images for the multi-Master multi-Slave (asynchronous replication) mode are built and packaged with the image-building tools that ship with the RocketMQ Operator. During the build, many packages must be downloaded from servers abroad; with restricted access the default success rate is low, so expect multiple attempts or resort to the usual workarounds.

    Alternatively, you can pull the existing images from Docker Hub the traditional way and then push them to a private registry.

Perform the operations below on a server that can reach both the internet and the intranet Harbor registry.

Create Projects in Harbor

I like to keep the namespaces of intranet offline images consistent with the applications' default image namespaces, so create two projects in Harbor: apache and apacherocketmq. You can create them manually in the Harbor management UI, or automate it with the commands below.

curl -u "admin:Harbor12345" -X POST -H "Content-Type: application/json" https://registry.zdevops.com.cn/api/v2.0/projects -d '{ "project_name": "apache", "public": true}'
curl -u "admin:Harbor12345" -X POST -H "Content-Type: application/json" https://registry.zdevops.com.cn/api/v2.0/projects -d '{ "project_name": "apacherocketmq", "public": true}'

Install Go 1.16

Building the custom RocketMQ Operator images requires a Go environment, which must be installed and configured first.

Download the latest release in the Go 1.16 series:

cd /opt/
wget https://golang.google.cn/dl/go1.16.15.linux-amd64.tar.gz

Extract the archive to the target directory:

tar zxvf go1.16.15.linux-amd64.tar.gz -C /usr/local/

Configure the environment variables:

cat >> /etc/profile.d/go.sh << 'EOF'
# go environment
export GOROOT=/usr/local/go
export GOPATH=/srv/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
EOF

Note: the heredoc delimiter is quoted ('EOF') so that $PATH, $GOROOT, and $GOPATH are written literally instead of being expanded before they are defined. GOPATH is the working directory where code lives; set it according to your own preferences.
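
If the working directory does not exist yet, create it (using the GOPATH configured above):

mkdir -p /srv/go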

Configure Go:

go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
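
The values just written can be checked immediately:

# Print the two settings to confirm they took effect
go env GO111MODULE GOPROXY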

Verify:

source /etc/profile.d/go.sh
go version

Get the RocketMQ Operator

Fetch the rocketmq-operator code from the official Apache GitHub repository:

cd /srv
git clone -b 0.3.0 https://github.com/apache/rocketmq-operator.git

Build the RocketMQ Operator Image

Modify the Dockerfile:

cd /srv/rocketmq-operator
vi Dockerfile

Notice: the image build needs access to software sources and image registries hosted abroad, which can be unreliable from within China, so you can switch to domestic sources and mirrors in advance.

This step is optional; it is unnecessary if your access is unrestricted.

Required changes:

# Line 10 (switch the Go module proxy to a domestic address to speed up access)
# Before
RUN go mod download

# After
RUN go env -w GOPROXY=https://goproxy.cn,direct && go mod download

# Line 25 (switch the package source to a domestic mirror)
# Before
RUN apk add --no-cache bash gettext nmap-ncat openssl busybox-extras

# After
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g' /etc/apk/repositories && \
    apk add --no-cache bash gettext nmap-ncat openssl busybox-extras

Optional changes:

# The default ROCKETMQ_VERSION is 4.9.4; change it to pin a specific version
# Line 28: adjust 4.9.4 as needed
ENV ROCKETMQ_VERSION 4.9.4
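
If you prefer to script this edit instead of changing the Dockerfile by hand, a sed one-liner along these lines works (the target version 4.9.3 is only an example):

# Pin a different RocketMQ version in the Dockerfile (4.9.3 is an example value)
sed -i 's/^ENV ROCKETMQ_VERSION 4.9.4/ENV ROCKETMQ_VERSION 4.9.3/' Dockerfile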

Build the image:

yum install gcc
cd /srv/rocketmq-operator
go mod tidy
IMAGE_URL=registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0
make docker-build IMG=${IMAGE_URL}

Verify that the image built successfully:

docker images | grep rocketmq-operator

Push the image:

make docker-push IMG=${IMAGE_URL}

Clean up the temporary image:

docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0

Build the RocketMQ Broker Image

Modify the Dockerfile (optional):

cd /srv/rocketmq-operator/images/broker/alpine
vi Dockerfile

This step is optional; it mainly speeds up package installation and is unnecessary if your access is unrestricted.

# Line 20 (switch the package source to a domestic mirror)
# Before
RUN apk add --no-cache bash gettext nmap-ncat openssl busybox-extras

# After
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g' /etc/apk/repositories && \
    apk add --no-cache bash gettext nmap-ncat openssl busybox-extras

Modify the image build script:

# Change the image registry address to the intranet address

sed -i 's#apacherocketmq#registry.zdevops.com.cn/apacherocketmq#g' build-broker-image.sh

Build and push the image:

./build-broker-image.sh 4.9.4

Verify that the image built successfully:

docker images | grep rocketmq-broker

Clean up the temporary image:

docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0

Build the RocketMQ Name Server Image

Modify the Dockerfile (optional):

cd /srv/rocketmq-operator/images/namesrv/alpine
vi Dockerfile

This step is optional; it mainly speeds up package installation and is unnecessary if your access is unrestricted.

# Line 20 (switch the package source to a domestic mirror)
# Before
RUN apk add --no-cache bash gettext nmap-ncat openssl busybox-extras

# After
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g' /etc/apk/repositories && \
    apk add --no-cache bash gettext nmap-ncat openssl busybox-extras

Modify the image build script:

# Change the image registry address to the intranet address

sed -i 's#apacherocketmq#registry.zdevops.com.cn/apacherocketmq#g' build-namesrv-image.sh

Build and push the image:

./build-namesrv-image.sh 4.9.4

Verify that the image built successfully:

docker images | grep rocketmq-nameserver

Clean up the temporary image:

docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.9.4-alpine-operator-0.3.0

Building Offline Images from Existing Official Images

The offline image build process above for the multi-Master multi-Slave (asynchronous replication) deployment is better suited to scenarios with local customization. If you simply want to download the existing official images unmodified and push them to a local registry, use the approach below.

Download the images:

docker pull apache/rocketmq-operator:0.3.0
docker pull apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0
docker pull apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0

Notice: the latest images in the official repository are 4.5.0, published two years ago.

Re-tag the images:

docker tag apache/rocketmq-operator:0.3.0 registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0
docker tag apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0 registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0
docker tag apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0 registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0

Push to the private registry:

docker push registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0
docker push registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0
docker push registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0

Clean up the temporary images:

docker rmi apache/rocketmq-operator:0.3.0
docker rmi apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0
docker rmi apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0
docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0
docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0
docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0

Build the RocketMQ Console Image

This article pulls the official image directly as the local offline image. If you need to modify it and rebuild, refer to the official Dockerfile used by RocketMQ Console and build it yourself.

Download the image:

docker pull apacherocketmq/rocketmq-console:2.0.0

Re-tag the image:

docker tag apacherocketmq/rocketmq-console:2.0.0 registry.zdevops.com.cn/apacherocketmq/rocketmq-console:2.0.0

Push to the private registry:

docker push registry.zdevops.com.cn/apacherocketmq/rocketmq-console:2.0.0

Clean up the temporary images:

docker rmi apacherocketmq/rocketmq-console:2.0.0
docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-console:2.0.0

Prepare the Offline Images for the Single-Master RocketMQ Deployment

The images used by the single-Master RocketMQ deployment differ from those used by the RocketMQ Operator in the cluster deployment. To build the offline images, pull directly from the official registry, re-tag, and push to the local registry.

The specific differences are:

  • The single-Master option uses images from the apache namespace on Docker Hub, and the image name does not distinguish nameserver from broker; the RocketMQ Operator uses images from the apacherocketmq namespace, with separate nameserver and broker images.

  • The management tool images also differ: the single-Master option uses the rocketmq-dashboard image from the apacherocketmq namespace, while the RocketMQ Operator uses the rocketmq-console image from the apacherocketmq namespace.

The offline image build process is as follows.

Download the images:

docker pull apache/rocketmq:4.9.4
docker pull apacherocketmq/rocketmq-dashboard:1.0.0

Re-tag the images:

docker tag apache/rocketmq:4.9.4 registry.zdevops.com.cn/apache/rocketmq:4.9.4
docker tag apacherocketmq/rocketmq-dashboard:1.0.0 registry.zdevops.com.cn/apacherocketmq/rocketmq-dashboard:1.0.0

Push to the private registry:

docker push registry.zdevops.com.cn/apache/rocketmq:4.9.4
docker push registry.zdevops.com.cn/apacherocketmq/rocketmq-dashboard:1.0.0

Clean up the temporary images:

docker rmi apache/rocketmq:4.9.4
docker rmi apacherocketmq/rocketmq-dashboard:1.0.0
docker rmi registry.zdevops.com.cn/apache/rocketmq:4.9.4
docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-dashboard:1.0.0
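
The pull/tag/push/rmi cycle above is identical for every image. When mirroring more than a couple of images, a small helper loop can automate it; this is a sketch, with the image list and registry address as assumptions to adjust for your environment:

#!/bin/bash
# Mirror public images into a private registry: pull, re-tag, push, clean up
REGISTRY="registry.zdevops.com.cn"
IMAGES="apache/rocketmq:4.9.4 apacherocketmq/rocketmq-dashboard:1.0.0"

for img in ${IMAGES}; do
  docker pull "${img}"
  docker tag "${img}" "${REGISTRY}/${img}"
  docker push "${REGISTRY}/${img}"
  docker rmi "${img}" "${REGISTRY}/${img}"
done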

Single Master Mode Deployment

Planning

Based on the components the RocketMQ service uses, the following resources need to be deployed:

  • Broker StatefulSet
  • NameServer StatefulSet
  • NameServer Cluster Service: internal service
  • Dashboard Deployment
  • Dashboard External Service: external access for Dashboard management
  • ConfigMap: custom Broker configuration file

Resource Manifests

Following the containerized startup example configurations in the Apache rocketmq-docker project on GitHub, write resource manifests suited to K8s.

Notice: skills, habits, and environments differ from person to person. What follows is just one simple approach I use, not necessarily the best one; write configurations that fit your actual situation.

rocketmq-cm.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: rocketmq-broker-config
  namespace: zdevops
data:
  BROKER_MEM: ' -Xms2g -Xmx2g -Xmn1g '
  broker-common.conf: |-
    brokerClusterName = DefaultCluster
    brokerName = broker-0
    brokerId = 0
    deleteWhen = 04
    fileReservedTime = 48
    brokerRole = ASYNC_MASTER
    flushDiskType = ASYNC_FLUSH

rocketmq-name-service-sts.yaml

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: rocketmq-name-service
  namespace: zdevops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rocketmq-name-service
      name_service_cr: rocketmq-name-service
  template:
    metadata:
      labels:
        app: rocketmq-name-service
        name_service_cr: rocketmq-name-service
    spec:
      volumes:
        - name: host-time
          hostPath:
            path: /etc/localtime
            type: ''
      containers:
        - name: rocketmq-name-service
          image: 'registry.zdevops.com.cn/apache/rocketmq:4.9.4'
          command:
            - /bin/sh
          args:
            - mqnamesrv
          ports:
            - name: tcp-9876
              containerPort: 9876
              protocol: TCP
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 250m
              memory: 512Mi
          volumeMounts:
            - name: rocketmq-namesrv-storage
              mountPath: /home/rocketmq/logs
              subPath: logs
            - name: host-time
              readOnly: true
              mountPath: /etc/localtime
          imagePullPolicy: Always
  volumeClaimTemplates:
    - kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: rocketmq-namesrv-storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: glusterfs
        volumeMode: Filesystem
  serviceName: ''

---
kind: Service
apiVersion: v1
metadata:
  name: rocketmq-name-server-service
  namespace: zdevops
spec:
  ports:
    - name: tcp-9876
      protocol: TCP
      port: 9876
      targetPort: 9876
  selector:
    name_service_cr: rocketmq-name-service
  type: ClusterIP

rocketmq-broker-sts.yaml

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: rocketmq-broker-0-master
  namespace: zdevops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rocketmq-broker
      broker_cr: rocketmq-broker
  template:
    metadata:
      labels:
        app: rocketmq-broker
        broker_cr: rocketmq-broker
    spec:
      volumes:
        - name: rocketmq-broker-config
          configMap:
            name: rocketmq-broker-config
            items:
              - key: broker-common.conf
                path: broker-common.conf
            defaultMode: 420
        - name: host-time
          hostPath:
            path: /etc/localtime
            type: ''
      containers:
        - name: rocketmq-broker
          image: 'registry.zdevops.com.cn/apache/rocketmq:4.9.4'
          command:
            - /bin/sh
          args:
            - mqbroker
            - "-c"
            - /home/rocketmq/conf/broker-common.conf
          ports:
            - name: tcp-vip-10909
              containerPort: 10909
              protocol: TCP
            - name: tcp-main-10911
              containerPort: 10911
              protocol: TCP
            - name: tcp-ha-10912
              containerPort: 10912
              protocol: TCP
          env:
            - name: NAMESRV_ADDR
              value: 'rocketmq-name-server-service.zdevops:9876'
            - name: BROKER_MEM
              valueFrom:
                configMapKeyRef:
                  name: rocketmq-broker-config
                  key: BROKER_MEM
          resources:
            limits:
              cpu: 500m
              memory: 12Gi
            requests:
              cpu: 250m
              memory: 2Gi
          volumeMounts:
            - name: host-time
              readOnly: true
              mountPath: /etc/localtime
            - name: rocketmq-broker-storage
              mountPath: /home/rocketmq/logs
              subPath: logs/broker-0-master
            - name: rocketmq-broker-storage
              mountPath: /home/rocketmq/store
              subPath: store/broker-0-master
            - name: rocketmq-broker-config
              mountPath: /home/rocketmq/conf/broker-common.conf
              subPath: broker-common.conf
          imagePullPolicy: Always
  volumeClaimTemplates:
    - kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: rocketmq-broker-storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 8Gi
        storageClassName: glusterfs
        volumeMode: Filesystem
  serviceName: ''

rocketmq-dashboard.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  name: rocketmq-dashboard
  namespace: zdevops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rocketmq-dashboard
  template:
    metadata:
      labels:
        app: rocketmq-dashboard
    spec:
      containers:
        - name: rocketmq-dashboard
          image: 'registry.zdevops.com.cn/apacherocketmq/rocketmq-dashboard:1.0.0'
          ports:
            - name: http-8080
              containerPort: 8080
              protocol: TCP
          env:
            - name: JAVA_OPTS
              value: >-
                -Drocketmq.namesrv.addr=rocketmq-name-server-service.zdevops:9876
                -Dcom.rocketmq.sendMessageWithVIPChannel=false
          resources:
            limits:
              cpu: 500m
              memory: 2Gi
            requests:
              cpu: 50m
              memory: 512Mi
          imagePullPolicy: Always

---
kind: Service
apiVersion: v1
metadata:
  name: rocketmq-dashboard-service
  namespace: zdevops
spec:
  ports:
    - name: http-8080
      protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 31080
  selector:
    app: rocketmq-dashboard
  type: NodePort

GitOps

This step is optional. My habit is to edit resource manifests on a personal development server, commit them to a Git server (GitLab, Gitee, GitHub, etc.), and then pull and apply them from the Git server on a K8s node. This versions the resource manifests and implements simple operations GitOps.

For ease of demonstration, all K8s resource manifests in this series live in a single k8s-yaml repository. In real work, each application gets its own Git repository, which makes version control of application configuration easier.

In practice you can skip this step and write and apply the manifests directly on a K8s node, or follow my approach and implement simple GitOps.

Operate on the personal ops/development server:

# Create the rocketmq/single directory in an existing code repository
mkdir -p rocketmq/single

# Edit the resource manifests
vi rocketmq/single/rocketmq-cm.yaml
vi rocketmq/single/rocketmq-name-service-sts.yaml
vi rocketmq/single/rocketmq-broker-sts.yaml
vi rocketmq/single/rocketmq-dashboard.yaml

# Commit to Git
git add rocketmq
git commit -am 'Add RocketMQ single-node resource manifests'
git push

Deploy the Resources

Operate on a K8s cluster Master node or on a standalone ops management server.

Update the local copy of the manifest repository:

cd /srv/k8s-yaml
git pull

Deploy the resources (step by step; choose one of the two):

In the test environment, deploy each manifest separately to validate its correctness.

cd /srv/k8s-yaml
kubectl apply -f rocketmq/single/rocketmq-cm.yaml
kubectl apply -f rocketmq/single/rocketmq-name-service-sts.yaml
kubectl apply -f rocketmq/single/rocketmq-broker-sts.yaml
kubectl apply -f rocketmq/single/rocketmq-dashboard.yaml

Deploy the resources (one-shot; choose one of the two):

In practice you can apply the whole directory at once for one-step automated deployment; use the directory approach for fast deployment in formal development and production environments.

kubectl apply -f rocketmq/single/

Verify

ConfigMap:

$ kubectl get cm -n zdevops

NAME                     DATA   AGE
kube-root-ca.crt         1      17d
rocketmq-broker-config   2      22s

StatefulSet:

$ kubectl get sts -o wide -n zdevops

NAME                       READY   AGE   CONTAINERS              IMAGES
rocketmq-broker-0-master   1/1     11s   rocketmq-broker         registry.zdevops.com.cn/apache/rocketmq:4.9.4
rocketmq-name-service      1/1     12s   rocketmq-name-service   registry.zdevops.com.cn/apache/rocketmq:4.9.4

Deployment:

$ kubectl get deploy -o wide -n zdevops

NAME                 READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS           IMAGES                                                            SELECTOR
rocketmq-dashboard   1/1     1            1           31s   rocketmq-dashboard   registry.zdevops.com.cn/apacherocketmq/rocketmq-dashboard:1.0.0   app=rocketmq-dashboard

Pods:

$ kubectl get pods -o wide -n zdevops

NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE              NOMINATED NODE   READINESS GATES
rocketmq-broker-0-master-0           1/1     Running   0          77s   10.233.116.103   ks-k8s-master-2   <none>           <none>
rocketmq-dashboard-b5dbb9d88-cwhqc   1/1     Running   0          3s    10.233.87.115    ks-k8s-master-1   <none>           <none>
rocketmq-name-service-0              1/1     Running   0          78s   10.233.116.102   ks-k8s-master-2   <none>           <none>

Service:

$ kubectl get svc -o wide -n zdevops

NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE     SELECTOR
rocketmq-dashboard-service     NodePort    10.233.5.237    <none>        8080:31080/TCP   74s     app=rocketmq-dashboard
rocketmq-name-server-service   ClusterIP   10.233.3.61     <none>        9876/TCP         2m29s   name_service_cr=rocketmq-name-service

Open IP:31080 on any node of the K8s cluster in a browser to reach the RocketMQ console management interface.
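
Before opening a browser, you can also probe the NodePort from the command line (the node IP below is a placeholder for any K8s node in your cluster):

# An HTTP response line here confirms the dashboard Service is reachable
NODE_IP=192.168.9.91   # replace with any K8s node IP
curl -sI "http://${NODE_IP}:31080" | head -n 1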

Clean Up Resources

To uninstall RocketMQ, or to clean up after a failed installation before reinstalling, use the following steps on the K8s cluster.

Clean up the StatefulSets:

kubectl delete sts rocketmq-broker-0-master -n zdevops
kubectl delete sts rocketmq-name-service -n zdevops

Clean up the Deployment:

kubectl delete deployments rocketmq-dashboard -n zdevops

Clean up the ConfigMap:

kubectl delete cm rocketmq-broker-config -n zdevops

Clean up the Services:

kubectl delete svc rocketmq-name-server-service -n zdevops 
kubectl delete svc rocketmq-dashboard-service -n zdevops 

Clean up the persistent volume claims:

kubectl delete pvc rocketmq-namesrv-storage-rocketmq-name-service-0 -n zdevops
kubectl delete pvc rocketmq-broker-storage-rocketmq-broker-0-master-0 -n zdevops

Of course, you can also clean up with the resource manifests, which is simpler and faster (persistent volume claims cannot be cleaned up automatically and must be deleted manually).

$ kubectl delete -f rocketmq/single/

statefulset.apps "rocketmq-broker-0-master" deleted
configmap "rocketmq-broker-config" deleted
deployment.apps "rocketmq-dashboard" deleted
service "rocketmq-dashboard-service" deleted
statefulset.apps "rocketmq-name-service" deleted
service "rocketmq-name-server-service" deleted

Multi Master Multi Slave - Asynchronous Replication Mode Deployment

Planning

Deploying RocketMQ in multi-Master multi-Slave (asynchronous replication) mode with the official RocketMQ Operator is fast and convenient, and scaling is easy as well.

The default configuration deploys 1 Master and 1 corresponding Slave; after deployment you can scale Masters and Slaves as needed.

Get the RocketMQ Operator

# Specify the version when cloning
cd /srv 
git clone -b 0.3.0 https://github.com/apache/rocketmq-operator.git

Prepare the Resource Manifests

The manifests in this demonstration directly modify the rocketmq-operator defaults. For production, derive a standard set of configuration files suited to your environment from the defaults and keep them in a Git repository.

Add or modify the namespace in the deploy resource manifests:

cd /srv/rocketmq-operator
sed -i 'N;8 a \  namespace: zdevops' deploy/crds/rocketmq.apache.org_brokers.yaml
sed -i 'N;8 a \  namespace: zdevops' deploy/crds/rocketmq.apache.org_consoles.yaml
sed -i 'N;8 a \  namespace: zdevops' deploy/crds/rocketmq.apache.org_nameservices.yaml
sed -i 'N;8 a \  namespace: zdevops' deploy/crds/rocketmq.apache.org_topictransfers.yaml
sed -i 'N;18 a \  namespace: zdevops' deploy/operator.yaml
sed -i 'N;18 a \  namespace: zdevops' deploy/role_binding.yaml
sed -i 's/namespace: default/namespace: zdevops/g' deploy/role_binding.yaml
sed -i 'N;18 a \  namespace: zdevops' deploy/service_account.yaml
sed -i 'N;20 a \  namespace: zdevops' deploy/role.yaml

Remember: this step must be run only once; if it fails, remove the changes and run it again.

After it completes, be sure to check that the result matches expectations: grep -r zdevops deploy/*

Modify the namespace in the example resource manifests:

sed -i 's/namespace: default/namespace: zdevops/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml
sed -i 'N;18 a \  namespace: zdevops' example/rocketmq_v1alpha1_cluster_service.yaml

Change the image addresses to the intranet addresses:

sed -i 's#apache/rocketmq-operator:0.3.0#registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0#g' deploy/operator.yaml
sed -i 's#apacherocketmq#registry.zdevops.com.cn/apacherocketmq#g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml 

Change the RocketMQ version (optional):

sed -i 's/4.5.0/4.9.4/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml 

Notice: the default example manifest deploys RocketMQ cluster version 4.5.0; adjust it to your needs.

Change the NameService network mode (optional):

sed -i 's/hostNetwork: true/hostNetwork: false/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml 
sed -i 's/dnsPolicy: ClusterFirstWithHostNet/dnsPolicy: ClusterFirst/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml

Notice: the official example defaults to hostNetwork mode, which is suitable for serving applications both inside and outside the K8s cluster; adjust it to your needs.

Personally I prefer to disable hostNetwork mode rather than share the cluster with external applications. If sharing is required, I would rather deploy RocketMQ independently outside the cluster.

Change storageClassName to glusterfs:

sed -i 's/storageClassName: rocketmq-storage/storageClassName: glusterfs/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml
sed -i 's/storageMode: EmptyDir/storageMode: StorageClass/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml

Notice: in the demo environment the storageClassName for GlusterFS storage is glusterfs; modify it to match your actual situation.

Change nameServers to the DNS-name form:

sed -i 's/nameServers: ""/nameServers: "name-server-service.zdevops:9876"/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml

Notice: name-server-service.zdevops is the NameServer Service name combined with the project (namespace) name.

The default configuration uses the pod [ip:port] form. Once the Pod IP changes, the Console can no longer manage the cluster, and the Console does not update its configuration automatically; if the value is left empty, the configuration may even end up arbitrary, so be sure to change this in advance.
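
Once the cluster Services have been created (see the deployment steps below), you can confirm from inside the cluster that the DNS name resolves; this is a sketch using a throwaway busybox pod (the image and pod name are assumptions):

# Resolve the NameServer Service name from a temporary pod (deleted on exit)
kubectl run dns-test --rm -it --restart=Never -n zdevops --image=busybox:1.35 \
  -- nslookup name-server-service.zdevops.svc.cluster.local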

Change the NodePort for external access to the RocketMQ Console:

sed -i 's/nodePort: 30000/nodePort: 31080/g' example/rocketmq_v1alpha1_cluster_service.yaml

Notice: the official example defaults to port 30000; adjust it to your needs.

Modify the Service configuration for the RocketMQ NameServer and Console:

sed -i '32,46s/^#//g' example/rocketmq_v1alpha1_cluster_service.yaml
sed -i 's/nodePort: 30001/nodePort: 31081/g' example/rocketmq_v1alpha1_cluster_service.yaml
sed -i 's/namespace: default/namespace: zdevops/g' example/rocketmq_v1alpha1_cluster_service.yaml

The NameServer Service defaults to NodePort; if it is only used inside the K8s cluster, you can change it to ClusterIP.
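
If you decide to make that change, a sketch along these lines could do it with sed, reusing the 32-46 line range of the NameServer Service block from the uncomment step above (verify the file afterwards; Kubernetes rejects a nodePort field on a ClusterIP Service):

# Drop the nodePort line and switch the type within the NameServer Service block only
sed -i '32,46{/nodePort: 31081/d}' example/rocketmq_v1alpha1_cluster_service.yaml
sed -i '32,46s/type: NodePort/type: ClusterIP/' example/rocketmq_v1alpha1_cluster_service.yaml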

GitOps

For production use, it is recommended to take the manifests edited above, organize them separately, remove the extra files from the rocketmq-operator project, and form a set of resource manifests suited to your business needs, version-controlled with Git.

The workflow was covered in detail in the single-Master deployment, so it is not repeated here.

Deploy the RocketMQ Operator (Automatic)

The official automatic deployment method suits environments with internet access: the process downloads the controller-gen and kustomize binaries along with a pile of Go dependencies.

It is not suitable for offline intranet environments and is only briefly mentioned here; this article focuses on the manual deployment below.

Deploy the RocketMQ Operator:

make deploy

Deploy the RocketMQ Operator (Manual)

Deploy the RocketMQ Operator:

kubectl create -f deploy/crds/rocketmq.apache.org_brokers.yaml
kubectl create -f deploy/crds/rocketmq.apache.org_nameservices.yaml
kubectl create -f deploy/crds/rocketmq.apache.org_consoles.yaml
kubectl create -f deploy/crds/rocketmq.apache.org_topictransfers.yaml
kubectl create -f deploy/service_account.yaml
kubectl create -f deploy/role.yaml
kubectl create -f deploy/role_binding.yaml
kubectl create -f deploy/operator.yaml
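
Since kubectl accepts a directory of manifests (non-recursively), the same resources can also be created in two commands, assuming deploy/ holds only the files listed above:

kubectl create -f deploy/crds/
kubectl create -f deploy/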

Verify the CRDs:

$ kubectl get crd | grep rocketmq.apache.org

brokers.rocketmq.apache.org                           2022-11-09T02:54:52Z
consoles.rocketmq.apache.org                          2022-11-09T02:54:54Z
nameservices.rocketmq.apache.org                      2022-11-09T02:54:53Z
topictransfers.rocketmq.apache.org                    2022-11-09T02:54:54Z

Verify the RocketMQ Operator:

$ kubectl get deploy -n zdevops -o wide

NAME                READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                                                           SELECTOR
rocketmq-operator   1/1     1            1           6m46s   manager      registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0   name=rocketmq-operator

$ kubectl get pods -n zdevops -o wide

NAME                                 READY   STATUS    RESTARTS   AGE     IP              NODE              NOMINATED NODE   READINESS GATES
rocketmq-operator-7cc6b48796-htpk8   1/1     Running   0          2m28s   10.233.116.70   ks-k8s-master-2   <none>           <none>

Deploy the RocketMQ Cluster

Create the Services:

$ kubectl apply -f example/rocketmq_v1alpha1_cluster_service.yaml

service/console-service created
service/name-server-service created

Create the cluster:

$ kubectl apply -f example/rocketmq_v1alpha1_rocketmq_cluster.yaml

configmap/broker-config created
broker.rocketmq.apache.org/broker created
nameservice.rocketmq.apache.org/name-service created
console.rocketmq.apache.org/console created

Verify

StatefulSet:

$ kubectl get sts -o wide -n zdevops

NAME                 READY   AGE   CONTAINERS     IMAGES
broker-0-master      1/1     27s   broker         registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
broker-0-replica-1   1/1     27s   broker         registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
name-service         1/1     27s   name-service   registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.9.4-alpine-operator-0.3.0

Deployment:

$ kubectl get deploy -o wide -n zdevops

NAME                READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                                                           SELECTOR
console             1/1     1            1           52s     console      registry.zdevops.com.cn/apacherocketmq/rocketmq-console:2.0.0    app=rocketmq-console
rocketmq-operator   1/1     1            1           4h43m   manager      registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0   name=rocketmq-operator

Pod:

$ kubectl get pods -o wide -n zdevops

NAME                                 READY   STATUS    RESTARTS      AGE     IP              NODE              NOMINATED NODE   READINESS GATES
broker-0-master-0                    1/1     Running   0             47s     10.233.87.24    ks-k8s-master-1   <none>           <none>
broker-0-replica-1-0                 1/1     Running   0             17s     10.233.117.28   ks-k8s-master-0   <none>           <none>
console-8d685798f-5pwct              1/1     Running   0             116s    10.233.116.84   ks-k8s-master-2   <none>           <none>
name-service-0                       1/1     Running   0             96s     10.233.116.85   ks-k8s-master-2   <none>           <none>
rocketmq-operator-7cc6b48796-htpk8   1/1     Running   2 (98s ago)   4h39m   10.233.116.70   ks-k8s-master-2   <none>           <none>

Services:

$ kubectl get svc -o wide -n zdevops

NAME                                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE   SELECTOR
console-service                                          NodePort    10.233.38.15    <none>        8080:31080/TCP   21m   app=rocketmq-console
name-server-service                                      NodePort    10.233.56.238   <none>        9876:31081/TCP   21m   name_service_cr=name-service   

Open IP:31080 on any node of the K8s cluster in a browser to reach the RocketMQ console management interface.
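
Besides the console UI, the cluster state can also be checked with the mqadmin tool shipped in the broker image; this is a sketch, and mqadmin's location may vary by image (use its full path under the RocketMQ install directory if it is not on PATH):

# List the brokers registered with the NameServer
kubectl -n zdevops exec -it broker-0-master-0 -- \
  sh -c 'mqadmin clusterList -n name-server-service.zdevops:9876'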

Clean Up Resources

Clean Up the RocketMQ Cluster

If the cluster deployment fails or needs to be redeployed, delete the resources in the following order.

kubectl delete -f example/rocketmq_v1alpha1_rocketmq_cluster.yaml
kubectl delete -f example/rocketmq_v1alpha1_cluster_service.yaml

Clean Up the RocketMQ Operator

kubectl delete -f deploy/crds/rocketmq.apache.org_brokers.yaml
kubectl delete -f deploy/crds/rocketmq.apache.org_nameservices.yaml
kubectl delete -f deploy/crds/rocketmq.apache.org_consoles.yaml
kubectl delete -f deploy/crds/rocketmq.apache.org_topictransfers.yaml
kubectl delete -f deploy/service_account.yaml
kubectl delete -f deploy/role.yaml
kubectl delete -f deploy/role_binding.yaml
kubectl delete -f deploy/operator.yaml

Clean Up Persistent Volumes

The persistent volume claims for the Broker and NameServer must be found and deleted manually.

# Find the PVCs
$ kubectl get pvc -n zdevops

NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
broker-storage-broker-0-master-0      Bound    pvc-6a78b573-d72a-47ca-9012-5bc888dfcb0f   8Gi        RWO            glusterfs      3m54s
broker-storage-broker-0-replica-1-0   Bound    pvc-4f096942-505d-4e34-ac7f-b871b9f33df3   8Gi        RWO            glusterfs      3m54s
namesrv-storage-name-service-0        Bound    pvc-2c45a77e-3ca1-4eab-bb57-8374aa9068d3   1Gi        RWO            glusterfs      3m54s

# Delete the PVCs
kubectl delete pvc namesrv-storage-name-service-0 -n zdevops
kubectl delete pvc broker-storage-broker-0-master-0 -n zdevops
kubectl delete pvc broker-storage-broker-0-replica-1-0 -n zdevops

Scaling the NameServer

If the current name service cluster size does not meet your needs, you can simply use RocketMQ-Operator to scale the name service cluster up or down.

Scaling the name service requires writing and applying a separate resource manifest; refer to the official Name Server Cluster Scale example and adapt it to the rocketmq-operator configuration of your actual environment.

Notice: do not change the replica count directly on already-deployed resources; direct edits do not take effect and will be reverted by the Operator.

Edit the NameServer scaling resource manifest, rocketmq_v1alpha1_nameservice_cr.yaml:

apiVersion: rocketmq.apache.org/v1alpha1
kind: NameService
metadata:
  name: name-service
  namespace: zdevops
spec:
  size: 2
  nameServiceImage: registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.9.4-alpine-operator-0.3.0
  imagePullPolicy: Always
  hostNetwork: false
  dnsPolicy: ClusterFirst
  resources:
    requests:
      memory: "512Mi"
      cpu: "250m"
    limits:
      memory: "1024Mi"
      cpu: "500m"
  storageMode: StorageClass
  hostPath: /data/rocketmq/nameserver
  volumeClaimTemplates:
    - metadata:
        name: namesrv-storage
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: glusterfs
        resources:
          requests:
            storage: 1Gi

Apply the scaling operation:

kubectl apply -f rocketmq/cluster/rocketmq_v1alpha1_nameservice_cr.yaml

Verify the StatefulSet:

$ kubectl get sts name-service -o wide -n zdevops

NAME           READY   AGE   CONTAINERS     IMAGES
name-service   2/2     16m   name-service   registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.9.4-alpine-operator-0.3.0

Verify the Pods:

$ kubectl get pods -o wide -n zdevops

NAME                                 READY   STATUS    RESTARTS   AGE    IP               NODE              NOMINATED NODE   READINESS GATES
broker-0-master-0                    1/1     Running   0          18m    10.233.87.117    ks-k8s-master-1   <none>           <none>
broker-0-replica-1-0                 1/1     Running   0          43s    10.233.117.99    ks-k8s-master-0   <none>           <none>
console-8d685798f-hnmvg              1/1     Running   0          18m    10.233.116.113   ks-k8s-master-2   <none>           <none>
name-service-0                       1/1     Running   0          18m    10.233.116.114   ks-k8s-master-2   <none>           <none>
name-service-1                       1/1     Running   0          110s   10.233.87.120    ks-k8s-master-1   <none>           <none>
rocketmq-operator-6db8ccc685-5hkk8   1/1     Running   0          18m    10.233.116.112   ks-k8s-master-2   <none>           <none>

Special Notes

Be very careful when scaling the NameServer. In actual testing, scaling the NameServer caused every existing Broker Master except Broker-0's, plus all Slaves, to be rebuilt. According to the official documentation, the Operator notifies all Brokers to update their name service list parameters so that they can register with the new NameServer Service.

Also, under the allowRestart: true policy, Brokers are updated gradually, so the update is not perceived by producer and consumer clients; in theory business is unaffected (not actually tested).

However, after all Broker Masters and Slaves were rebuilt, the cluster node information was unstable when viewing the cluster status: sometimes 3 nodes were visible, sometimes 4.

Therefore, in production it is best to set the NameServer replica count to 2 or 3 at initial deployment and avoid scaling later, unless you can handle all the consequences of scaling.

Scaling the Broker

Typically, as the business grows, the existing Broker cluster size may no longer meet your needs. You can simply use RocketMQ-Operator to upgrade and scale out the Broker cluster.

Scaling the Broker requires writing and applying a separate resource manifest; refer to the official Broker Cluster Scale example and adapt it to the rocketmq-operator configuration of your actual environment.

Edit the Broker scaling resource manifest, rocketmq_v1alpha1_broker_cr.yaml:

apiVersion: rocketmq.apache.org/v1alpha1
kind: Broker
metadata:
  name: broker
  namespace: zdevops
spec:
  size: 2
  nameServers: "name-server-service.zdevops::9876"
  replicaPerGroup: 1
  brokerImage: registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
  imagePullPolicy: Always
  resources:
    requests:
      memory: "2048Mi"
      cpu: "250m"
    limits:
      memory: "12288Mi"
      cpu: "500m"
  allowRestart: true
  storageMode: StorageClass
  hostPath: /data/rocketmq/broker
  # scalePodName is [Broker name]-[broker group number]-master-0
  scalePodName: broker-0-master-0
  env:
    - name: BROKER_MEM
      valueFrom:
        configMapKeyRef:
          name: broker-config
          key: BROKER_MEM
  volumes:
    - name: broker-config
      configMap:
        name: broker-config
        items:
          - key: broker-common.conf
            path: broker-common.conf
  volumeClaimTemplates:
    - metadata:
        name: broker-storage
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: glusterfs
        resources:
          requests:
            storage: 8Gi

Notice: pay attention to the key field scalePodName: broker-0-master-0.

It selects the source Broker pod, from which old metadata such as topic and subscription information is transferred to the newly created Brokers.

Apply the Broker scale-out:

kubectl apply -f rocketmq/cluster/rocketmq_v1alpha1_broker_cr.yaml

Notice: on success a new Broker Pod group is deployed. Before starting the new Broker, the Operator copies the metadata from the source Broker Pod into the newly created Broker Pod, so the new Broker reloads the existing topic and subscription information.
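
To check that the metadata actually reached the new Broker group, you can list the topics through the new Master once it is running (same caveat as above: mqadmin's location may vary by image):

# Topics created earlier should appear in the output
kubectl -n zdevops exec -it broker-1-master-0 -- \
  sh -c 'mqadmin topicList -n name-server-service.zdevops:9876'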

Verify the StatefulSet:

$ kubectl get sts  -o wide -n zdevops
NAME                 READY   AGE   CONTAINERS     IMAGES
broker-0-master      1/1     43m   broker         registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
broker-0-replica-1   1/1     43m   broker         registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
broker-1-master      1/1     27s   broker         registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
broker-1-replica-1   1/1     27s   broker         registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
name-service         2/2     43m   name-service   registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.9.4-alpine-operator-0.3.0

Verify the Pods:

$ kubectl get pods -o wide -n zdevops

NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE              NOMINATED NODE   READINESS GATES
broker-0-master-0                    1/1     Running   0          44m   10.233.87.117    ks-k8s-master-1   <none>           <none>
broker-0-replica-1-0                 1/1     Running   0          26m   10.233.117.99    ks-k8s-master-0   <none>           <none>
broker-1-master-0                    1/1     Running   0          72s   10.233.116.117   ks-k8s-master-2   <none>           <none>
broker-1-replica-1-0                 1/1     Running   0          72s   10.233.117.100   ks-k8s-master-0   <none>           <none>
console-8d685798f-hnmvg              1/1     Running   0          44m   10.233.116.113   ks-k8s-master-2   <none>           <none>
name-service-0                       1/1     Running   0          44m   10.233.116.114   ks-k8s-master-2   <none>           <none>
name-service-1                       1/1     Running   0          27m   10.233.87.120    ks-k8s-master-1   <none>           <none>
rocketmq-operator-6db8ccc685-5hkk8   1/1     Running   0          44m   10.233.116.112   ks-k8s-master-2   <none>           <none>

Verify in the KubeSphere console:

Verify in the RocketMQ console:

Common Issues

gcc Compiler Toolchain Not Installed

Error message:

[root@zdevops-master rocketmq-operator]# make docker-build IMG=${IMAGE_URL}
/data/k8s-yaml/rocketmq-operator/bin/controller-gen rbac:roleName=rocketmq-operator crd:generateEmbeddedObjectMeta=true webhook paths="./..." output:dir=deploy output:crd:artifacts:config=deploy/crds
head -n 14 deploy/role_binding.yaml > deploy/role.yaml.bak
cat deploy/role.yaml >> deploy/role.yaml.bak
rm deploy/role.yaml && mv deploy/role.yaml.bak deploy/role.yaml
/data/k8s-yaml/rocketmq-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
/usr/local/go/src/net/cgo_linux.go:12:8: no such package located
Error: not all generators ran successfully
run `controller-gen object:headerFile=hack/boilerplate.go.txt paths=./... -w` to see all available markers, or `controller-gen object:headerFile=hack/boilerplate.go.txt paths=./... -h` for usage
make: *** [generate] Error 1

Solution:

$ yum install gcc

go mod Errors

Error message:

# Error from running make docker-build IMG=${IMAGE_URL}

go: creating new go.mod: module tmp
Downloading sigs.k8s.io/controller-tools/cmd/controller-gen@v0.7.0
go get: added sigs.k8s.io/controller-tools v0.7.0
/data/build/rocketmq-operator/bin/controller-gen rbac:roleName=rocketmq-operator crd:generateEmbeddedObjectMeta=true webhook paths="./..." output:dir=deploy output:crd:artifacts:config=deploy/crds
Error: err: exit status 1: stderr: go: github.com/google/uuid@<version>: missing go.sum entry; to add it:
        go mod download github.com/google/uuid

Usage:
  controller-gen [flags]
  
......

output rules (optionally as output:<generator>:...)

+output:artifacts[:code=<string>],config=<string>  package  outputs artifacts to different locations, depending on whether they're package-associated or not.   
+output:dir=<string>                               package  outputs each artifact to the given directory, regardless of if it's package-associated or not.      
+output:none                                       package  skips outputting anything.                                                                          
+output:stdout                                     package  outputs everything to standard-out, with no separation.                                             

run `controller-gen rbac:roleName=rocketmq-operator crd:generateEmbeddedObjectMeta=true webhook paths=./... output:dir=deploy output:crd:artifacts:config=deploy/crds -w` to see all available markers, or `controller-gen rbac:roleName=rocketmq-operator crd:generateEmbeddedObjectMeta=true webhook paths=./... output:dir=deploy output:crd:artifacts:config=deploy/crds -h` for usage
make: *** [manifests] Error 1

Solution:

go mod tidy

Conclusion

This article is only an introductory walkthrough of deploying RocketMQ on the K8s platform in single-Master mode and multi-Master multi-Slave (asynchronous replication) mode.

In production you still need to optimize for your actual environment, for example adjusting the number of Brokers in the cluster, the allocation of Masters and Slaves, performance tuning, and configuration optimization.

