Two ways to deploy a Ceph cluster

Ceph-ansible deployment steps:

    1.  git clone -b stable-3.0 https://github.com/ceph/ceph-ansible.git ; cd ceph-ansible

    2. Generate the inventory:
cat > inventory <<EOF
[mons]
ceph1
ceph2
ceph3

[mgrs]
ceph1
ceph2
ceph3

[osds]
ceph1
ceph2
ceph3
EOF
    3. mv site.yml.sample site.yml

    4. cd group_vars; 
cat > all.yml <<EOF
---
ceph_origin: repository
ceph_repository: community
ceph_stable_release: luminous
public_network: "172.17.0.0/20"
cluster_network: "{{ public_network }}"
monitor_interface: eth0
devices:
  - '/dev/vdb'
osd_scenario: collocated
EOF

    5. ansible-playbook -i inventory site.yml

    6.  Enable the built-in mgr dashboard:
        ceph mgr module enable  dashboard
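
        To see which address the dashboard is serving on (a quick check), list the mgr services:
            ceph mgr services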

Using Ceph from Kubernetes:

    doc:
        https://docs.openshift.org/3.6/install_config/storage_examples/ceph_rbd_dynamic_example.html

    1.  On the Ceph cluster, create the pool that Kubernetes will use to store data:
        ceph osd pool create kube 128

    Note:
        # 128 is the PG count (pg_num); the general sizing guidance is:

        Fewer than 5 OSDs: set pg_num to 128.
        5 to 10 OSDs: set pg_num to 512.
        10 to 50 OSDs: set pg_num to 1024.
        More than 50 OSDs: refer to the formula below.

        http://docs.ceph.com/docs/mimic/rados/operations/placement-groups/

        PG count = number of OSDs * 100 / pool replica count / number of pools

        # Check the pool replica count, i.e. the osd_pool_default_size set in ceph.conf
           ceph osd dump |grep size|grep rbd

        # When the number of OSDs, the pool replica count, or the number of pools changes, recalculate and update the PG count
        # When changing pg_num, change pgp_num along with it, otherwise the cluster will warn that pgp_num does not match pg_num
           ceph osd pool set rbd pg_num 256
           ceph osd pool set rbd pgp_num 256
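
        As a worked example of the formula (for illustration only, assuming 3 OSDs, a replica count of 3 and a single pool):

           PG count = 3 * 100 / 3 / 1 = 100, rounded up to the next power of two = 128

        which matches the pg_num of 128 used for the kube pool above.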

    2.  Create the secret used for authentication:
        1.  Create a client.kube user:
            ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring

        2.  Encode the client.kube key as base64:
            ceph auth get-key client.kube | base64
            (output: QVFCRE1aRmFhdWdFQWhBQUI5a21HbUVXRTgwQ2xJSWFJTVphTUE9PQ==)

        3.  Create ceph-secret.yaml:
            cat > ceph-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
    name: ceph-secret
    namespace: kube-system
type: kubernetes.io/rbd
data:
    key: QVFCRE1aRmFhdWdFQWhBQUI5a21HbUVXRTgwQ2xJSWFJTVphTUE9PQ==
EOF
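
        4.  Apply the secret and verify that it exists (a quick check; the namespace matches the manifest above):
            kubectl apply -f ceph-secret.yaml
            kubectl -n kube-system get secret ceph-secret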
    3.  Deploy the rbd-provisioner
    4.  Create a Ceph StorageClass in Kubernetes:
        1. Option 1: as the cluster's default StorageClass:
            cat > ceph-storage.yaml <<EOF
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/rbd
parameters:
    # IP:PORT of the monitor nodes in the Ceph cluster
    monitors: 172.17.0.49:6789,172.17.0.44:6789,172.17.0.28:6789
    # user allowed to create images in the Ceph pool
    adminId: kube
    # name of the secret for adminId
    adminSecretName: ceph-secret
    adminSecretNamespace: kube-system
    # storage pool
    pool: kube
    # user used to map the rbd image; defaults to the same value as adminId
    userId: kube
    # secret for userId; it must be in the same namespace as the PVC
    userSecretName: ceph-secret-user
EOF

        2. Option 2: as a non-default StorageClass:
            cat > ceph-storage.yaml <<EOF
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph-storageclass
provisioner: kubernetes.io/rbd
parameters:
  monitors: 172.17.0.49:6789,172.17.0.44:6789,172.17.0.28:6789
  adminId: kube
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-user
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"

EOF
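
    Note: both StorageClasses reference userSecretName: ceph-secret-user, which must exist in the same namespace as each RBD PVC. A minimal sketch of creating it and of a PVC that uses the class (the "default" namespace and the name "ceph-claim" are only examples):

        kubectl create secret generic ceph-secret-user -n default \
          --type=kubernetes.io/rbd \
          --from-literal=key="$(ceph auth get-key client.kube)"

        cat > ceph-pvc.yaml <<EOF
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-storageclass
  resources:
    requests:
      storage: 1Gi
EOF
        kubectl apply -f ceph-pvc.yaml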

Create the image used to store the MSP:

    rbd create kube/fabric --size 512 --image-feature=layering

Map kube/fabric on the local host:

    rbd map kube/fabric
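
    Once mapped, the image can be formatted and mounted locally (a sketch; /dev/rbd0 stands for whatever device path "rbd map" printed):

        mkfs.ext4 /dev/rbd0
        mount /dev/rbd0 /mnt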

Create CephFS on the Ceph cluster:

    ceph osd pool create cephfs_data 64

    ceph osd pool create cephfs_metadata 64

    ceph fs new fabric_fs cephfs_metadata cephfs_data
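
    Verify the new filesystem and its MDS state:

        ceph fs ls
        ceph mds stat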

Disable / enable the Ceph dashboard:

    ceph mgr module disable dashboard 

    ceph mgr module enable dashboard

Ceph cluster installation, method 2 (ceph-deploy):

    1.  mkdir myceph; cd myceph

    2. ceph-deploy new node1 node2 …..

    3. cat >> ceph.conf <<EOF
public network = 172.17.0.0/20
mon allow pool delete = true
mon_max_pg_per_osd = 300
mgr initial modules = dashboard prometheus

osd pool default size = 1
osd pool default min size = 1

mon_clock_drift_allowed = 2
mon_clock_drift_warn_backoff = 30
EOF

    4. ceph-deploy install  --repo-url https://registry.umarkcloud.com:8080/repository/yum-ceph-proxy/ --release=luminous node1 node2 …

        ceph-deploy install --release=luminous  --nogpgcheck --no-adjust-repos  k8s7 k8s8 k8s9

            ceph-deploy install --repo-url https://mirrors.aliyun.com/ceph/rpm-mimic/el7 --gpg-url https://mirrors.aliyun.com/ceph/keys/release.asc ceph1 ceph2 ceph3

    5. ceph-deploy mon create-initial

    6. ceph-deploy admin k8s7 k8s8 k8s9

    7. ceph-deploy mgr  create k8s7 k8s8 k8s9

    8. ceph-deploy osd create --data /dev/vdb node1   (the --data argument can also be an LVM vg/lv)
        ceph-deploy osd create --data /dev/vdb node2
        ceph-deploy osd create --data /dev/vdb node3
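
        A quick sanity check that the new OSDs are up:
            ceph -s
            ceph osd tree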

    9. ceph-deploy  mds create k8s7 k8s8

    10. ceph osd pool create kube 128; ceph osd pool application enable kube rbd

    11. ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring

    12. ceph osd pool create cephfs_data 64

         ceph osd pool create cephfs_metadata 64

        ceph fs new data cephfs_metadata cephfs_data

    13. ceph auth get-key client.admin | base64

    14. ceph auth get-key client.kube | base64 

    15. helm install -n cephpr --namespace ceph .

    16. Install rgw to provide S3 API access:
        a. Install the packages:
            ceph-deploy install --rgw umark-poweredge-r540
        b. Deploy the service:
            ceph-deploy rgw create umark-poweredge-r540
        c. Edit the ceph.conf configuration file:
            [client.rgw.umark-poweredge-r540]
            rgw_frontends = "civetweb port=80"
        d.  Restart the service:
             sudo systemctl restart ceph-radosgw.target

        e.  Create a user:
            sudo radosgw-admin user create --uid='testuser' --display-name='First User'
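
            The create command prints the user's access_key and secret_key; they can be displayed again later with:
                sudo radosgw-admin user info --uid=testuser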

A StatefulSet volumeClaimTemplates example (rbd for ReadWriteOnce volumes, cephfs for ReadWriteMany):

spec:
  volumeClaimTemplates:
  - metadata:
      name: orderer-home
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "ceph-rbd"
      resources:
        requests:
          storage: 2Gi
  - metadata:
      name: orderer-block
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: "cephfs"
      resources:
        requests:
          storage: 100Mi

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
    name: {{ .Values.global.fabricMspPvcName }}
    namespace: {{ .Release.Namespace }}
    annotations:
      "helm.sh/created": {{.Release.Time.Seconds | quote }}
      "helm.sh/hook": pre-install
      "helm.sh/resource-policy": keep
spec:
    accessModes:
      - ReadOnlyMany
    resources:
      requests:
        storage: 1Gi
{{- if eq .Values.storage.type "gluster" }}
    volumeName: {{ .Values.persistence.fabricMspPvName }}
{{- else if eq .Values.storage.type "ceph" }}
    storageClassName: {{ .Values.storage.className }}
{{- end }}

{{- if eq .Values.storage.type "gluster" }}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.persistence.fabricMspPvName}}
spec:
  accessModes:
  - ReadOnlyMany
  capacity:
    storage: 1Gi
  glusterfs:
    endpoints: gluster-endpoints
    path: {{ .Values.persistence.fabricGlusterVolumeName }}
  persistentVolumeReclaimPolicy: Retain
{{- end }}

Ceph upgrade to Mimic:

A. Upgrade the packages:
    ceph-deploy install --release=mimic  --nogpgcheck --no-adjust-repos  pk8s1 pk8s2 pk8s3 
B. Restart the daemons:
    1. Restart the monitor daemons:
        systemctl restart ceph-mon.target
    2. Restart the OSD daemons:
        systemctl restart ceph-osd.target
    3. Restart the MDS daemons:
        systemctl restart ceph-mds.target
    4. Upgrade the clients:
        yum -y install ceph-common
    5. Restart the mgr daemons:
        systemctl restart ceph-mgr.target
C. Verify the cluster is healthy:
    1.  ceph mon stat
    2. ceph osd stat
    3. ceph mds stat
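
    After all daemons have been restarted, confirm they are running the new release:
        ceph versions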

D. Enable the new dashboard:
    1. ceph dashboard create-self-signed-cert
    2. ceph config set mgr mgr/dashboard/server_addr $IP
    3. ceph config set mgr mgr/dashboard/pk8s1/server_addr $IP
        ceph config set mgr mgr/dashboard/pk8s2/server_addr $IP
        ceph config set mgr mgr/dashboard/pk8s3/server_addr $IP
    4. ceph dashboard set-login-credentials <username> <password>

ceph mgr module enable dashboard

Mounting CephFS inside a container:

    mount.ceph 172.17.32.2:/ /mnt -o name=admin,secret=AQA2wjBbMljPKBAAID24oKDVT9NGuUxHzpo+1w==
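
    The same mount can be done with a secret file instead of passing the key on the command line (a sketch; /etc/ceph/admin.secret is assumed to contain only the key string):

        mount -t ceph 172.17.32.2:/ /mnt -o name=admin,secretfile=/etc/ceph/admin.secret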

Ceph backup:

    Two cases:
        1.  A pvc.yaml that you create yourself:
              annotations:
                  "helm.sh/hook": pre-install
                  "helm.sh/hook-delete-policy": "before-hook-creation"
              (with this delete policy, neither helm upgrade nor helm install will produce a "resource already exists ..." error)

Several scenarios when mounting a ceph-rbd image:
    1.  The image is already mounted in a pod; it can still be mapped and mounted a second time outside the k8s cluster (i.e. locally);

    2. If the image is already mounted locally, then when the pod restarts it can no longer mount the image and reports an error like the following:


        Once the locally mounted image is unmounted, the pod recovers automatically.
    3. Because rbd in k8s only supports the ReadWriteOnce and ReadOnlyMany access modes, and we need to write into the rbd image, only ReadWriteOnce can be used; if the same PVC is referenced by two different pods, an error like the following is reported:

    4. Using a python script to map the original rbd image used by a pod:
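
        The script itself is not included here; an equivalent shell sketch (the PVC name and namespace are placeholders) looks up the RBD image backing a pod's PV and maps it locally:

            PV=$(kubectl -n <namespace> get pvc <pvc-name> -o jsonpath='{.spec.volumeName}')
            POOL=$(kubectl get pv "$PV" -o jsonpath='{.spec.rbd.pool}')
            IMAGE=$(kubectl get pv "$PV" -o jsonpath='{.spec.rbd.image}')
            rbd map "$POOL/$IMAGE"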