Manual Ceph Deployment

Cluster Planning

  • Production environment
    • At least 3 physical machines making up the Ceph cluster
    • Two NICs per node (so the public and cluster networks can be separated)
  • Test environment
    • A single host is enough
    • A single NIC is fine

Preparation

  • Install NTP on all Ceph nodes (see the note at the end of this list about enabling the service)

    [root@test ~]# yum install ntp
  • On all Ceph nodes, check the iptables rules and make sure port 6789 and ports 6800:7300 are open (a note on persisting the rules follows this list)

    [root@test ~]# iptables -A INPUT -i eth0 -p tcp -s 10.10.8.0/24 --dport 6789 -j ACCEPT
    [root@test ~]# iptables -A INPUT -i eth0 -p tcp -s 10.10.8.0/24 --dport 6800:7300 -j ACCEPT
  • Put SELinux into permissive mode (see the note at the end of this list about making this persistent)

    [root@test ~]# setenforce 0
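  • yum install only puts the ntp package on the node; the time service still has to be enabled and started. A minimal sketch, assuming the stock ntpd unit on el7 (chrony is an equally valid choice):

    [root@test ~]# systemctl enable ntpd
    [root@test ~]# systemctl start ntpd
    [root@test ~]# ntpq -p    # check that time sources are being polled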
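  • The iptables rules above are not persistent across a reboot. One way to save them on el7 is the iptables-services package (an assumption about your firewall setup; firewalld users would add equivalent rules there instead):

    [root@test ~]# yum install iptables-services
    [root@test ~]# service iptables save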
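  • setenforce 0 only switches SELinux to permissive mode for the running system. To keep the setting after a reboot, /etc/selinux/config can be edited as well (a sketch; adjust to your local policy):

    [root@test ~]# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config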

Getting the Ceph Packages

  • Add the repository: create ceph.repo under /etc/yum.repos.d/

    [root@test ~]# cd /etc/yum.repos.d/
    [root@test ~]# touch ceph.repo
     
    Copy the content below into ceph.repo, replacing the {ceph-release} field with the Ceph release you want to install (jewel here) and {distro} with your distribution tag (el7 in this case).
    Note that the priority=2 entries only take effect when the yum priorities plugin is installed; see the note after the installation step below.
    [ceph]
    name=Ceph packages for $basearch
    baseurl=http://download.ceph.com/rpm-{ceph-release}/{distro}/$basearch
    enabled=1
    priority=2
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
     
    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=http://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
    enabled=1
    priority=2
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
     
    [ceph-source]
    name=Ceph source packages
    baseurl=http://download.ceph.com/rpm-{ceph-release}/{distro}/SRPMS
    enabled=0
    priority=2
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
     
     
    [root@test ~]# cat ceph.repo
    [ceph]
    name=Ceph packages for $basearch
    baseurl=http://download.ceph.com/rpm-jewel/el7/$basearch
    enabled=1
    priority=2
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
     
    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
    enabled=1
    priority=2
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
     
    [ceph-source]
    name=Ceph source packages
    baseurl=http://download.ceph.com/rpm-jewel/el7/SRPMS
    enabled=0
    priority=2
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
  • Install the Ceph packages

    [root@test ~]# yum install ceph
    [root@test ~]# rpm -qa | grep ceph
    ceph-mds-10.2.10-0.el7.x86_64
    ceph-10.2.10-0.el7.x86_64
    libcephfs1-10.2.10-0.el7.x86_64
    python-cephfs-10.2.10-0.el7.x86_64
    ceph-common-10.2.10-0.el7.x86_64
    ceph-base-10.2.10-0.el7.x86_64
    ceph-osd-10.2.10-0.el7.x86_64
    ceph-mon-10.2.10-0.el7.x86_64
    ceph-selinux-10.2.10-0.el7.x86_64
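  • The ceph.repo file above sets priority=2 on every section, but yum only honours that field when the priorities plugin is present, so it is worth installing it before (or alongside) the Ceph packages. A minimal sketch on el7:

    [root@test ~]# yum install yum-plugin-priorities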

Manual Deployment

Configuring Ceph

  • Create ceph.conf

    [root@test ~]# touch /etc/ceph/ceph.conf
  • Generate the Ceph cluster ID (fsid) and add it to the [global] section of ceph.conf

    [root@test ~]# uuidgen
    1ea51317-3ccf-4a59-88ef-7d86f7ee1c3b
    [root@test ~]# echo "[global]" >> /etc/ceph/ceph.conf
    [root@test ~]# echo "fsid = 1ea51317-3ccf-4a59-88ef-7d86f7ee1c3b" >> /etc/ceph/ceph.conf
    [root@test ~]# cat /etc/ceph/ceph.conf
    [global]
    fsid = 1ea51317-3ccf-4a59-88ef-7d86f7ee1c3b

Deploying the Monitor

  • Add the initial monitor hostname to the config

    [root@test ~]# echo "mon_initial_members = test" >> /etc/ceph/ceph.conf
  • Add the initial monitor IP address to the config

    [root@test ~]# echo "mon_host = 10.10.8.19" >> /etc/ceph/ceph.conf
  • Generate the monitor keyring

    [root@test ~]# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
  • Generate the client admin keyring

    [root@test ~]# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
  • Import the client keyring into the monitor keyring

    [root@test ~]# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
  • Generate the monmap (a quick way to inspect the result is shown in the note at the end of this section)

    [root@test ~]# monmaptool --create --add test 10.10.8.19 --fsid 1ea51317-3ccf-4a59-88ef-7d86f7ee1c3b /tmp/monmap
  • Create the monitor data directory

    [root@test ~]# mkdir /var/lib/ceph/mon/ceph-test
  • Create the monitor

    [root@test ~]# ceph-mon --mkfs -i test --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
  • Fill out ceph.conf

    Add the following to ceph.conf, taking care to replace the placeholder fields:
    public network = {network}[, {network}]
    cluster network = {network}[, {network}]
    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx
    osd journal size = {n}
     
    [root@test ~]# cat /etc/ceph/ceph.conf
    [global]
    fsid = 1ea51317-3ccf-4a59-88ef-7d86f7ee1c3b
    mon_initial_members = test
    mon_host = 10.10.8.19
    public_network = 10.10.8.0/24
    cluster_network = 10.10.8.0/24
    osd_journal_size = 2048
  • Create the done file

    [root@test ~]# touch /var/lib/ceph/mon/ceph-test/done
  • Fix ownership

    [root@test ~]# chown ceph:ceph  /var/lib/ceph/mon/ceph-test/ -R
  • Start the monitor service (see the note at the end of this section about enabling it at boot)

    [root@test ~]# systemctl start ceph-mon@test.service
  • Verify

    [root@test ~]# ceph -s
         cluster 1ea51317-3ccf-4a59-88ef-7d86f7ee1c3b
         health HEALTH_ERR
                no osds
         monmap e1: 1 mons at {test=10.10.8.19:6789/0}
                election epoch 3, quorum 0 test
         osdmap e1: 0 osds: 0 up, 0 in
                flags sortbitwise,require_jewel_osds
          pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
                0 kB used, 0 kB / 0 kB avail
                      64 creating
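  • Before running ceph-mon --mkfs it can be reassuring to check what actually went into the monmap; monmaptool can print it back (a small verification step, not strictly required by the procedure above):

    [root@test ~]# monmaptool --print /tmp/monmap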
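  • systemctl start only starts the monitor for the current boot. To have it come back automatically after a reboot, the instance unit can also be enabled (a minimal sketch; depending on the packaging, ceph.target may need enabling as well):

    [root@test ~]# systemctl enable ceph-mon@test.service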

Deploying an OSD

  • Generate a UUID for the OSD

    [root@test ~]# uuidgen
    8ff77024-27dd-423b-9f66-0a7cefcd3f53
  • Create the OSD; the command prints the new OSD number (0 here), which is used in the directory, keyring, and service names below

    [root@test ~]# ceph osd create 8ff77024-27dd-423b-9f66-0a7cefcd3f53
    0
  • Create the OSD data directory

    [root@test ~]# mkdir /var/lib/ceph/osd/ceph-0
  • Format the OSD disk

    [root@test ~]# mkfs -t xfs /dev/vdb
  • Mount the OSD disk (see the note at the end of this section about making the mount persistent)

    [root@test ~]# mount /dev/vdb /var/lib/ceph/osd/ceph-0/
  • Initialize the OSD

    [root@test ~]# ceph-osd -i 0 --mkfs --mkkey --osd-uuid 8ff77024-27dd-423b-9f66-0a7cefcd3f53
  • Register the OSD keyring

    [root@test ~]# ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring
  • Add the host bucket to the CRUSH map

    [root@test ~]# ceph osd tree
    ID WEIGHT TYPE NAME    UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1      0 root default                                  
     0      0 osd.0           down        0          1.0000
    [root@test ~]# ceph osd crush add-bucket test host
  • Move the host under root in the CRUSH map

    [root@test ~]# ceph osd crush move test root=default
  • Add the OSD to the host in the CRUSH map

    [root@test ~]# ceph osd crush add osd.0 1.0 host=test
  • Fix ownership

    [root@test ~]# chown ceph:ceph /var/lib/ceph/osd/ceph-0/ -R
  • Start the OSD service (see the note at the end of this section about enabling it at boot)

    [root@test ~]# systemctl start ceph-osd@0.service
  • Verify

    [root@test ~]# ceph -s
         cluster 1ea51317-3ccf-4a59-88ef-7d86f7ee1c3b
         health HEALTH_OK
         monmap e1: 1 mons at {test=10.10.8.19:6789/0}
                election epoch 3, quorum 0 test
         osdmap e12: 1 osds: 1 up, 1 in
                flags sortbitwise,require_jewel_osds
          pgmap v16: 64 pgs, 1 pools, 0 bytes data, 0 objects
                2080 MB used, 18389 MB / 20470 MB avail
                      64 active+clean
    [root@test ~]# ceph osd tree
    ID WEIGHT  TYPE NAME                UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1 1.00000 root default                                                                                       
    -2 1.00000     host test                                 
     0 1.00000         osd.0                 up  1.00000          1.00000
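  • The mount above only lasts until the next reboot. One way to make it permanent is an /etc/fstab entry for the data disk (a sketch assuming the same /dev/vdb device; mounting by filesystem UUID is more robust):

    [root@test ~]# echo "/dev/vdb /var/lib/ceph/osd/ceph-0 xfs defaults,noatime 0 0" >> /etc/fstab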
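  • As with the monitor, the OSD service can be enabled so that it starts automatically after a reboot (a minimal sketch; the instance name matches the OSD number):

    [root@test ~]# systemctl enable ceph-osd@0.service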