Single-node Ceph installation on CentOS 7.7

Server / VM preparation
1. Configure the network so the machine can reach public package mirrors.
2. Add a few extra disks to use as data disks; here I added two 10 GB disks, /dev/vdb and /dev/vdc.
System information

[root@localhost ceph-cluster]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)

Prepare the repositories
Add the CentOS base repo and the EPEL repo; this article uses Aliyun mirrors throughout.

[root@localhost ~] rm /etc/yum.repos.d/* -rf
[root@localhost ~] curl http://mirrors.aliyun.com/repo/Centos-7.repo > /etc/yum.repos.d/Centos-7.repo 
[root@localhost ~] curl http://mirrors.aliyun.com/repo/epel-7.repo > /etc/yum.repos.d/epel.repo

Add the Ceph repo: create a ceph.repo file under /etc/yum.repos.d/ and paste in the following content.

[Ceph-SRPMS]
name=Ceph SRPMS packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS/
enabled=1
gpgcheck=0
type=rpm-md
 
[Ceph-aarch64]
name=Ceph aarch64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/aarch64/
enabled=1
gpgcheck=0
type=rpm-md
 
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
enabled=1
gpgcheck=0
type=rpm-md
 
[Ceph-x86_64]
name=Ceph x86_64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
enabled=1
gpgcheck=0
type=rpm-md

Refresh the repo cache

[root@localhost ~] yum clean all
[root@localhost ~] yum makecache
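
Optionally, confirm that the base, epel and Ceph repos are all visible and that the jewel packages resolve (the exact versions listed depend on the state of the mirror):

[root@localhost ~] yum repolist
[root@localhost ~] yum list ceph ceph-deploy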

Server configuration
// Disable SELinux

[root@localhost ~] sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@localhost ~] setenforce 0

// Disable the firewall

[root@localhost ~] systemctl stop firewalld 
[root@localhost ~] systemctl disable firewalld
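
Optionally verify that both changes took effect: getenforce should report Permissive (Disabled after a reboot), and firewalld should be inactive.

[root@localhost ~] getenforce
[root@localhost ~] systemctl is-active firewalld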

// Add the server's IP to /etc/hosts; replace 192.168.172.171 with your server's IP address

[root@localhost ~] echo 192.168.172.171 $HOSTNAME >> /etc/hosts
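
Since ceph-deploy addresses the node by hostname, you can check which address the hostname now resolves to before going further:

[root@localhost ~] ping -c 1 $HOSTNAME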

// Prepare a temporary deployment directory

[root@localhost ~] rm -rf /root/ceph-cluster && mkdir -p /root/ceph-cluster && cd /root/ceph-cluster

Deployment
// Install the deployment packages

[root@localhost ceph-cluster] yum install ceph ceph-radosgw ceph-deploy -y
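
Optionally confirm the installed versions; with the jewel repo configured above, ceph should report a 10.2.x release, while ceph-deploy reports its own version number:

[root@localhost ceph-cluster] ceph --version
[root@localhost ceph-cluster] ceph-deploy --version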

// Initialize the configuration; all subsequent steps must be run from inside /root/ceph-cluster

[root@localhost ceph-cluster] ceph-deploy new $HOSTNAME
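
ceph-deploy new writes the initial cluster files into the current directory; you should see something like the following (file names may vary slightly with the ceph-deploy version):

[root@localhost ceph-cluster] ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring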

// Update the default configuration file

[root@localhost ceph-cluster] echo osd pool default size = 1 >> ceph.conf
[root@localhost ceph-cluster] echo osd crush chooseleaf type = 0 >> ceph.conf
[root@localhost ceph-cluster] echo osd max object name len = 256 >> ceph.conf
[root@localhost ceph-cluster] echo osd journal size = 128 >> ceph.conf
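
These options are what make a single-node cluster workable: size = 1 lets pools go active+clean with a single replica, chooseleaf type = 0 makes CRUSH choose OSDs rather than hosts, the object name length cap works around filename-length limits on some underlying filesystems, and the small journal suits the 10 GB data disks. You can confirm the appended lines, which are read as part of [global], with tail:

[root@localhost ceph-cluster] tail -n 4 ceph.conf
osd pool default size = 1
osd crush chooseleaf type = 0
osd max object name len = 256
osd journal size = 128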

// Initialize the monitor node

[root@localhost ceph-cluster] ceph-deploy mon create-initial
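
At this point the monitor should be running, though the cluster will not report a healthy state yet because there are no OSDs; you can check the monitor with:

[root@localhost ceph-cluster] ceph mon stat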

// Prepare the disks; adjust /dev/vdb and /dev/vdc to match your actual data disks

[root@localhost ceph-cluster] ceph-deploy osd prepare $HOSTNAME:/dev/vdb $HOSTNAME:/dev/vdc
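
If the data disks have been used before (old partition tables, filesystems, or a previous Ceph install), prepare may fail; in that case you can wipe them first with disk zap, again substituting your own device names:

[root@localhost ceph-cluster] ceph-deploy disk zap $HOSTNAME:/dev/vdb $HOSTNAME:/dev/vdc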

After the prepare step the disks are automatically formatted and mounted.

// Check the exact mount points with df

[root@localhost ceph-cluster]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
devtmpfs                devtmpfs  470M     0  470M   0% /dev
tmpfs                   tmpfs     487M     0  487M   0% /dev/shm
tmpfs                   tmpfs     487M  8.4M  478M   2% /run
tmpfs                   tmpfs     487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos-root xfs        17G  4.4G   13G  26% /
/dev/sda1               xfs      1014M  171M  844M  17% /boot
tmpfs                   tmpfs      98M     0   98M   0% /run/user/0
tmpfs                   tmpfs      98M   12K   98M   1% /run/user/42
/dev/sdb1               xfs       9.9G  108M  9.8G   2% /var/lib/ceph/osd/ceph-0
/dev/sdc1               xfs       9.9G  108M  9.8G   2% /var/lib/ceph/osd/ceph-1

// Activate the OSDs

[root@localhost ceph-cluster]# ceph-deploy osd activate $HOSTNAME:/var/lib/ceph/osd/ceph-0 $HOSTNAME:/var/lib/ceph/osd/ceph-1
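
After activation, the CRUSH tree should show both OSDs up under the single host:

[root@localhost ceph-cluster]# ceph osd tree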

Check the cluster status

[root@localhost ceph-cluster]# ceph -s
    cluster 07f9b419-f68a-45c9-85a0-2a59447ce7ab
     health HEALTH_OK
     monmap e1: 1 mons at {localhost=[::1]:6789/0}
            election epoch 3, quorum 0 localhost
     osdmap e10: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v18: 64 pgs, 1 pools, 0 bytes data, 0 objects
            214 MB used, 19987 MB / 20201 MB avail
                  64 active+clean
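
As an optional smoke test (the pool name test, the pg count of 64, and the object/file names are arbitrary choices here), create a pool, write one object into it with rados, and list it back:

[root@localhost ceph-cluster]# ceph osd pool create test 64
[root@localhost ceph-cluster]# echo hello > /tmp/hello.txt
[root@localhost ceph-cluster]# rados -p test put hello-object /tmp/hello.txt
[root@localhost ceph-cluster]# rados -p test ls
hello-object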