Server information
IP | hostname | module |
192.168.7.11 | ceph1 | ceph-deploy,osd,mon |
192.168.7.12 | ceph2 | osd,mon |
192.168.7.13 | ceph3 | osd,mon |
Preparation
Disable SELinux on all three freshly installed hosts.
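For example, one common way to do this (assuming the stock CentOS 7 /etc/selinux/config):
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config #persists after reboot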
Name the three hosts: hostname ceph{1,2,3}
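To make the name persistent across reboots, you could also use hostnamectl on each node with its own name, e.g. on the first node:
hostnamectl set-hostname ceph1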
Configure local name resolution on each host:
cat >> /etc/hosts << EOF
192.168.7.11 ceph1
192.168.7.12 ceph2
192.168.7.13 ceph3
EOF
Open the firewall ports
firewall-cmd --permanent --add-port=6789/tcp #mon
firewall-cmd --permanent --add-port=6800-8000/tcp #osd
firewall-cmd --reload
Switch the yum repositories to the Aliyun mirror
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache
Synchronize time
yum -y install ntpdate ntp
ntpdate cn.ntp.org.cn
systemctl restart ntpd && systemctl enable ntpd
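Optionally, confirm that the node is actually syncing against its time sources:
ntpq -p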
Set up passwordless SSH login; configure on ceph1
ssh-keygen
ssh-copy-id ceph1
ssh-copy-id ceph2
ssh-copy-id ceph3
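A quick sanity check that passwordless login works from ceph1 to every node:
for node in ceph1 ceph2 ceph3; do ssh $node hostname; done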
If you hit errors, the official docs recommend starting over from scratch:
ceph-deploy purge {ceph-node} [{ceph-node}]
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
rm ceph.*
Installation and deployment:
Add the yum repo configuration file
vim /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
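If the other nodes should install Ceph from the same Aliyun mirror, one option is to copy this repo file to them, for example:
for node in ceph2 ceph3; do scp /etc/yum.repos.d/ceph.repo $node:/etc/yum.repos.d/; done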
Refresh the repositories
yum clean all && yum list
Install the EPEL repository and ceph-deploy
yum install epel-release -y
yum -y install ceph-deploy python-pip
Create a ceph working directory
mkdir ceph && cd ceph
Create the cluster; the generated files land in this directory. ceph-deploy --cluster {cluster-name} new ceph1 ceph2 ceph3 #creates a cluster with a custom name; the default name is ceph
ceph-deploy new ceph1 ceph2 ceph3
This produces a Ceph configuration file, a log file, and a monitor keyring.
If you hit errors here, the common causes and fixes are:
1. The local hosts entries were not added; add the local name resolution.
2. The command was mangled when copied; type it in manually and rerun it a few times.
Edit the configuration file
vim ceph.conf
[global]
fsid = df10a8e8-6610-4ab1-bc56-707e34f5530a
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 192.168.7.11,192.168.7.12,192.168.7.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
rbd_default_features = 1
mon clock drift allowed = 2
mon clock drift warn backoff = 30
public_network = 192.168.7.0/24
Some OS kernels only support the layering RBD feature, so it is best to specify the default features for newly created RBD images directly in the config file:
rbd_default_features = 1
Ceph is very sensitive to clock synchronization across the cluster, so the allowed clock drift can be widened:
mon clock drift allowed = 2
mon clock drift warn backoff = 30
Add the public network:
public_network = 192.168.7.0/24
Install the Ceph packages
yum -y install ceph ceph-radosgw #run on every node, or run ceph-deploy install ceph1 ceph2 ceph3 from the admin node
(Screenshot: ceph-deploy installation output)
Initialize the monitor(s) and gather the keys
ceph-deploy mon create-initial
Use ceph-deploy to copy the configuration file and admin keyring to all nodes, so that you do not have to specify the monitor address and ceph.client.admin.keyring every time you run a Ceph CLI command.
ceph-deploy admin ceph1 ceph2 ceph3
Check the cluster status
ceph -s #health: HEALTH_OK means success
Add a 100 GB disk to each of the three hosts and use them to create the OSDs
ceph-deploy disk zap ceph1 /dev/sdb
ceph-deploy osd create ceph1 --data /dev/sdb
ceph-deploy disk zap ceph2 /dev/sdb
ceph-deploy osd create ceph2 --data /dev/sdb
ceph-deploy disk zap ceph3 /dev/sdb
ceph-deploy osd create ceph3 --data /dev/sdb
Note: to create an OSD on an LVM volume, the --data argument must be volume_group/lv_name rather than the path to the volume's block device.
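As an illustrative sketch only (vg_ceph and lv_osd0 are made-up names), an OSD on an LVM volume would look like:
pvcreate /dev/sdb
vgcreate vg_ceph /dev/sdb
lvcreate -l 100%FREE -n lv_osd0 vg_ceph
ceph-deploy osd create ceph1 --data vg_ceph/lv_osd0 #volume_group/lv_name, not /dev/vg_ceph/lv_osd0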
Deploy mgr (only required from the L release, Luminous, onward)
ceph-deploy mgr create ceph1 ceph2 ceph3
Check the cluster status
ceph -s #the data section showing usage means success
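You can also list the OSDs and their placement in the CRUSH tree:
ceph osd tree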
You can enable the dashboard module for a web UI:
ceph mgr module enable dashboard
Access it in a browser: IP:7000
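If the dashboard needs to bind to a specific address or port, the Luminous mgr dashboard reads these from config-keys; a sketch using this cluster's first node (adjust the values as needed, and restart the mgr if the change does not take effect):
ceph config-key set mgr/dashboard/server_addr 192.168.7.11
ceph config-key set mgr/dashboard/server_port 7000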