1. Environment
CentOS 7.6
2. Lab steps
- Build a Ceph cluster from nodes ceph01 and ceph02
- Expand the cluster by adding node ceph03 (add a mon and an osd)
- Simulate removing an osd
- Recover the osd
- Common Ceph commands (create the mgr service, add and delete pools)
3. Deploying the Ceph cluster
- On all three hosts, disable the firewall and install common tools
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
yum install wget -y ##download tool
yum install net-tools -y ##provides ifconfig, route, etc. missing from a minimal install
yum install bash-completion -y ##command completion for a minimal install
yum install ntp ntpdate -y ##time-synchronization tools
- On ceph01 and ceph02, add hostname mappings to /etc/hosts
vi /etc/hosts
192.168.100.10 ceph01
192.168.100.11 ceph02
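Since the hosts file is edited again later when ceph03 joins, the append can be made idempotent. A minimal sketch: add an entry only if that exact line is not already present, so re-running the step never duplicates it. The target file is parameterised (third argument, defaulting to /etc/hosts) purely so the function can be tried on a scratch file first.

```shell
#!/bin/sh
# Append "ip hostname" to the hosts file only if the exact line is absent.
add_host_entry() {
    entry="$1 $2"                       # e.g. "192.168.100.10 ceph01"
    file="${3:-/etc/hosts}"
    grep -qxF "$entry" "$file" 2>/dev/null || echo "$entry" >> "$file"
}

# add_host_entry 192.168.100.10 ceph01
# add_host_entry 192.168.100.11 ceph02
```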
- Set up passwordless SSH and time synchronization between ceph01 and ceph02
####run on ceph01 to set up passwordless SSH
ssh-keygen -t rsa
ssh-copy-id ceph02
###time synchronization, run on ceph01
ntpdate ntp.aliyun.com ##sync from the Aliyun NTP server
vi /etc/ntp.conf
##change line 8 to: restrict default nomodify
##change line 17 to: restrict 192.168.100.0 mask 255.255.255.0 nomodify notrap
##delete lines 21 through 24##
21 server 0.centos.pool.ntp.org iburst
22 server 1.centos.pool.ntp.org iburst
23 server 2.centos.pool.ntp.org iburst
24 server 3.centos.pool.ntp.org iburst
###insert the following in place of the deleted lines (the fudge line must follow the server line it modifies)###
server 127.127.1.0
fudge 127.127.1.0 stratum 10
##restart the ntp service
systemctl restart ntpd
systemctl enable ntpd
###run on ceph02
ntpdate ceph01
##afterwards, run date on both nodes to confirm the times match
date
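Instead of eyeballing `date` on both nodes, the drift can be checked numerically: compare epoch seconds from the two hosts and accept a small tolerance. A minimal sketch; fetching the remote epoch (e.g. `ssh ceph02 date +%s`) is assumed to work over the passwordless SSH configured above.

```shell
#!/bin/sh
# Succeed when the two epoch timestamps differ by at most `tolerance` seconds.
drift_ok() {
    local_s=$1; remote_s=$2; tolerance=${3:-2}   # tolerance in seconds
    diff=$((local_s - remote_s))
    [ "$diff" -lt 0 ] && diff=$(( -diff ))
    [ "$diff" -le "$tolerance" ]
}
```

Used as `drift_ok "$(date +%s)" "$(ssh ceph02 date +%s)" && echo "clocks in sync"`.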
- Configure the online Ceph repo on ceph01 and ceph02
rm -rf /etc/yum.repos.d/*
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
vim /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
- After the repos are configured, update packages on ceph01 and ceph02
yum update -y
- Install Ceph and the deployment tools on ceph01 and ceph02
yum install ceph -y
yum -y install ceph-deploy
yum -y install python-setuptools
- Create the cluster on ceph01
cd /etc/ceph
ceph-deploy new ceph01 ceph02
- Bootstrap the initial monitors on ceph01
cd /etc/ceph
ceph-deploy mon create-initial
- Create the OSDs on ceph01
cd /etc/ceph
ceph-deploy osd create --data /dev/sdb ceph01
ceph-deploy osd create --data /dev/sdb ceph02
- Push the admin keyring from ceph01
ceph-deploy admin ceph01 ceph02
- Fix the permissions of /etc/ceph/ceph.client.admin.keyring on ceph01 and ceph02
##run on both ceph01 and ceph02
chmod +r /etc/ceph/ceph.client.admin.keyring
- Check the cluster status
ceph -s
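For scripting the status check, the HEALTH_* token can be pulled out of the status text instead of reading `ceph -s` by eye. A small sketch; the sample text in the test below is an assumed shape of typical status output, not captured from this lab.

```shell
#!/bin/sh
# Read `ceph -s` (or `ceph health`) text on stdin, print the first HEALTH_* token.
health_status() {
    grep -o 'HEALTH_[A-Z]*' | head -n 1
}
```

Used as `ceph -s | health_status`, which prints e.g. HEALTH_OK or HEALTH_WARN.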
4. Expanding the cluster (adding ceph03)
- Update the hosts file on all three nodes
192.168.100.10 ceph01
192.168.100.11 ceph02
192.168.100.12 ceph03
- On ceph01, set up passwordless SSH to ceph03
ssh-copy-id ceph03
- Sync ceph03's clock (run on ceph03)
ntpdate ceph01
- Configure the Ceph repo on ceph03
rm -rf /etc/yum.repos.d/*
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
vi /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
- After the repo is configured, update packages on ceph03
yum update -y
- Install Ceph and the deployment tools on ceph03
yum install ceph -y
yum -y install ceph-deploy
yum -y install python-setuptools
- On ceph01, edit the config file and push it to ceph02 and ceph03
vi /etc/ceph/ceph.conf
###change these lines
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 192.168.100.10,192.168.100.11,192.168.100.12
##add the public (client-facing) network segment
public_network = 192.168.100.0/24
###push the config to ceph02 and ceph03
ceph-deploy --overwrite-conf admin ceph02 ceph03
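An address outside public_network is a common reason a daemon fails to bind, so it can pay to sanity-check each node's IP against the CIDR before pushing the config. A sketch using plain POSIX shell arithmetic (no external tools assumed):

```shell
#!/bin/sh
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
    old_ifs=$IFS; IFS=.
    set -- $1
    IFS=$old_ifs
    echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}
# Succeed when the address in $1 falls inside the CIDR in $2 (e.g. 192.168.100.0/24).
in_cidr() {
    bits=${2#*/}
    mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
    [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "${2%/*}") & mask )) ]
}
```

Used as `in_cidr 192.168.100.12 192.168.100.0/24 && echo "ceph03 is on the public network"`.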
- Add the osd and mon
##on ceph03, fix the permissions of the pushed keyring
chmod +r /etc/ceph/ceph.client.admin.keyring
###on ceph01
ceph-deploy osd create --data /dev/sdb ceph03 ##add the osd
ceph-deploy mon add ceph03 ##add the mon
- After the expansion, check the cluster
ceph -s
5. OSD data recovery
- View OSD information
ceph osd tree
- Simulate removing osd.2
ceph osd out osd.2
ceph osd crush remove osd.2
ceph auth del osd.2 ##delete osd.2's auth entry
systemctl restart ceph-osd.target ##restart on ceph03
ceph osd rm osd.2 ##remove it completely
- After the removal, check the OSD information
ceph osd tree
- Recover the deleted osd.2
Go to the ceph03 node (osd.2 lives on ceph03)
df -hT ##check the osd mount
tmpfs tmpfs 3.9G 52K 3.9G 1% /var/lib/ceph/osd/ceph-2
cd /var/lib/ceph/osd/ceph-2
more fsid ###read the osd's fsid
490cb174-2126-4e00-818e-b395c761fdde
##perform the recovery
ceph osd create 490cb174-2126-4e00-818e-b395c761fdde
ceph auth add osd.2 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-2/keyring
ceph osd crush add 2 0.99899 host=ceph03
ceph osd in osd.2
- After the recovery, restart the service on ceph03 and check the OSD information
systemctl restart ceph-osd.target
ceph osd tree
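The recovery above feeds the fsid read from /var/lib/ceph/osd/ceph-2/fsid straight into `ceph osd create`. A minimal guard, assuming the standard UUID format, catches a truncated or mangled read before it reaches ceph:

```shell
#!/bin/sh
# Succeed only when $1 looks like a lowercase-hex UUID (8-4-4-4-12).
valid_fsid() {
    echo "$1" | grep -Eq '^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$'
}
```

Used as `fsid=$(cat /var/lib/ceph/osd/ceph-2/fsid); valid_fsid "$fsid" && ceph osd create "$fsid"`.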
6. Common Ceph commands
Create the mgr service
ceph-deploy mgr create ceph01 ceph02 ceph03
Create pools
ceph osd pool create cinder 64 ##create the cinder pool with 64 placement groups (a PG count, not a size)
ceph osd pool create nova 64
ceph osd pool create glance 64
##list the pools
ceph osd pool ls
cinder
nova
glance
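The "64" passed to `ceph osd pool create` is a placement-group count. A common rule of thumb (an assumption here, not something this lab prescribes) is OSDs * 100 / replicas, rounded up to the next power of two; small labs often just pick 64. Sketched:

```shell
#!/bin/sh
# Suggest a PG count: round OSDs * 100 / replicas up to the next power of two.
pg_count() {
    osds=$1; replicas=${2:-3}
    target=$(( osds * 100 / replicas ))
    pg=1
    while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
    echo "$pg"
}
```

For this 3-OSD cluster with 3 replicas, `pg_count 3 3` suggests 128.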
Delete a pool
On ceph01
vi /etc/ceph/ceph.conf ###add the deletion permission
mon_allow_pool_delete=true
ceph-deploy --overwrite-conf admin ceph02 ceph03 ##push the config to the other nodes
systemctl restart ceph-mon.target ###restart mon on all three nodes
ceph osd pool rm cinder cinder --yes-i-really-really-mean-it ##delete cinder
Rename a pool
ceph osd pool rename cinder cinder01 ##rename cinder to cinder01