Deploying Ceph with the Official Tool ceph-deploy

Preface

Ceph is an excellent distributed storage cluster that can provide reliable storage services for cloud computing. In this post the author walks through deploying a Ceph storage cluster on Ubuntu 18.04.2; the version deployed is Luminous, i.e. Ceph 12.x, using Ceph's official deployment tool ceph-deploy. Because Ceph has demanding hardware requirements, running it on cloud VMs is not recommended for production; for a deployment test like this one, however, EflyCloud (睿江雲) cloud VMs are a suitable choice.

First, the environment and the role of each host:

Platform: EflyCloud (睿江雲)

Region: Guangdong G (VPC networking is more secure; SSD disks give high performance)

VM spec: 1 vCPU, 2 GB RAM

Network: VPC virtual private cloud (more secure and efficient)

Bandwidth: 5 Mbps

OS: Ubuntu 18.04

Number of VMs: 3

Software version: Ceph 12.x (Luminous)


Topology diagram

Hands-on deployment

1. Set the hostname

vim /etc/hostname

ceph$i

Each node's /etc/hostname contains only its own name (ceph1, ceph2 or ceph3); the 127.0.0.1 and $IP entries belong in /etc/hosts, covered in step 2.

2. Set up the hosts file ($IP is the storage IP; ceph$i is the hostname of each Ceph node)

vim /etc/hosts

127.0.0.1 localhost
$IP ceph$i

(add one "$IP ceph$i" entry for each of the three nodes)

The hosts file can then be copied to every node with scp once the SSH keys from step 3 have been distributed; a small loop for this is sketched below.
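A minimal sketch, assuming the node hostnames ceph1-ceph3 from step 1 and root login enabled as in step 3:

# Push the prepared hosts file to the other nodes (run from the node where it was edited)
for node in ceph1 ceph2 ceph3; do
    scp /etc/hosts root@$node:/etc/hosts
done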

3. Configure SSH and distribute keys

cat ssh-script.sh

#!/bin/sh
# Disable GSSAPIAuthentication
sed -i 's@GSSAPIAuthentication yes@GSSAPIAuthentication no@' /etc/ssh/sshd_config
# Allow root login over SSH
sed -i '/^#PermitRootLogin/c PermitRootLogin yes' /etc/ssh/sshd_config
# Increase the SSH login grace time
sed -i 's@#LoginGraceTime 2m@LoginGraceTime 30m@' /etc/ssh/sshd_config
# Disable DNS lookups on connection
sed -i 's@#UseDNS yes@UseDNS no@' /etc/ssh/sshd_config
# Restart the SSH service so the changes take effect (the service is named "ssh" on Ubuntu)
systemctl restart ssh

apt-get install -y sshpass
# Generate a key pair first if one does not exist yet
[ -f /root/.ssh/id_rsa.pub ] || ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
# deploy123 is the root password of the target nodes; the addresses are their storage IPs
sshpass -pdeploy123 ssh-copy-id -i /root/.ssh/id_rsa.pub -o StrictHostKeyChecking=no [email protected]
sshpass -pdeploy123 ssh-copy-id -i /root/.ssh/id_rsa.pub -o StrictHostKeyChecking=no [email protected]
sshpass -pdeploy123 ssh-copy-id -i /root/.ssh/id_rsa.pub -o StrictHostKeyChecking=no [email protected]

sh ssh-script.sh
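Before moving on, it is worth confirming that passwordless root login now works; a quick check (node names as set in step 1):

for node in ceph1 ceph2 ceph3; do
    ssh -o BatchMode=yes root@$node hostname   # BatchMode makes ssh fail instead of prompting for a password
done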

4. Configure system time and point all nodes at the same NTP server

cat ntp-synctime.sh

#!/bin/sh
timedatectl set-timezone Asia/Shanghai
apt-get install ntp ntpdate ntp-doc -y
# Replace the default Ubuntu pool with the internal NTP server
sed -i 's@pool 0.ubuntu.pool.ntp.org iburst@pool 120.25.115.20 prefer@' /etc/ntp.conf
# Stop ntpd during the one-shot sync, otherwise ntpdate cannot bind port 123
systemctl stop ntp
ntpdate 120.25.115.20 >>/dev/null
systemctl restart ntp
ntpq -p >>/dev/null

mkdir /opt/DevOps/CallCenter -p

cat << EOF >>/opt/DevOps/CallCenter/CallCenter.sh
#!/usr/bin/env bash
# sync time daily
# If no queued at job contains the marker yet, schedule the daily sync for 23:59
# (on Ubuntu the at spool lives under /var/spool/cron/atjobs)
grep "sync time daily" /var/spool/cron/atjobs/a* &>/dev/null || at -f /opt/DevOps/CallCenter/sync_time_daily.sh 23:59 &> /dev/null
EOF
cat << EOF >>/opt/DevOps/CallCenter/sync_time_daily.sh
#!/usr/bin/env bash
# sync time daily
systemctl stop ntp
ntpdate 120.25.115.20 || echo "sync time error"
systemctl start ntp
hwclock -w
EOF

chmod u+x /opt/DevOps/CallCenter/CallCenter.sh

crontab -l > crontab_sync_time

echo '*/5 * * * * /opt/DevOps/CallCenter/CallCenter.sh' >> crontab_sync_time

crontab crontab_sync_time

rm -f crontab_sync_time

sh ntp-synctime.sh
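To verify that each node is actually synchronizing against 120.25.115.20, the standard ntp and systemd tools can be used:

ntpq -p        # the configured server should eventually be marked with '*' (selected peer)
timedatectl    # time zone should be Asia/Shanghai and the clock reported as synchronized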

5. Install tools and configure the Ceph repository mirror and key

cat apt-tool.sh

#!/bin/sh
apt-get update
apt-get install -y dnsutils    # provides nslookup/dig (bind-utils is the CentOS package name)
apt-get install -y tcpdump
apt-get install -y wget
apt-get install -y vim
apt-get install -y ifenslave
apt-get install -y python-minimal python2.7 python-rbd python-pip

echo export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/debian-luminous >>/etc/profile

echo export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc >>/etc/profile

source /etc/profile

sh apt-tool.sh

6. Run the following steps as the deploy user

Install Ceph on all nodes; replace deploy123 with your own deploy user's password.

$ echo 'deploy123' | sudo -S apt-get install -y ceph

$ echo 'deploy123' | sudo -S pip install ceph-deploy

$ mkdir my-cluster

$ cd my-cluster

$ sudo ceph-deploy new ceph1 ceph2 ceph3
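ceph-deploy new writes an initial ceph.conf into my-cluster. The sketch below shows roughly what it contains; the fsid is generated per cluster, the mon_host entries are this cluster's (elided) storage IPs, and the public_network line is an optional manual addition whose subnet here is only a placeholder:

[global]
fsid = <generated-uuid>
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 10.0.XX.XX,10.0.XX.XX,10.0.XX.XX
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
# optional, added by hand; set to the VPC storage subnet
# public_network = <your-storage-subnet>/24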

7. Initialize the monitors

$ sudo ceph-deploy --overwrite-conf mon create-initial

[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring

[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring

[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring

[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists

[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring

[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring

[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpcrNzjv

8. Distribute the configuration files to each node

$ sudo ceph-deploy admin ceph1 ceph2 ceph3

deploy@ceph1:~/my-cluster$ ll /etc/ceph/

total 24

drwxr-xr-x 2 root root 70 Sep 21 11:54 ./

drwxr-xr-x 96 root root 8192 Sep 21 11:41 ../

-rw------- 1 root root 63 Sep 21 11:54 ceph.client.admin.keyring

-rw-r--r-- 1 root root 298 Sep 21 11:54 ceph.conf

-rw-r--r-- 1 root root 92 Jul 3 09:33 rbdmap
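With ceph.conf and the admin keyring now present in /etc/ceph, the cluster can be queried from any node; for example, to confirm that the three monitors have formed a quorum:

$ sudo ceph mon stat
$ sudo ceph quorum_status --format json-pretty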

9. Create the OSDs

cat create-osd.sh

#!/bin/sh

cd ~/my-cluster

# Each node has three data disks (/dev/vdb, /dev/vdc, /dev/vdd): zap them, then create one OSD per disk.
# The three nodes are processed in parallel; "wait" blocks until every background job finishes.
for Name in ceph1 ceph2 ceph3
do
    {
    echo 'deploy123' | sudo -S ceph-deploy disk zap $Name /dev/vdb
    echo 'deploy123' | sudo -S ceph-deploy disk zap $Name /dev/vdc
    echo 'deploy123' | sudo -S ceph-deploy disk zap $Name /dev/vdd

    echo 'deploy123' | sudo -S ceph-deploy osd create $Name --data /dev/vdb
    echo 'deploy123' | sudo -S ceph-deploy osd create $Name --data /dev/vdc
    echo 'deploy123' | sudo -S ceph-deploy osd create $Name --data /dev/vdd
    } &
done
wait

sh create-osd.sh
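When the script finishes, each node should contribute three OSDs (nine in total for this layout), which can be confirmed with:

$ sudo ceph osd tree   # all nine OSDs should be listed under their hosts and marked "up"
$ sudo ceph osd df     # per-OSD capacity and usage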

10. Create the mgr daemons and enable the dashboard

$ sudo ceph-deploy mgr create ceph1 ceph2 ceph3

$ sudo ceph mgr module enable dashboard
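Once the dashboard module is enabled, the URL served by the active mgr (by default on port 7000 in Luminous) can be looked up with:

$ sudo ceph mgr services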

11. Create an OSD pool

PG count calculator: https://ceph.com/pgcalc/

$ sudo ceph osd pool create {pool_name} 50 50
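As a rough worked example of the rule of thumb behind the calculator (total PGs ≈ OSDs × 100 / replica count, rounded to a power of two), assuming the nine OSDs and three replicas used in this setup:

# rule-of-thumb PG estimate for the whole cluster, to be divided among its pools
OSDS=9; REPLICAS=3
echo $(( OSDS * 100 / REPLICAS ))   # 300 -> round to 256 (nearest power of two)

The 50/50 in the command above are simply pg_num and pgp_num values for a small test pool.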

12. Set the replica count and minimum replica count

$ sudo ceph osd pool set {pool_name} size {num}

$ sudo ceph osd pool set {pool_name} min_size {num}
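Putting steps 11 and 12 together, a hypothetical pool named rbd-test (the name and numbers are only illustrative) would be created and verified like this:

$ sudo ceph osd pool create rbd-test 128 128
$ sudo ceph osd pool set rbd-test size 3
$ sudo ceph osd pool set rbd-test min_size 2
$ sudo ceph osd pool get rbd-test size
$ sudo ceph osd pool get rbd-test min_size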

13. After deployment, check the cluster state with ceph -s

$ sudo ceph -s

HEALTH_OK in the output means the cluster is healthy.

Placement groups reported as active+clean mean the PG state is normal.
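If the status is not HEALTH_OK, the following commands help narrow down which PGs or OSDs are causing the problem:

$ sudo ceph health detail
$ sudo ceph pg stat
$ sudo ceph osd stat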

Wrap-up

That completes the full deployment of Ceph on Ubuntu 18.04 with the ceph-deploy tool. If you have any questions about the process, feel free to leave a comment below.

EflyCloud (睿江雲): www.eflycloud.com
