Abstract:
This article covers building a Ceph cluster on the Luminous release (ceph -v: ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable)); details differ between Ceph releases.
The cluster runs on three CentOS VMs (the node1/2/3 nodes below) created on a physical Ubuntu machine (the admin-node below). First, which part of the overall Ceph architecture are we building? It corresponds to the RADOS Cluster, the part inside the red box of the architecture diagram.
I. Preparing the Machines
This article describes how to set up a Ceph storage cluster on one physical Ubuntu machine plus three CentOS 7 VMs.
Four machines in total: one admin node and three Ceph nodes:
hostname | ip | role | description |
---|---|---|---|
admin-node | 10.38.50.131 | ceph-deploy | admin node (physical Ubuntu machine) |
node1 | 192.168.122.157 | mon.node1 | Ceph node, monitor node (CentOS 7 VM) |
node2 | 192.168.122.158 | osd.0 | Ceph node, OSD node (CentOS 7 VM) |
node3 | 192.168.122.159 | osd.1 | Ceph node, OSD node (CentOS 7 VM) |
Admin node: admin-node
Ceph nodes: node1, node2, node3
All nodes: admin-node, node1, node2, node3
1. Set the hostname
# vi /etc/hostname
2. Edit the hosts file
# vi /etc/hosts
10.38.50.131 admin-node
192.168.122.157 node1
192.168.122.158 node2
192.168.122.159 node3
3. Verify connectivity (admin node)
Ping the short hostnames (hostname -s) to confirm network connectivity, and resolve any hostname-resolution problems before continuing.
$ ping node1
$ ping node2
$ ping node3
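The three pings can also be scripted. Below is a minimal sketch (check_node is a hypothetical helper; getent ships with both Ubuntu and CentOS) that verifies every node's name resolves before deployment starts:

```shell
#!/bin/sh
# Sketch: check that each Ceph node's hostname resolves (via /etc/hosts or DNS)
check_node() {
    getent hosts "$1" > /dev/null
}

for node in node1 node2 node3; do
    if check_node "$node"; then
        echo "$node resolves"
    else
        echo "$node: add it to /etc/hosts"
    fi
done
```

Running this on the admin node before invoking ceph-deploy saves a failed run later.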
II. Installing the Ceph Nodes
1. Install NTP (all nodes)
We recommend installing NTP on all Ceph nodes (especially the Ceph Monitor nodes) to prevent failures caused by clock drift; see the Ceph clock documentation for details.
sudo yum install ntp ntpdate ntp-doc
On Ubuntu:
sudo apt-get install ntp ntpdate ntp-doc
2. Install SSH (all nodes)
sudo yum install openssh-server
3. Create a user for deploying Ceph (all nodes)
ceph-deploy must log in to the Ceph nodes as an ordinary user with passwordless sudo, because it installs packages and writes configuration files and cannot stop to prompt for passwords.
Create a dedicated deploy user on every Ceph node in the cluster, but do not name it "ceph" (that name is reserved for the Ceph daemons).
1) Create the new user on each Ceph node
sudo useradd -d /home/yjiang2 -m yjiang2
sudo passwd yjiang2
2) Make sure the newly created user has sudo privileges on each Ceph node
echo "yjiang2 ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/yjiang2
sudo chmod 0440 /etc/sudoers.d/yjiang2
4. Enable passwordless SSH (admin node)
Because ceph-deploy will not prompt for passwords, you must generate an SSH key on the admin node and distribute its public key to each Ceph node. ceph-deploy will try to generate SSH key pairs for the initial monitors.
1) Generate an SSH key pair
Do not use sudo or the root user. When prompted "Enter passphrase", just press Enter for an empty passphrase:
// switch user; unless noted otherwise, all later steps run as this user
# su yjiang2
// generate the key pair
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/yjiang2/.ssh/id_rsa):
Created directory '/home/yjiang2/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/yjiang2/.ssh/id_rsa.
Your public key has been saved in /home/yjiang2/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Tb0VpUOZtmh+QBRjUOE0n2Uy3WuoZVgXn6TBBb2SsGk yjiang2@admin-node
The key's randomart image is:
+---[RSA 2048]----+
| .+@=OO*|
| *.BB@=|
| ..O+Xo+|
| o E+O.= |
| S oo=.o |
| .. . |
| . |
| |
| |
+----[SHA256]-----+
2) Copy the public key to each Ceph node
ssh-copy-id yjiang2@node1
ssh-copy-id yjiang2@node2
ssh-copy-id yjiang2@node3
When finished, under /home/yjiang2/.ssh/:
- admin-node gains id_rsa, id_rsa.pub and known_hosts;
- node1, node2 and node3 gain authorized_keys.
3) Edit the ~/.ssh/config file
Edit ~/.ssh/config (create it if it does not exist) so that ceph-deploy can log in to the Ceph nodes with the user you created.
// edit as the yjiang2 user, NOT with sudo; a root-owned file causes the "Bad owner or permissions" error below
$ vi ~/.ssh/config
Host admin-node
Hostname admin-node
User yjiang2
Host node1
Hostname node1
User yjiang2
Host node2
Hostname node2
User yjiang2
Host node3
Hostname node3
User yjiang2
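The four stanzas above follow one pattern, so they can be generated with a short loop. This is only a sketch: the SSH_CONFIG override variable is illustrative, and it assumes the same yjiang2 user on every node (by default it appends to ~/.ssh/config):

```shell
# Sketch: generate ~/.ssh/config stanzas for all nodes.
# Run as yjiang2 (not via sudo) so the file stays owned by the right user.
CONFIG="${SSH_CONFIG:-$HOME/.ssh/config}"
mkdir -p "$(dirname "$CONFIG")"
for host in admin-node node1 node2 node3; do
    printf 'Host %s\n    Hostname %s\n    User yjiang2\n' "$host" "$host"
done >> "$CONFIG"
chmod 600 "$CONFIG"   # ssh rejects config files writable by group/others
```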
4) Test that SSH logins work
$ ssh yjiang2@node1
$ exit
$ ssh yjiang2@node2
$ exit
$ ssh yjiang2@node3
$ exit
- Problem: if you see "Bad owner or permissions on /home/yjiang2/.ssh/config", fix the file's ownership and permissions:
$ sudo chown yjiang2:yjiang2 ~/.ssh/config && chmod 644 ~/.ssh/config
5. Bring networking up at boot (Ceph nodes: node1+node2+node3)
Ceph OSD daemons communicate with each other over the network and report their status to the Monitors. If networking defaults to off, the cluster cannot come online at boot until you enable it. My network interface is eth0.
$ sudo grep ONBOOT -rn /etc/sysconfig/network-scripts/
/etc/sysconfig/network-scripts/ifcfg-lo:8:ONBOOT=yes
/etc/sysconfig/network-scripts/ifup-ippp:22:if [ "${2}" = "boot" -a "${ONBOOT}" = "no" ]; then
/etc/sysconfig/network-scripts/ifup-plip:9:if [ "foo$2" = "fooboot" -a "${ONBOOT}" = "no" ]; then
/etc/sysconfig/network-scripts/ifup-plusb:20:if [ "foo$2" = "fooboot" -a "${ONBOOT}" = "no" ]
/etc/sysconfig/network-scripts/ifup-ppp:40:if [ "${2}" = "boot" -a "${ONBOOT}" = "no" ]; then
/etc/sysconfig/network-scripts/ifcfg-eth0:15:ONBOOT=yes
/etc/sysconfig/network-scripts/ifcfg-eth0.bak:15:ONBOOT=yes
// make sure ONBOOT is set to yes
6. Open the required ports (Ceph nodes: node1+node2+node3)
Ceph Monitors communicate on port 6789 by default, and OSDs communicate on ports in the 6800-7300 range. Ceph OSDs can use multiple network connections for replication and heartbeat traffic with clients, monitors and other OSDs.
$ sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
$ sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
$ sudo firewall-cmd --reload
// or disable the firewall entirely
$ sudo systemctl stop firewalld
$ sudo systemctl disable firewalld
7. Terminals (TTY) (Ceph nodes: node1+node2+node3)
Running ceph-deploy commands on CentOS and RHEL may fail. If your Ceph nodes have requiretty set by default, run
$ sudo visudo
find the Defaults requiretty option and change it to Defaults:yjiang2 !requiretty (substitute your deploy user) or comment it out, so that ceph-deploy can connect with the user created earlier (see "Create a user for deploying Ceph").
When editing /etc/sudoers, always use sudo visudo rather than a plain text editor.
8. Disable SELinux (Ceph nodes: node1+node2+node3)
$ sudo setenforce 0
setenforce: SELinux is disabled
To make the SELinux change permanent (if it is indeed the root cause), edit its configuration file /etc/selinux/config:
$ sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
That is, set SELINUX=disabled.
9. Configure the EPEL repository (admin node: admin-node)
$ sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
10. Add the Ceph package repository (admin node: admin-node)
$ sudo vi /etc/yum.repos.d/ceph.repo
Paste the following content and save it as /etc/yum.repos.d/ceph.repo:
[ceph]
name=ceph
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/noarch/
gpgcheck=0
11. Update the repositories and install ceph-deploy (admin node: admin-node)
$ sudo apt-get update && sudo apt-get install ceph-deploy
On the CentOS nodes, also install the yum priorities plugin:
$ sudo yum install yum-plugin-priorities
This can take a while; be patient.
III. Building the Cluster
Perform the following steps on the admin node:
1. Preparation: create a working directory
Create a directory on the admin node to hold the configuration files and keys that ceph-deploy generates.
$ cd ~
$ mkdir ceph-cluster
$ cd ceph-cluster
Note: if you run into trouble after installing Ceph, you can purge the packages and configuration with the following commands:
// remove the packages
$ ceph-deploy purge admin-node node1 node2 node3
// purge the configuration
$ ceph-deploy purgedata admin-node node1 node2 node3
$ ceph-deploy forgetkeys
2. Create the cluster's monitor node
Create the cluster and initialize the monitor node(s). Command format: $ ceph-deploy new {initial-monitor-node(s)}
Here node1 is the monitor node, so run:
$ ceph-deploy new node1
When it finishes, ceph-cluster contains three new files: ceph.conf, ceph-deploy-ceph.log and ceph.mon.keyring.
- Problem: if you see "[ceph_deploy][ERROR ] RuntimeError: remote connection got closed, ensure requiretty is disabled for node1", run sudo visudo and comment out Defaults requiretty.
3. Edit the configuration file
$ cat ceph.conf
Its contents:
[global]
fsid = 027d5b3c-e011-4a92-9449-c8755cd8f500
mon_initial_members = node1
mon_host = 192.168.122.157
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
Run the command below to change the default replica count in the Ceph configuration file from 3 to 2, so that the cluster can reach the active + clean state with only two OSDs. It appends osd pool default size = 2 to the [global] section:
sed -i '$a\osd pool default size = 2' ceph.conf
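Note that the sed '$a' form appends unconditionally, so running it twice duplicates the line. A slightly more careful sketch (CONF is just an illustrative variable for the local ceph.conf path) adds the setting only if it is missing:

```shell
# Sketch: append "osd pool default size = 2" to ceph.conf exactly once
CONF="${CONF:-ceph.conf}"
touch "$CONF"
grep -q '^osd pool default size' "$CONF" || \
    echo 'osd pool default size = 2' >> "$CONF"
```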
If you have multiple NICs, you can add public network / cluster network entries to the [global] section of the Ceph configuration file:
public network = {ip-address}/{netmask}
cluster network = {ip-address}/{netmask}
// By default only public network is set. Production deployments usually define both, separating the cluster (replication) network from the public (client) network.
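For example, in this lab the VMs sit on 192.168.122.0/24; a filled-in [global] fragment might look like the following (the 10.0.1.0/24 cluster subnet is hypothetical, standing in for a second NIC):

```ini
[global]
public network = 192.168.122.0/24
cluster network = 10.0.1.0/24
```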
4. Install Ceph (we finally get to install Ceph; go grab some water)
Install ceph on all nodes:
$ ceph-deploy install admin-node node1 node2 node3
- Problem: [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: yum -y install epel-release
Workaround: remove the conflicting package on the failing node:
sudo yum -y remove epel-release
For other installation problems, see: a summary of issues encountered while building Ceph.
Log of a successful installation (it finally reports Complete!):
~$ ceph-deploy install admin-node node1 node2 node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/yjiang2/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.38): /usr/bin/ceph-deploy install admin-node node1 node2 node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f149da5ec20>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : False
[ceph_deploy.cli][INFO ] func : <function install at 0x7f149e38f5f0>
[ceph_deploy.cli][INFO ] install_mgr : False
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['admin-node', 'node1', 'node2', 'node3']
[ceph_deploy.cli][INFO ] install_rgw : False
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] nogpgcheck : False
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts admin-node node1 node2 node3
[ceph_deploy.install][DEBUG ] Detecting platform for host admin-node ...
[admin-node][DEBUG ] connection detected need for sudo
[sudo] password for yjiang2:
[admin-node][DEBUG ] connected to host: admin-node
[admin-node][DEBUG ] detect platform information from remote host
[admin-node][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: Ubuntu 18.04 bionic
[admin-node][INFO ] installing Ceph on admin-node
[admin-node][INFO ] Running command: sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
[admin-node][DEBUG ] Reading package lists...
[admin-node][DEBUG ] Building dependency tree...
[admin-node][DEBUG ] Reading state information...
[admin-node][DEBUG ] ca-certificates is already the newest version (20180409).
[admin-node][DEBUG ] apt-transport-https is already the newest version (1.6.11).
[admin-node][DEBUG ] The following packages were automatically installed and are no longer required:
[admin-node][DEBUG ] linux-headers-4.15.0-50 linux-headers-4.15.0-50-generic
[admin-node][DEBUG ] linux-headers-4.18.0-15 linux-headers-4.18.0-15-generic
[admin-node][DEBUG ] linux-image-4.15.0-50-generic linux-image-4.18.0-15-generic
[admin-node][DEBUG ] linux-modules-4.15.0-50-generic linux-modules-4.18.0-15-generic
[admin-node][DEBUG ] linux-modules-extra-4.15.0-50-generic linux-modules-extra-4.18.0-15-generic
[admin-node][DEBUG ] Use 'sudo apt autoremove' to remove them.
[admin-node][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 69 not upgraded.
[admin-node][INFO ] Running command: sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[admin-node][DEBUG ] Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
[admin-node][DEBUG ] Hit:2 http://cn.archive.ubuntu.com/ubuntu bionic InRelease
[admin-node][DEBUG ] Hit:3 http://download.ceph.com/debian-luminous bionic InRelease
[admin-node][DEBUG ] Get:4 http://cn.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
[admin-node][DEBUG ] Hit:5 http://cn.archive.ubuntu.com/ubuntu bionic-backports InRelease
[admin-node][DEBUG ] Fetched 177 kB in 2s (78.8 kB/s)
[admin-node][DEBUG ] Reading package lists...
[admin-node][INFO ] Running command: sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ceph ceph-osd ceph-mds ceph-mon radosgw
[admin-node][DEBUG ] Reading package lists...
[admin-node][DEBUG ] Building dependency tree...
[admin-node][DEBUG ] Reading state information...
[admin-node][DEBUG ] ceph is already the newest version (12.2.11-0ubuntu0.18.04.2).
[admin-node][DEBUG ] ceph-mon is already the newest version (12.2.11-0ubuntu0.18.04.2).
[admin-node][DEBUG ] ceph-osd is already the newest version (12.2.11-0ubuntu0.18.04.2).
[admin-node][DEBUG ] radosgw is already the newest version (12.2.11-0ubuntu0.18.04.2).
[admin-node][DEBUG ] ceph-mds is already the newest version (12.2.11-0ubuntu0.18.04.2).
[admin-node][DEBUG ] The following packages were automatically installed and are no longer required:
[admin-node][DEBUG ] linux-headers-4.15.0-50 linux-headers-4.15.0-50-generic
[admin-node][DEBUG ] linux-headers-4.18.0-15 linux-headers-4.18.0-15-generic
[admin-node][DEBUG ] linux-image-4.15.0-50-generic linux-image-4.18.0-15-generic
[admin-node][DEBUG ] linux-modules-4.15.0-50-generic linux-modules-4.18.0-15-generic
[admin-node][DEBUG ] linux-modules-extra-4.15.0-50-generic linux-modules-extra-4.18.0-15-generic
[admin-node][DEBUG ] Use 'sudo apt autoremove' to remove them.
[admin-node][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 69 not upgraded.
[admin-node][INFO ] Running command: sudo ceph --version
[admin-node][DEBUG ] ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable)
[ceph_deploy.install][DEBUG ] Detecting platform for host node1 ...
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.6.1810 Core
[node1][INFO ] installing Ceph on node1
[node1][INFO ] Running command: sudo yum clean all
[node1][DEBUG ] Loaded plugins: fastestmirror, langpacks
[node1][DEBUG ] Cleaning repos: base centos-ceph-luminous extras updates
[node1][DEBUG ] Cleaning up list of fastest mirrors
[node1][INFO ] Running command: sudo yum -y install ceph ceph-radosgw
[node1][DEBUG ] Loaded plugins: fastestmirror, langpacks
[node1][DEBUG ] Determining fastest mirrors
[node1][DEBUG ] * base: mirrors.aliyun.com
[node1][DEBUG ] * extras: mirrors.aliyun.com
[node1][DEBUG ] * updates: mirrors.aliyun.com
[node1][DEBUG ] Resolving Dependencies
[node1][DEBUG ] --> Running transaction check
[node1][DEBUG ] ---> Package ceph.x86_64 2:12.2.11-0.el7 will be installed
[node1][DEBUG ] --> Processing Dependency: ceph-osd = 2:12.2.11-0.el7 for package: 2:ceph-12.2.11-0.el7.x86_64
[node1][DEBUG ] --> Processing Dependency: ceph-mon = 2:12.2.11-0.el7 for package: 2:ceph-12.2.11-0.el7.x86_64
[node1][DEBUG ] --> Processing Dependency: ceph-mgr = 2:12.2.11-0.el7 for package: 2:ceph-12.2.11-0.el7.x86_64
[node1][DEBUG ] --> Processing Dependency: ceph-mds = 2:12.2.11-0.el7 for package: 2:ceph-12.2.11-0.el7.x86_64
[node1][DEBUG ] ---> Package ceph-radosgw.x86_64 2:12.2.11-0.el7 will be installed
[node1][DEBUG ] --> Processing Dependency: ceph-selinux = 2:12.2.11-0.el7 for package: 2:ceph-radosgw-12.2.11-0.el7.x86_64
[node1][DEBUG ] --> Processing Dependency: mailcap for package: 2:ceph-radosgw-12.2.11-0.el7.x86_64
[node1][DEBUG ] --> Running transaction check
[node1][DEBUG ] ---> Package ceph-mds.x86_64 2:12.2.11-0.el7 will be installed
[node1][DEBUG ] --> Processing Dependency: ceph-base = 2:12.2.11-0.el7 for package: 2:ceph-mds-12.2.11-0.el7.x86_64
[node1][DEBUG ] ---> Package ceph-mgr.x86_64 2:12.2.11-0.el7 will be installed
[node1][DEBUG ] --> Processing Dependency: python-werkzeug for package: 2:ceph-mgr-12.2.11-0.el7.x86_64
[node1][DEBUG ] --> Processing Dependency: python-pecan for package: 2:ceph-mgr-12.2.11-0.el7.x86_64
[node1][DEBUG ] --> Processing Dependency: python-jinja2 for package: 2:ceph-mgr-12.2.11-0.el7.x86_64
[node1][DEBUG ] --> Processing Dependency: python-cherrypy for package: 2:ceph-mgr-12.2.11-0.el7.x86_64
[node1][DEBUG ] --> Processing Dependency: pyOpenSSL for package: 2:ceph-mgr-12.2.11-0.el7.x86_64
[node1][DEBUG ] ---> Package ceph-mon.x86_64 2:12.2.11-0.el7 will be installed
[node1][DEBUG ] --> Processing Dependency: python-flask for package: 2:ceph-mon-12.2.11-0.el7.x86_64
[node1][DEBUG ] ---> Package ceph-osd.x86_64 2:12.2.11-0.el7 will be installed
[node1][DEBUG ] ---> Package ceph-selinux.x86_64 2:12.2.11-0.el7 will be installed
[node1][DEBUG ] ---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed
[node1][DEBUG ] --> Running transaction check
[node1][DEBUG ] ---> Package ceph-base.x86_64 2:12.2.11-0.el7 will be installed
[node1][DEBUG ] ---> Package pyOpenSSL.x86_64 0:0.13.1-4.el7 will be installed
[node1][DEBUG ] ---> Package python-cherrypy.noarch 0:3.2.2-4.el7 will be installed
[node1][DEBUG ] ---> Package python-flask.noarch 1:0.10.1-4.el7 will be installed
[node1][DEBUG ] --> Processing Dependency: python-itsdangerous for package: 1:python-flask-0.10.1-4.el7.noarch
[node1][DEBUG ] ---> Package python-jinja2.noarch 0:2.7.2-3.el7_6 will be installed
[node1][DEBUG ] --> Processing Dependency: python-babel >= 0.8 for package: python-jinja2-2.7.2-3.el7_6.noarch
[node1][DEBUG ] --> Processing Dependency: python-markupsafe for package: python-jinja2-2.7.2-3.el7_6.noarch
[node1][DEBUG ] ---> Package python-werkzeug.noarch 0:0.9.1-2.el7 will be installed
[node1][DEBUG ] ---> Package python2-pecan.noarch 0:1.1.2-1.el7 will be installed
[node1][DEBUG ] --> Processing Dependency: python-webtest for package: python2-pecan-1.1.2-1.el7.noarch
[node1][DEBUG ] --> Processing Dependency: python-webob for package: python2-pecan-1.1.2-1.el7.noarch
[node1][DEBUG ] --> Processing Dependency: python-singledispatch for package: python2-pecan-1.1.2-1.el7.noarch
[node1][DEBUG ] --> Processing Dependency: python-simplegeneric for package: python2-pecan-1.1.2-1.el7.noarch
[node1][DEBUG ] --> Processing Dependency: python-mako for package: python2-pecan-1.1.2-1.el7.noarch
[node1][DEBUG ] --> Processing Dependency: python-logutils for package: python2-pecan-1.1.2-1.el7.noarch
[node1][DEBUG ] --> Running transaction check
[node1][DEBUG ] ---> Package python-babel.noarch 0:0.9.6-8.el7 will be installed
[node1][DEBUG ] ---> Package python-itsdangerous.noarch 0:0.23-2.el7 will be installed
[node1][DEBUG ] ---> Package python-logutils.noarch 0:0.3.3-3.el7 will be installed
[node1][DEBUG ] ---> Package python-mako.noarch 0:0.8.1-2.el7 will be installed
[node1][DEBUG ] --> Processing Dependency: python-beaker for package: python-mako-0.8.1-2.el7.noarch
[node1][DEBUG ] ---> Package python-markupsafe.x86_64 0:0.11-10.el7 will be installed
[node1][DEBUG ] ---> Package python-simplegeneric.noarch 0:0.8-7.el7 will be installed
[node1][DEBUG ] ---> Package python-webob.noarch 0:1.2.3-7.el7 will be installed
[node1][DEBUG ] ---> Package python-webtest.noarch 0:1.3.4-6.el7 will be installed
[node1][DEBUG ] ---> Package python2-singledispatch.noarch 0:3.4.0.3-4.el7 will be installed
[node1][DEBUG ] --> Running transaction check
[node1][DEBUG ] ---> Package python-beaker.noarch 0:1.5.4-10.el7 will be installed
[node1][DEBUG ] --> Processing Dependency: python-paste for package: python-beaker-1.5.4-10.el7.noarch
[node1][DEBUG ] --> Running transaction check
[node1][DEBUG ] ---> Package python-paste.noarch 0:1.7.5.1-9.20111221hg1498.el7 will be installed
[node1][DEBUG ] --> Processing Dependency: python-tempita for package: python-paste-1.7.5.1-9.20111221hg1498.el7.noarch
[node1][DEBUG ] --> Running transaction check
[node1][DEBUG ] ---> Package python-tempita.noarch 0:0.5.1-6.el7 will be installed
[node1][DEBUG ] --> Finished Dependency Resolution
[node1][DEBUG ]
[node1][DEBUG ] Dependencies Resolved
[node1][DEBUG ]
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Package Arch Version Repository Size
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Installing:
[node1][DEBUG ] ceph x86_64 2:12.2.11-0.el7 centos-ceph-luminous 2.5 k
[node1][DEBUG ] ceph-radosgw x86_64 2:12.2.11-0.el7 centos-ceph-luminous 4.0 M
[node1][DEBUG ] Installing for dependencies:
[node1][DEBUG ] ceph-base x86_64 2:12.2.11-0.el7 centos-ceph-luminous 4.0 M
[node1][DEBUG ] ceph-mds x86_64 2:12.2.11-0.el7 centos-ceph-luminous 3.7 M
[node1][DEBUG ] ceph-mgr x86_64 2:12.2.11-0.el7 centos-ceph-luminous 3.6 M
[node1][DEBUG ] ceph-mon x86_64 2:12.2.11-0.el7 centos-ceph-luminous 5.1 M
[node1][DEBUG ] ceph-osd x86_64 2:12.2.11-0.el7 centos-ceph-luminous 13 M
[node1][DEBUG ] ceph-selinux x86_64 2:12.2.11-0.el7 centos-ceph-luminous 21 k
[node1][DEBUG ] mailcap noarch 2.1.41-2.el7 base 31 k
[node1][DEBUG ] pyOpenSSL x86_64 0.13.1-4.el7 base 135 k
[node1][DEBUG ] python-babel noarch 0.9.6-8.el7 base 1.4 M
[node1][DEBUG ] python-beaker noarch 1.5.4-10.el7 base 80 k
[node1][DEBUG ] python-cherrypy noarch 3.2.2-4.el7 base 422 k
[node1][DEBUG ] python-flask noarch 1:0.10.1-4.el7 extras 204 k
[node1][DEBUG ] python-itsdangerous noarch 0.23-2.el7 extras 24 k
[node1][DEBUG ] python-jinja2 noarch 2.7.2-3.el7_6 updates 518 k
[node1][DEBUG ] python-logutils noarch 0.3.3-3.el7 centos-ceph-luminous 42 k
[node1][DEBUG ] python-mako noarch 0.8.1-2.el7 base 307 k
[node1][DEBUG ] python-markupsafe x86_64 0.11-10.el7 base 25 k
[node1][DEBUG ] python-paste noarch 1.7.5.1-9.20111221hg1498.el7
[node1][DEBUG ] base 866 k
[node1][DEBUG ] python-simplegeneric noarch 0.8-7.el7 centos-ceph-luminous 12 k
[node1][DEBUG ] python-tempita noarch 0.5.1-6.el7 base 33 k
[node1][DEBUG ] python-webob noarch 1.2.3-7.el7 base 202 k
[node1][DEBUG ] python-webtest noarch 1.3.4-6.el7 base 102 k
[node1][DEBUG ] python-werkzeug noarch 0.9.1-2.el7 extras 562 k
[node1][DEBUG ] python2-pecan noarch 1.1.2-1.el7 centos-ceph-luminous 268 k
[node1][DEBUG ] python2-singledispatch noarch 3.4.0.3-4.el7 centos-ceph-luminous 18 k
[node1][DEBUG ]
[node1][DEBUG ] Transaction Summary
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Install 2 Packages (+25 Dependent packages)
[node1][DEBUG ]
[node1][DEBUG ] Total download size: 38 M
[node1][DEBUG ] Installed size: 137 M
[node1][DEBUG ] Downloading packages:
[node1][DEBUG ] --------------------------------------------------------------------------------
[node1][DEBUG ] Total 179 kB/s | 38 MB 03:39
[node1][DEBUG ] Running transaction check
[node1][DEBUG ] Running transaction test
[node1][DEBUG ] Transaction test succeeded
[node1][DEBUG ] Running transaction
[node1][DEBUG ] Installing : 2:ceph-base-12.2.11-0.el7.x86_64 1/27
[node1][DEBUG ] Installing : 2:ceph-selinux-12.2.11-0.el7.x86_64 2/27
[node1][DEBUG ] Installing : pyOpenSSL-0.13.1-4.el7.x86_64 3/27
[node1][DEBUG ] Installing : python-webob-1.2.3-7.el7.noarch 4/27
[node1][DEBUG ] Installing : python-markupsafe-0.11-10.el7.x86_64 5/27
[node1][DEBUG ] Installing : python-werkzeug-0.9.1-2.el7.noarch 6/27
[node1][DEBUG ] Installing : python-webtest-1.3.4-6.el7.noarch 7/27
[node1][DEBUG ] Installing : 2:ceph-mds-12.2.11-0.el7.x86_64 8/27
[node1][DEBUG ] Installing : 2:ceph-osd-12.2.11-0.el7.x86_64 9/27
[node1][DEBUG ] Installing : python-tempita-0.5.1-6.el7.noarch 10/27
[node1][DEBUG ] Installing : python-paste-1.7.5.1-9.20111221hg1498.el7.noarch 11/27
[node1][DEBUG ] Installing : python-beaker-1.5.4-10.el7.noarch 12/27
[node1][DEBUG ] Installing : python-mako-0.8.1-2.el7.noarch 13/27
[node1][DEBUG ] Installing : python-cherrypy-3.2.2-4.el7.noarch 14/27
[node1][DEBUG ] Installing : python-babel-0.9.6-8.el7.noarch 15/27
[node1][DEBUG ] Installing : python-jinja2-2.7.2-3.el7_6.noarch 16/27
[node1][DEBUG ] Installing : python-itsdangerous-0.23-2.el7.noarch 17/27
[node1][DEBUG ] Installing : 1:python-flask-0.10.1-4.el7.noarch 18/27
[node1][DEBUG ] Installing : 2:ceph-mon-12.2.11-0.el7.x86_64 19/27
[node1][DEBUG ] Installing : python-logutils-0.3.3-3.el7.noarch 20/27
[node1][DEBUG ] Installing : mailcap-2.1.41-2.el7.noarch 21/27
[node1][DEBUG ] Installing : python2-singledispatch-3.4.0.3-4.el7.noarch 22/27
[node1][DEBUG ] Installing : python-simplegeneric-0.8-7.el7.noarch 23/27
[node1][DEBUG ] Installing : python2-pecan-1.1.2-1.el7.noarch 24/27
[node1][DEBUG ] Installing : 2:ceph-mgr-12.2.11-0.el7.x86_64 25/27
[node1][DEBUG ] Installing : 2:ceph-12.2.11-0.el7.x86_64 26/27
[node1][DEBUG ] Installing : 2:ceph-radosgw-12.2.11-0.el7.x86_64 27/27
[node1][DEBUG ] Verifying : 1:python-flask-0.10.1-4.el7.noarch 1/27
[node1][DEBUG ] Verifying : python-simplegeneric-0.8-7.el7.noarch 2/27
[node1][DEBUG ] Verifying : python2-singledispatch-3.4.0.3-4.el7.noarch 3/27
[node1][DEBUG ] Verifying : mailcap-2.1.41-2.el7.noarch 4/27
[node1][DEBUG ] Verifying : 2:ceph-mon-12.2.11-0.el7.x86_64 5/27
[node1][DEBUG ] Verifying : python-logutils-0.3.3-3.el7.noarch 6/27
[node1][DEBUG ] Verifying : python-mako-0.8.1-2.el7.noarch 7/27
[node1][DEBUG ] Verifying : 2:ceph-12.2.11-0.el7.x86_64 8/27
[node1][DEBUG ] Verifying : python-beaker-1.5.4-10.el7.noarch 9/27
[node1][DEBUG ] Verifying : python-itsdangerous-0.23-2.el7.noarch 10/27
[node1][DEBUG ] Verifying : python-jinja2-2.7.2-3.el7_6.noarch 11/27
[node1][DEBUG ] Verifying : 2:ceph-mds-12.2.11-0.el7.x86_64 12/27
[node1][DEBUG ] Verifying : python-werkzeug-0.9.1-2.el7.noarch 13/27
[node1][DEBUG ] Verifying : python-markupsafe-0.11-10.el7.x86_64 14/27
[node1][DEBUG ] Verifying : python2-pecan-1.1.2-1.el7.noarch 15/27
[node1][DEBUG ] Verifying : python-babel-0.9.6-8.el7.noarch 16/27
[node1][DEBUG ] Verifying : python-paste-1.7.5.1-9.20111221hg1498.el7.noarch 17/27
[node1][DEBUG ] Verifying : python-webob-1.2.3-7.el7.noarch 18/27
[node1][DEBUG ] Verifying : pyOpenSSL-0.13.1-4.el7.x86_64 19/27
[node1][DEBUG ] Verifying : 2:ceph-base-12.2.11-0.el7.x86_64 20/27
[node1][DEBUG ] Verifying : python-cherrypy-3.2.2-4.el7.noarch 21/27
[node1][DEBUG ] Verifying : 2:ceph-mgr-12.2.11-0.el7.x86_64 22/27
[node1][DEBUG ] Verifying : python-tempita-0.5.1-6.el7.noarch 23/27
[node1][DEBUG ] Verifying : 2:ceph-osd-12.2.11-0.el7.x86_64 24/27
[node1][DEBUG ] Verifying : python-webtest-1.3.4-6.el7.noarch 25/27
[node1][DEBUG ] Verifying : 2:ceph-radosgw-12.2.11-0.el7.x86_64 26/27
[node1][DEBUG ] Verifying : 2:ceph-selinux-12.2.11-0.el7.x86_64 27/27
[node1][DEBUG ]
[node1][DEBUG ] Installed:
[node1][DEBUG ] ceph.x86_64 2:12.2.11-0.el7 ceph-radosgw.x86_64 2:12.2.11-0.el7
[node1][DEBUG ]
[node1][DEBUG ] Dependency Installed:
[node1][DEBUG ] ceph-base.x86_64 2:12.2.11-0.el7
[node1][DEBUG ] ceph-mds.x86_64 2:12.2.11-0.el7
[node1][DEBUG ] ceph-mgr.x86_64 2:12.2.11-0.el7
[node1][DEBUG ] ceph-mon.x86_64 2:12.2.11-0.el7
[node1][DEBUG ] ceph-osd.x86_64 2:12.2.11-0.el7
[node1][DEBUG ] ceph-selinux.x86_64 2:12.2.11-0.el7
[node1][DEBUG ] mailcap.noarch 0:2.1.41-2.el7
[node1][DEBUG ] pyOpenSSL.x86_64 0:0.13.1-4.el7
[node1][DEBUG ] python-babel.noarch 0:0.9.6-8.el7
[node1][DEBUG ] python-beaker.noarch 0:1.5.4-10.el7
[node1][DEBUG ] python-cherrypy.noarch 0:3.2.2-4.el7
[node1][DEBUG ] python-flask.noarch 1:0.10.1-4.el7
[node1][DEBUG ] python-itsdangerous.noarch 0:0.23-2.el7
[node1][DEBUG ] python-jinja2.noarch 0:2.7.2-3.el7_6
[node1][DEBUG ] python-logutils.noarch 0:0.3.3-3.el7
[node1][DEBUG ] python-mako.noarch 0:0.8.1-2.el7
[node1][DEBUG ] python-markupsafe.x86_64 0:0.11-10.el7
[node1][DEBUG ] python-paste.noarch 0:1.7.5.1-9.20111221hg1498.el7
[node1][DEBUG ] python-simplegeneric.noarch 0:0.8-7.el7
[node1][DEBUG ] python-tempita.noarch 0:0.5.1-6.el7
[node1][DEBUG ] python-webob.noarch 0:1.2.3-7.el7
[node1][DEBUG ] python-webtest.noarch 0:1.3.4-6.el7
[node1][DEBUG ] python-werkzeug.noarch 0:0.9.1-2.el7
[node1][DEBUG ] python2-pecan.noarch 0:1.1.2-1.el7
[node1][DEBUG ] python2-singledispatch.noarch 0:3.4.0.3-4.el7
[node1][DEBUG ]
[node1][DEBUG ] Complete!
[node1][INFO ] Running command: sudo ceph --version
[node1][DEBUG ] ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable)
[ceph_deploy.install][DEBUG ] Detecting platform for host node2 ...
[node2][DEBUG ] connection detected need for sudo
...
[node2][DEBUG ] Complete!
[node2][INFO ] Running command: sudo ceph --version
[node2][DEBUG ] ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable)
[ceph_deploy.install][DEBUG ] Detecting platform for host node3 ...
[node3][DEBUG ] connection detected need for sudo
...
[node3][DEBUG ] Complete!
[node3][INFO ] Running command: sudo ceph --version
[node3][DEBUG ] ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable)
yjiang2@admin-node:~$
5. Configure the initial monitor(s) and gather all keys
$ ceph-deploy mon create-initial
After this step, the working directory should contain these keyrings:
{cluster-name}.client.admin.keyring
{cluster-name}.bootstrap-osd.keyring
{cluster-name}.bootstrap-mds.keyring
{cluster-name}.bootstrap-rgw.keyring
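As a quick sanity check (a sketch assuming the default cluster name "ceph" and the ~/ceph-cluster working directory; check_keyrings is a hypothetical helper), you can verify the keyrings landed where expected:

```shell
#!/bin/sh
# Sketch: verify ceph-deploy mon create-initial gathered the expected keyrings
check_keyrings() {
    dir="$1"
    ok=0
    for k in client.admin bootstrap-osd bootstrap-mds bootstrap-rgw; do
        if [ ! -f "$dir/ceph.$k.keyring" ]; then
            echo "missing ceph.$k.keyring"
            ok=1
        fi
    done
    return "$ok"
}

if check_keyrings "$HOME/ceph-cluster"; then
    echo "all keyrings present"
fi
```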
6. Add two OSDs
1) SSH into the Ceph nodes (node2/node3) and create a directory for each OSD daemon, with appropriate permissions. (Alternatively, use whole disks, as shown after each step.)
Note: node2 hosts osd0 and node3 hosts osd1.
$ ssh node2
$ sudo mkdir /var/local/osd0
$ sudo chmod 777 /var/local/osd0/
$ exit
$ ssh node3
$ sudo mkdir /var/local/osd1
$ sudo chmod 777 /var/local/osd1/
$ exit
Disk-based alternative:
Command format: ceph-deploy disk zap {osd-server-name}:{disk-name}; this erases the disk's partition table and all of its contents.
Example: ceph-deploy disk zap node1:sdb
2) Then, from the admin node (admin-node), run ceph-deploy to prepare the OSDs.
$ ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
Disk-based alternative:
Command format: ceph-deploy osd prepare {node-name}:{data-disk}
Example: ceph-deploy osd prepare node1:sdb1:sdc
3) Finally, activate the OSDs.
$ ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
Disk-based alternative:
ceph-deploy osd activate {node-name}:{data-disk-name}
Example: ceph-deploy osd activate node1:sdb1:sdc
7. Copy the configuration file and admin key to /etc/ceph/ on the admin node (admin-node) and the Ceph nodes (node1/2/3)
$ ceph-deploy admin admin-node node1 node2 node3
If you get an [ERROR] exists with different content message:
[ceph_deploy.admin][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite
use the --overwrite-conf option:
$ ceph-deploy --overwrite-conf admin admin-node node1 node2 node3
8. Make sure you have read permission on ceph.client.admin.keyring
$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
9. Create an mgr daemon on node1
The Luminous release added the mgr daemon; create one with:
$ ceph-deploy mgr create node1
Without an mgr, ceph health reports "HEALTH_WARN no active mgr":
$ ceph health
HEALTH_WARN no active mgr
10. Check cluster health and OSD status
Check the Ceph processes on each node with ps -axu | grep ceph:
//Monitor
ceph 4111 0.0 2.3 406772 35944 ? Ssl 21:07 0:00 /usr/bin/ceph-mon -f --cluster ceph --id node1 --setuser ceph --setgroup ceph
ceph 7931 0.3 3.4 640680 53132 ? Ssl 02:28 0:12 /usr/bin/ceph-mgr -f --cluster ceph --id node1 --setuser ceph --setgroup ceph
//OSD0
ceph 14452 0.2 4.6 798004 39660 ? Ssl 21:19 0:01 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
//OSD1
ceph 14555 0.2 2.7 798004 41492 ? Ssl Jun12 0:39 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
Run ceph -s to view the cluster status.
yjiang2@admin-node:~$ ceph health
HEALTH_OK
yjiang2@admin-node:~$ ceph -s
cluster:
id: 027d5b3c-e011-4a92-9449-c8755cd8f500
health: HEALTH_OK
services:
mon: 1 daemons, quorum node1
mgr: node1(active)
osd: 2 osds: 2 up, 2 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0B
usage: 2.00GiB used, 18.0GiB / 20GiB avail
pgs:
Next in this series: Ceph Cluster Series (2): Expanding the Ceph Cluster