Installing Ceph on a single CentOS 7.7 node

Preparing the server / VM
1. Configure networking so the machine can reach public package mirrors.
2. Attach extra disks to act as data disks; here I added two 10 GB disks, /dev/vdb and /dev/vdc.
System information

[root@localhost ceph-cluster]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)

Preparing the repositories
Add the CentOS base repo and the EPEL repo; this article uses Aliyun mirrors throughout.

[root@localhost ~] rm -rf /etc/yum.repos.d/*
[root@localhost ~] curl http://mirrors.aliyun.com/repo/Centos-7.repo > /etc/yum.repos.d/Centos-7.repo 
[root@localhost ~] curl http://mirrors.aliyun.com/repo/epel-7.repo > /etc/yum.repos.d/epel.repo

Add the Ceph repo: create a file named ceph.repo under /etc/yum.repos.d/ and paste in the following content.

[Ceph-SRPMS]
name=Ceph SRPMS packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS/
enabled=1
gpgcheck=0
type=rpm-md
 
[Ceph-aarch64]
name=Ceph aarch64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/aarch64/
enabled=1
gpgcheck=0
type=rpm-md
 
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
enabled=1
gpgcheck=0
type=rpm-md
 
[Ceph-x86_64]
name=Ceph x86_64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
enabled=1
gpgcheck=0
type=rpm-md

Rebuild the yum cache

[root@localhost ~] yum clean all
[root@localhost ~] yum makecache
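
If everything is wired up, yum should now see the Ceph repos. An optional sanity check before installing (repo names follow the ceph.repo sections above; mirror output will vary):

[root@localhost ~] yum repolist enabled | grep -i ceph
[root@localhost ~] yum info ceph | grep -E '^(Name|Version)'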

Server configuration
// Disable SELinux

[root@localhost ~] sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@localhost ~] setenforce 0

// Disable the firewall

[root@localhost ~] systemctl stop firewalld 
[root@localhost ~] systemctl disable firewalld
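
Both changes can be verified immediately; getenforce should report Permissive (Disabled after a reboot) and firewalld should be inactive:

[root@localhost ~] getenforce
Permissive
[root@localhost ~] systemctl is-active firewalld
inactive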

// Add the server's IP to /etc/hosts, replacing 192.168.172.171 with your server's actual IP address

[root@localhost ~] echo 192.168.172.171 $HOSTNAME >> /etc/hosts
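
ceph-deploy addresses the node by hostname, so it is worth confirming that the name now resolves to the real IP rather than 127.0.0.1; the address printed should match the one you just wrote:

[root@localhost ~] getent hosts $HOSTNAME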

// Prepare a scratch deployment directory

[root@localhost ~] rm -rf /root/ceph-cluster && mkdir -p /root/ceph-cluster && cd /root/ceph-cluster

Deployment
// Install the deployment packages

[root@localhost ceph-cluster] yum install ceph ceph-radosgw ceph-deploy -y
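
With the Jewel repo configured above, both tools should come from it; a quick check of what was installed (a 10.2.x version indicates Jewel):

[root@localhost ceph-cluster] ceph --version
[root@localhost ceph-cluster] ceph-deploy --version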

// Initialize the configuration. All subsequent steps must be run from inside /root/ceph-cluster.

[root@localhost ceph-cluster] ceph-deploy new $HOSTNAME
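
ceph-deploy new writes its working files into the current directory, which is why the later steps must be run from here; afterwards the directory should contain roughly:

[root@localhost ceph-cluster] ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring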

// Update the default configuration for a single node: keep one replica per object (pool size 1), let CRUSH pick OSDs instead of hosts (chooseleaf type 0), allow long object names, and use a small 128 MB journal

[root@localhost ceph-cluster] echo osd pool default size = 1 >> ceph.conf
[root@localhost ceph-cluster] echo osd crush chooseleaf type = 0 >> ceph.conf
[root@localhost ceph-cluster] echo osd max object name len = 256 >> ceph.conf
[root@localhost ceph-cluster] echo osd journal size = 128 >> ceph.conf
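
After these appends, ceph.conf should look roughly like the sketch below; the fsid is generated per cluster and the hostname and monitor address come from your machine, so yours will differ:

[global]
fsid = <generated uuid>
mon_initial_members = localhost
mon_host = 192.168.172.171
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 1
osd crush chooseleaf type = 0
osd max object name len = 256
osd journal size = 128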

// Initialize the monitor node

[root@localhost ceph-cluster] ceph-deploy mon create-initial
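
create-initial forms the monitor quorum and then pulls the admin and bootstrap keyrings back into the deployment directory; a successful run leaves files like these behind:

[root@localhost ceph-cluster] ls *.keyring
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring
ceph.bootstrap-rgw.keyring  ceph.client.admin.keyring
ceph.mon.keyring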

// Prepare the disks. Adjust /dev/vdb /dev/vdc to whatever data disks you actually attached; this walkthrough uses the two added earlier.

[root@localhost ceph-cluster] ceph-deploy osd prepare $HOSTNAME:/dev/vdb $HOSTNAME:/dev/vdc

After the prepare step, each disk is automatically partitioned, formatted, and mounted under /var/lib/ceph/osd/. (In the capture below the data disks appear as /dev/sdb and /dev/sdc rather than /dev/vdb and /dev/vdc; the device prefix depends on the hypervisor's disk driver.)

// View the mount points with df

[root@localhost ceph-cluster]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
devtmpfs                devtmpfs  470M     0  470M   0% /dev
tmpfs                   tmpfs     487M     0  487M   0% /dev/shm
tmpfs                   tmpfs     487M  8.4M  478M   2% /run
tmpfs                   tmpfs     487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos-root xfs        17G  4.4G   13G  26% /
/dev/sda1               xfs      1014M  171M  844M  17% /boot
tmpfs                   tmpfs      98M     0   98M   0% /run/user/0
tmpfs                   tmpfs      98M   12K   98M   1% /run/user/42
/dev/sdb1               xfs       9.9G  108M  9.8G   2% /var/lib/ceph/osd/ceph-0
/dev/sdc1               xfs       9.9G  108M  9.8G   2% /var/lib/ceph/osd/ceph-1

// Activate the OSDs

[root@localhost ceph-cluster]# ceph-deploy osd activate $HOSTNAME:/var/lib/ceph/osd/ceph-0 $HOSTNAME:/var/lib/ceph/osd/ceph-1
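
Activation marks the OSDs up and in; ceph osd tree is a quick way to confirm both OSDs joined (the weights will be small, reflecting the 10 GB disks):

[root@localhost ceph-cluster]# ceph osd tree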

Check the cluster status

[root@localhost ceph-cluster]# ceph -s
    cluster 07f9b419-f68a-45c9-85a0-2a59447ce7ab
     health HEALTH_OK
     monmap e1: 1 mons at {localhost=[::1]:6789/0}
            election epoch 3, quorum 0 localhost
     osdmap e10: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v18: 64 pgs, 1 pools, 0 bytes data, 0 objects
            214 MB used, 19987 MB / 20201 MB avail
                  64 active+clean
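
With HEALTH_OK reported, a minimal smoke test is to push one object through RADOS and list the pool back; the pool name test and the object name hello are arbitrary choices here, not required names:

[root@localhost ceph-cluster]# ceph osd pool create test 64
[root@localhost ceph-cluster]# echo hello > /tmp/hello.txt
[root@localhost ceph-cluster]# rados -p test put hello /tmp/hello.txt
[root@localhost ceph-cluster]# rados -p test ls
hello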