Research on Applying a Distributed Object Storage System in OpenStack -- Ceph (Part 2)

 

Cluster Installation

This setup uses EXT4 as the cluster filesystem, and Ceph's cephx authentication is turned off to make testing easier.
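Both choices appear in the ceph.conf further below; the two relevant lines are repeated here with explanatory comments added:

        ;auth supported = cephx             ; left commented out, so authentication stays disabled
        filestore xattr use omap = true     ; useful on ext4, whose extended-attribute size limit is smaller than what Ceph may need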

Install the dependency packages:

yum -y install gcc gcc-c++ make automake libtool expat expat-devel \

boost-devel nss-devel cryptopp cryptopp-devel libatomic_ops-devel \

fuse-devel gperftools-libs gperftools-devel libaio libaio-devel libedit libedit-devel libuuid-devel

Install from source:

Download the source: http://ceph.com/download/ceph-0.48argonaut.tar.gz

Extract the package and install:

tar -zxvf ceph-0.48argonaut.tar.gz

cd ceph-0.48argonaut

./autogen.sh

CXXFLAGS="-g -O2" ./configure --prefix=/usr --sbindir=/sbin --localstatedir=/var --sysconfdir=/etc --without-tcmalloc

make && make install
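If the build and install completed successfully, the ceph binary should now be on the PATH; a quick sanity check is to print its version:

#ceph -v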

RPM installation method:

#wget http://ceph.com/download/ceph-0.48argonaut.tar.bz2

#tar xjvf ceph-0.48argonaut.tar.bz2

#cp ceph-0.48argonaut.tar.bz2 ~/rpmbuild/SOURCES

#rpmbuild -ba ceph-0.48argonaut/ceph.spec

#cd  /root/rpmbuild/RPMS/x86_64/

#rpm  -Uvh 
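The exact package names depend on what rpmbuild produced; as a rough illustration (assuming all of the RPMs in that directory are wanted), they can be installed in one go with a wildcard:

#rpm -Uvh *.rpm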

Edit the configuration file ceph.conf:

[global]

        ; enable secure authentication

        ;auth supported = cephx

 

        ; allow ourselves to open a lot of files

        max open files = 131072

 

        ; set log file

        log file = /var/log/ceph/$name.log

        ; log_to_syslog = true        ; uncomment this line to log to syslog

 

        ; set up pid files

        pid file = /var/run/ceph/$name.pid

 

        ; If you want to run an IPv6 cluster, set this to true. Dual-stack isn't possible

        ;ms bind ipv6 = true

 

[mon]

        mon data = /ceph/mon_data/$name

        debug ms = 1

        debug mon = 20

        debug paxos = 20

        debug auth = 20

 

[mon.0]

        host = node89

        mon addr = 1.1.1.89:6789

 

[mon.1]

        host = node97

        mon addr = 1.1.1.97:6789

 

[mon.2]

        host = node56

        mon addr = 1.1.1.56:6789

 

; mds

;  You need at least one.  Define two to get a standby.

[mds]

        ; where the mds keeps its secret encryption keys

        keyring = /ceph/mds_data/keyring.$name

 

        ; mds logging to debug issues.

        debug ms = 1

        debug mds = 20

 

[mds.0]

        host = node89

 

[mds.1]

        host = node97

 

[mds.2]

        host = node56

 

[osd]

        osd data = /ceph/osd_data/$name

        filestore xattr use omap = true

        osd journal = /ceph/osd_data/$name/journal

        osd journal size = 1000 ; journal size, in megabytes

        journal dio = false

        debug ms = 1

        debug osd = 20

        debug filestore = 20

        debug journal = 20

        filestore fiemap = false

        osd class dir = /usr/lib/rados-classes

        keyring = /etc/ceph/keyring.$name

 

[osd.0]

        host = node89   

        devs = /dev/mapper/vg_control-lv_home

 

[osd.1]

        host = node97

        devs = /dev/mapper/vg_node2-lv_home

 

[osd.2]

        host = node56

        devs = /dev/mapper/vg_node56-lv_home

Installation script:

#!/bin/bash

yum -y install gcc gcc-c++ make automake libtool expat expat-devel \

boost-devel nss-devel cryptopp cryptopp-devel libatomic_ops-devel \

fuse-devel gperftools-libs gperftools-devel libaio libaio-devel libedit libedit-devel libuuid-devel

which ceph > /dev/null 2>&1

if [ $? -eq 1 ]; then

    tar -zxvf ceph-0.48argonaut.tar.gz

    cd ceph-0.48argonaut

    ./autogen.sh

    CXXFLAGS="-g -O2" ./configure --prefix=/usr --sbindir=/sbin --localstatedir=/var --sysconfdir=/etc --without-tcmalloc

    make && make install

    cd ..

    rm -rf ceph-0.48argonaut

fi

echo "#####################Configure#########################"

rm -rf /ceph/*

rm -rf /etc/ceph/*

mkdir -p /ceph/mon_data/{mon.0,mon.1,mon.2}

mkdir -p /ceph/osd_data/{osd.0,osd.1,osd.2}

mkdir -p /ceph/mds_data

touch /etc/ceph/keyring

touch /etc/ceph/ceph.keyring

touch /etc/ceph/keyring.bin

cp ceph.conf /etc/ceph/

echo "#####################Iptables##########################"

grep 6789 /etc/sysconfig/iptables

if [ $? -eq 1 ];then

  iptables -A INPUT -m multiport -p tcp --dports 6789,6800:6810 -j ACCEPT

  service iptables save

  service iptables restart

fi

echo "######################Init service#####################"

#mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring.admin

#service ceph restart

echo "Install Ceph Successful!"

Initialize the cluster on the monitor node (you will be asked for each node's login password several times; this can be avoided by setting up SSH key authentication, as sketched below):

mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring.admin

After running this command the relevant files are generated under the osd and mon data directories; if they are not generated, the initialization failed and starting the ceph service will report errors. Note that with btrfs the osd directories do not need to be mounted manually, but with ext4 the osd partitions must be mounted by hand.
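A minimal sketch of both preparations, assuming node89 is the monitor node you run mkcephfs from and node97/node56 are the other nodes named in the ceph.conf above:

# passwordless SSH from the monitor node to the other nodes
ssh-keygen -t rsa
ssh-copy-id root@node97
ssh-copy-id root@node56

# on each node, mount its OSD partition (the devs entry from ceph.conf) with user_xattr
# before running mkcephfs; shown here for osd.0 on node89
mount -o user_xattr /dev/mapper/vg_control-lv_home /ceph/osd_data/osd.0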

Start the service on all nodes:

service ceph restart

Check the cluster's running status:

#ceph health detail

#ceph -s

#ceph osd tree

#ceph osd dump

Cluster Operations

Here a new logical volume is created in the existing volume group for the experiments below:

#vgs

VG         #PV #LV #SN Attr   VSize   VFree 

  vg_control   1   4   0 wz--n- 931.02g 424.83g

#lvcreate --size 10g --name ceph_test  vg_control

#mkfs.ext4 /dev/mapper/vg_control-ceph_test

#lvs

1. Steps to expand the cluster with a new OSD node:

ceph osd create

Add the following to the ceph.conf configuration file:

[osd.3]

    host = newnode

    devs = /dev/mapper/vg_control-ceph_test

Format and mount the OSD:

#mkfs.ext4 /dev/mapper/vg_control-ceph_test

#mkdir  /ceph/osd_data/osd.3

#mount  -o  user_xattr  /dev/mapper/vg_control-ceph_test  /ceph/osd_data/osd.3

At this point the new OSD has been added, but before it can be used normally you must add it to the CRUSH map (the mapping used to place data onto OSDs):

#ceph osd crush set 3 osd.3 1.0 pool=default host=newnode

# ceph-osd -c /etc/ceph/ceph.conf --monmap /tmp/monmap -i 3 --mkfs
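The --monmap option above expects a monitor map exported beforehand with ceph mon getmap -o /tmp/monmap (the same step shown in section 3 below). Once the OSD's data directory has been initialized, a plausible way to bring osd.3 online and confirm it joined (the exact service invocation may vary with the installed init scripts):

#service ceph start osd.3
#ceph osd tree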

2. Remove the osd.3 node:

#ceph osd crush remove osd.3

#ceph osd tree

dumped osdmap tree epoch 21

# id    weight  type name       up/down reweight

-1      3       pool default

-3      3               rack unknownrack

-2      1                       host node89

0       1                               osd.0   up      1

-4      1                       host node97

1       1                               osd.1   up      1

-5      1                       host node56

2       1                               osd.2   up      1

 

3       0       osd.3   down    0

#ceph osd rm 3

#ceph osd tree

dumped osdmap tree epoch 22

# id    weight  type name       up/down reweight

-1      3       pool default

-3      3               rack unknownrack

-2      1                       host node89

0       1                               osd.0   up      1

-4      1                       host node97

1       1                               osd.1   up      1

-5      1                       host node56

2       1                               osd.2   up      1

#rm -r /ceph/osd_data/osd.3/

   

Edit ceph.conf and remove the entries related to osd.3.

3. Attach a new disk to osd.0

#service ceph stop osd

#umount /ceph/osd_data/osd.0

# mkfs.ext4 /dev/mapper/vg_control-ceph_test

# tune2fs -o journal_data_writeback /dev/mapper/vg_control-ceph_test

# mount -o rw,noexec,nodev,noatime,nodiratime,user_xattr,data=writeback,barrier=0 /dev/mapper/vg_control-ceph_test /ceph/osd_data/osd.0

Add the following line to /etc/fstab:

/dev/mapper/vg_control-ceph_test /ceph/osd_data/osd.0 ext4 rw,noexec,nodev,noatime,nodiratime,user_xattr,data=writeback,barrier=0 0 0

Bring it back under the monitors and reinitialize the OSD:

#mount -a

#ceph mon getmap -o /tmp/monmap

#ceph-osd -c /etc/ceph/ceph.conf --monmap /tmp/monmap -i 0 --mkfs

#service ceph start osd

4. Client mount

On a client machine with the Ceph client installed, mount the Ceph cluster:

#mount -t ceph 1.1.1.89:6789:/ /mnt

#df -h

1.1.1.89:6789:/   100G  3.1G   97G   4% /mnt
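If the client kernel does not provide the ceph module, ceph-fuse (built as part of the source install above) can be used instead; a minimal sketch, assuming the same monitor address:

#ceph-fuse -m 1.1.1.89:6789 /mnt
#umount /mnt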
