MFS: A Highly Available Distributed Storage File System

OS: Red Hat

Machines: 192.168.1.248 (Master)
          192.168.1.249 (Backup)
          192.168.1.250 (Chunkserver 1)
          192.168.1.238 (Chunkserver 2)
          192.168.1.251 (Client)

Installing MFS on the Master:

Before configuring anything, make sure SELinux is disabled and iptables is stopped on all five machines.
# useradd mfs
# tar zvxf mfs-1.6.17.tar.gz
# cd mfs-1.6.17
# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
# make
# make install
# cd /usr/local/mfs/etc
# mv mfsexports.cfg.dist mfsexports.cfg
# mv mfsmaster.cfg.dist mfsmaster.cfg
# mv mfsmetalogger.cfg.dist mfsmetalogger.cfg
# cd /usr/local/mfs/var/mfs/
# mv metadata.mfs.empty metadata.mfs
# echo "192.168.1.248    mfsmaster" >> /etc/hosts
 
mfsmaster.cfg contains the settings for the master (control) server.
mfsexports.cfg specifies which client hosts may remotely mount the MooseFS file system and what access rights they are granted. By default, every host may mount /.
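For example, to restrict mounting to the 192.168.1.0/24 subnet instead of allowing every host, entries along the following lines can be used (an illustrative sketch; adapt the option set to your needs):
# vi /usr/local/mfs/etc/mfsexports.cfg
#allow this subnet to mount the meta filesystem (needed for trash recovery)
192.168.1.0/24    .    rw
#allow this subnet to mount the whole tree read-write, mapping root to root
192.168.1.0/24    /    rw,alldirs,maproot=0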
 
Try starting the master service (it will run as the user given to configure at build time, i.e. mfs):
# /usr/local/mfs/sbin/mfsmaster start
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... ok
sessions file has been loaded
exports file has been loaded
loading metadata ...
create new empty filesystem
metadata file has been loaded
no charts data file - initializing empty charts
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly
 
To monitor the current state of MooseFS, we can start the CGI monitoring server, which lets us watch the whole MooseFS system from a browser:
# /usr/local/mfs/sbin/mfscgiserv
starting simple cgi server (host: any , port: 9425 , rootpath: /usr/local/mfs/share/mfscgi)
Now enter http://192.168.1.248:9425 in the browser's address bar to see the master's status (at this point no chunk server data is visible yet).
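You can also verify from a shell that the CGI server is answering (a quick optional check, not part of the original setup):
# curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.248:9425
A response code of 200 means the monitoring page is being served.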
 
Backup server configuration (its role is to take over if the Master fails):
# useradd mfs
# tar zvxf mfs-1.6.17.tar.gz
# cd mfs-1.6.17
# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
# make
# make install
# cd /usr/local/mfs/etc
# cp mfsmetalogger.cfg.dist mfsmetalogger.cfg
# cp mfsexports.cfg.dist mfsexports.cfg
# cp mfsmaster.cfg.dist mfsmaster.cfg
# echo "192.168.1.248    mfsmaster" >> /etc/hosts
 
# /usr/local/mfs/sbin/mfsmetalogger start
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmetalogger modules ...
mfsmetalogger daemon initialized properly
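The metalogger locates the master through the MASTER_HOST setting in mfsmetalogger.cfg; since we added mfsmaster to /etc/hosts, the default value already resolves correctly. The relevant lines look roughly like this (values shown are the defaults; uncomment and adjust only if needed):
# vi /usr/local/mfs/etc/mfsmetalogger.cfg
# MASTER_HOST = mfsmaster
# MASTER_PORT = 9419
# META_DOWNLOAD_FREQ = 24
META_DOWNLOAD_FREQ is the number of hours between full metadata downloads from the master.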
 
Chunkserver configuration (chunkservers store the data chunks; the setup is identical on every chunkserver):
# useradd mfs
# tar zvxf mfs-1.6.17.tar.gz
# cd mfs-1.6.17
# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster
# make
# make install
# cd /usr/local/mfs/etc
# cp mfschunkserver.cfg.dist mfschunkserver.cfg
# cp mfshdd.cfg.dist mfshdd.cfg
# echo "192.168.1.248    mfsmaster" >> /etc/hosts
 
 
It is advisable to dedicate separate partitions on each chunk server to MooseFS, which makes it easier to manage the remaining space; here the shared points are /mfs1 and /mfs2.
The mfshdd.cfg configuration file lists the locations of the shared space that backs the root of the MooseFS distributed file system:
# vi /usr/local/mfs/etc/mfshdd.cfg
#add the following 2 lines
/mfs1
/mfs2
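If you dedicated separate partitions as recommended above, mount them at the shared points before starting the service; a sketch, assuming the spare partitions are /dev/sdb1 and /dev/sdc1 (the device names are placeholders):
# mkdir -p /mfs1 /mfs2
# mount /dev/sdb1 /mfs1
# mount /dev/sdc1 /mfs2
# echo "/dev/sdb1 /mfs1 ext3 defaults 0 0" >> /etc/fstab
# echo "/dev/sdc1 /mfs2 ext3 defaults 0 0" >> /etc/fstab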
 
# chown -R mfs:mfs /mfs*
 
Start the chunk server:
# /usr/local/mfs/sbin/mfschunkserver start
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfschunkserver modules ...
hdd space manager: scanning folder /mfs2/ ...
hdd space manager: scanning folder /mfs1/ ...
hdd space manager: /mfs1/: 0 chunks found
hdd space manager: /mfs2/: 0 chunks found
hdd space manager: scanning complete
main server module: listen on *:9422
no charts data file - initializing empty charts
mfschunkserver daemon initialized properly
 
Now http://192.168.1.248:9425 in a browser shows the complete information for this MooseFS system, including the master and the chunkservers.
 
 
Client configuration (clients mount the MFS shared directory):
Prerequisites:
Every client needs FUSE installed. For kernel 2.6.18-128.el5, install fuse-2.7.6.tar.gz; for kernel 2.6.18-194.11.3.el5, install fuse-2.8.4 instead (otherwise the build fails with errors).
Append this line to the end of /etc/profile: export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
Then run source /etc/profile to make it take effect.
# tar xzvf fuse-2.7.6.tar.gz
# cd fuse-2.7.6
# ./configure --enable-kernel-module
# make && make install
If the installation succeeded, the kernel module /lib/modules/2.6.18-128.el5/kernel/fs/fuse/fuse.ko will exist.
Then load it:
# modprobe fuse
Check that it loaded:
# lsmod | grep fuse
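To have the fuse module load again automatically after a reboot, one common approach on RHEL5 is:
# echo "modprobe fuse" >> /etc/rc.local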
 
# useradd mfs
# tar zvxf mfs-1.6.17.tar.gz
# cd mfs-1.6.17
# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfschunkserver
# make
# make install
# echo "192.168.1.248    mfsmaster" >> /etc/hosts
 
Mounting:
# mkdir -p /data/mfs
# /usr/local/mfs/bin/mfsmount /data/mfs -H mfsmaster
mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root
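As an optional sanity check, df should now show the MFS mount with the combined capacity of the chunkservers:
# df -h /data/mfs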
 
Create test directories with different replica counts; this only needs to be done on one client.
# cd /data/mfs/
Directory for replica count 1:
# mkdir folder1
Replica count 2:
# mkdir folder2
Replica count 3:
# mkdir folder3
 
Use the mfssetgoal command to set the number of copies (goal) for the files in a directory:
# /usr/local/mfs/bin/mfssetgoal -r 1 /data/mfs/folder1
/data/mfs/folder1:
 inodes with goal changed:                         0
 inodes with goal not changed:                     1
 inodes with permission denied:                    0
# /usr/local/mfs/bin/mfssetgoal -r 2 /data/mfs/folder2
/data/mfs/folder2:
 inodes with goal changed:                         1
 inodes with goal not changed:                     0
 inodes with permission denied:                    0
# /usr/local/mfs/bin/mfssetgoal -r 3 /data/mfs/folder3
/data/mfs/folder3:
 inodes with goal changed:                         1
 inodes with goal not changed:                     0
 inodes with permission denied:                    0
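The goal now in effect can be verified with mfsgetgoal; output along these lines is expected:
# /usr/local/mfs/bin/mfsgetgoal /data/mfs/folder2
/data/mfs/folder2: 2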
 
Copy a file in for testing:
# cp /root/mfs-1.6.17.tar.gz /data/mfs/folder1
# cp /root/mfs-1.6.17.tar.gz /data/mfs/folder2
# cp /root/mfs-1.6.17.tar.gz /data/mfs/folder3
 
The mfscheckfile command reports how many copies a given file is stored with.
The file in folder1 is stored as one copy in one chunk:
# /usr/local/mfs/bin/mfscheckfile /data/mfs/folder1/mfs-1.6.17.tar.gz
/data/mfs/folder1/mfs-1.6.17.tar.gz:
1 copies: 1 chunks
In folder2 it is kept as two copies:
# /usr/local/mfs/bin/mfscheckfile /data/mfs/folder2/mfs-1.6.17.tar.gz
/data/mfs/folder2/mfs-1.6.17.tar.gz:
2 copies: 1 chunks
In folder3 it is kept as three copies:
# /usr/local/mfs/bin/mfscheckfile /data/mfs/folder3/mfs-1.6.17.tar.gz
/data/mfs/folder3/mfs-1.6.17.tar.gz:
3 copies: 1 chunks
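To see not only how many copies exist but also which chunkservers hold them, mfsfileinfo can be used; its output lists every chunk of the file together with the IP:port of each chunkserver holding a copy:
# /usr/local/mfs/bin/mfsfileinfo /data/mfs/folder2/mfs-1.6.17.tar.gz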
 
Stopping MooseFS
To stop the MooseFS cluster safely, the recommended order is:
· unmount the file system on all clients first (in this example: umount /data/mfs)
· stop the chunk server processes: /usr/local/mfs/sbin/mfschunkserver stop
· stop the metalogger process: /usr/local/mfs/sbin/mfsmetalogger stop
· stop the master server process: /usr/local/mfs/sbin/mfsmaster stop
 
 
 
Notes:
1. Back up the /usr/local/mfs/var/mfs/metadata.mfs.back file regularly (see the cron sketch below).
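A minimal sketch of such a backup as an hourly cron job, assuming a local /backup/mfs directory (both the schedule and the destination are illustrative):
# mkdir -p /backup/mfs
# crontab -e
0 * * * * cp /usr/local/mfs/var/mfs/metadata.mfs.back /backup/mfs/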
 
Failure recovery
Copy the backup file to the Backup server (metalogger), then run the following on the Backup server (metalogger). mfsmetarestore -a automatically merges the latest metadata backup with the changelogs the metalogger has collected:
# /usr/local/mfs/sbin/mfsmetarestore -a
loading objects (files,directories,etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
checking filesystem consistency ... ok
loading chunks data ... ok
connecting files and chunks ... ok
applying changes from file: /usr/local/mfs/var/mfs/changelog_ml.1.mfs
meta data version: 23574
version after applying changelog: 23574
applying changes from file: /usr/local/mfs/var/mfs/changelog_ml.0.mfs
meta data version: 23574
version after applying changelog: 23583
store metadata into file: /usr/local/mfs/var/mfs/metadata.mfs
 
Edit the hosts file and change mfsmaster's IP:
192.168.1.249    mfsmaster
 
Start the master:
# /usr/local/mfs/sbin/mfsmaster start
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... ok
sessions file has been loaded
exports file has been loaded
loading metadata ...
loading objects (files,directories,etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
checking filesystem consistency ... ok
loading chunks data ... ok
connecting files and chunks ... ok
all inodes: 2381
directory inodes: 104
file inodes: 2277
chunks: 2185
metadata file has been loaded
no charts data file - initializing empty charts
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly
 
 
Run on all clients:
# umount /data/mfs
 
Stop the mfschunkserver service on all storage nodes:
# /usr/local/mfs/sbin/mfschunkserver stop
 
On all clients and chunkservers, edit the hosts file and change mfsmaster's IP:
192.168.1.249    mfsmaster
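One way to script this change across nodes (a sketch; verify /etc/hosts afterwards):
# sed -i 's/^192.168.1.248/192.168.1.249/' /etc/hosts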
 
Start the service on all storage nodes:
# /usr/local/mfs/sbin/mfschunkserver start
 
Remount on all clients:
# /usr/local/mfs/bin/mfsmount /data/mfs -H mfsmaster
 
MFS aggregates the shared directories of all chunkservers into one pool of storage; the capacity a client sees on the mount point is the sum of the space shared by all chunkservers.
 
As for Master/Backup high availability, DRBD can be used to remove the Master as a single point of failure; I will publish a document on that later.
Mount permissions can be restricted on the Master by listing the IP ranges allowed to mount (mfsexports.cfg, as shown earlier), or controlled with iptables.

Recovering accidentally deleted files:

# mkdir -p /data/reco
# /usr/local/mfs/bin/mfsmount /data/reco -H mfsmaster -p -m

(/data/reco is a directory created for recovering deleted files; -m mounts the MFSMETA helper file system, and if a password prompt appears, enter the password configured for the export on the mfsmaster.) Then go into /data/reco/trash and move the deleted entries into undel/:

# cd /data/reco/trash
# mv 00* ./undel/

Check the mounted directory again: the deleted files have been restored.
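How long a deleted file remains recoverable from the trash is controlled by its trash time (86400 seconds, i.e. 24 hours, by default). It can be inspected and changed with mfsgettrashtime/mfssettrashtime, e.g. to keep deleted files for a week:
# /usr/local/mfs/bin/mfsgettrashtime /data/mfs/folder2
# /usr/local/mfs/bin/mfssettrashtime -r 604800 /data/mfs/folder2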
