I'm running RedHat 6.4.
Installing GlusterFS is a straight yum job:
# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
Server-side install:
# yum install -y glusterfs glusterfs-server glusterfs-fuse
Client-side install (the client only needs glusterfs-fuse to mount; it does not need glusterfs-server):
# yum install -y glusterfs glusterfs-fuse
And that's it. Simple, right?
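To sanity-check the install afterwards, here is a small sketch of mine (not from the original setup; `need_pkgs` is a hypothetical helper): feed it the output of `rpm -qa --qf '%{NAME}\n'` and it prints whichever of the expected packages are missing.

```shell
# Hypothetical helper: reads installed package names on stdin,
# prints the expected packages that are NOT among them.
need_pkgs() {
  local installed missing=""
  installed=$(cat)
  for pkg in "$@"; do
    # -x: match the whole line, so "glusterfs" doesn't match "glusterfs-fuse"
    echo "$installed" | grep -qx "$pkg" || missing="$missing $pkg"
  done
  echo "$missing" | sed 's/^ //'
}

# On a real box:
#   rpm -qa --qf '%{NAME}\n' | need_pkgs glusterfs glusterfs-server glusterfs-fuse
```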
First, a bit about the original setup. We run image servers; everything stored is pictures. Two servers export the same directory: machines s1 and s2 carry identical GlusterFS configs, both sharing /var/test/. Multiple clients mount the share from s1 and s2, e.g. a client mounts it at its local /test. When a client writes into /test, the data goes to both servers, so the two hold identical content and back each other up against a dead disk. On top of that, the data is also backed up daily to a separate backup machine.
Now a new project needs shared storage as well. It will keep using s1 and s2 as the GlusterFS servers, but with a new directory; let's call it /newtest.
Straight to the config files.
Server:
#vim /etc/glusterfs/glusterd.vol
volume brick
  type storage/posix
  option directory /var/test/
end-volume

volume locker
  type features/posix-locks
  subvolumes brick
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option listen-port 24000
  subvolumes locker
  option auth.addr.brick.allow *
  option auth.addr.locker.allow *
end-volume

volume brick1
  type storage/posix
  option directory /var/newtest/
end-volume

volume locker1
  type features/posix-locks
  subvolumes brick1
end-volume

volume server1
  type protocol/server
  option transport-type tcp/server
  option listen-port 24001
  subvolumes locker1
  option auth.addr.brick1.allow *
  option auth.addr.locker1.allow *
end-volume
Start the service:
# /etc/init.d/glusterd restart
Note: the /var/test and /var/newtest directories must already exist on s1 and s2 before you start. After starting, check that the two exported ports are actually listening. s1 and s2 are configured identically.
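One way to do that port check (my sketch, not from the original post; `listening_ports` is a hypothetical helper that parses `netstat -ntl`-style output):

```shell
# Hypothetical helper: pull the listening TCP port numbers out of
# `netstat -ntl` output fed on stdin (the port is the last ":"-separated
# piece of the Local Address column).
listening_ports() {
  awk '$1 == "tcp" {n = split($4, a, ":"); print a[n]}'
}

# On s1 and s2, both GlusterFS listeners should show up:
#   netstat -ntl | listening_ports | grep -E '^(24000|24001)$'
```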
Client:
# vim /etc/glusterfs/photo.vol
volume client1
  type protocol/client
  option transport-type tcp/client
  option remote-host x.x.x.x   # s1's IP
  option transport.socket.remote-port 24000
  option remote-subvolume locker
end-volume

volume client2
  type protocol/client
  option transport-type tcp/client
  option remote-host x.x.x.x   # s2's IP
  option transport.socket.remote-port 24000
  option remote-subvolume locker
end-volume

volume bricks
  type cluster/replicate
  subvolumes client1 client2
end-volume

### Add IO-Cache feature
volume iocache
  type performance/io-cache
  option page-size 8MB
  option page-count 2
  subvolumes bricks
end-volume

### Add writeback feature
volume writeback
  type performance/write-behind
  option aggregate-size 8MB
  option window-size 8MB
  option flush-behind off
  subvolumes iocache
end-volume
Mount it: glusterfs -f /etc/glusterfs/photo.vol -l /tmp/photo.log /test
Create a file or directory under /test and the same data shows up in /var/test on both s1 and s2.
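That mutual-backup behaviour is easy to spot-check by hand. A sketch of mine (the helper and the paths in the usage comment are made up; it assumes both brick directories are reachable from wherever you run it, e.g. over NFS, or adapt it with ssh to s1/s2):

```shell
# Hypothetical helper: succeed only if the same relative file has
# identical content in both brick directories.
check_replica() {
  local f=$1 b1=$2 b2=$3
  [ "$(md5sum "$b1/$f" | cut -d' ' -f1)" = "$(md5sum "$b2/$f" | cut -d' ' -f1)" ]
}

# e.g.: check_replica some/photo.jpg /mnt/s1/var/test /mnt/s2/var/test && echo in-sync
```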
Now for the new directory.
New client:
# vim /etc/glusterfs/photo1.vol
volume client1
  type protocol/client
  option transport-type tcp/client
  option remote-host x.x.x.x   # s1's IP
  option transport.socket.remote-port 24001
  option remote-subvolume locker1
end-volume

volume client2
  type protocol/client
  option transport-type tcp/client
  option remote-host x.x.x.x   # s2's IP
  option transport.socket.remote-port 24001
  option remote-subvolume locker1
end-volume

volume bricks
  type cluster/replicate
  subvolumes client1 client2
end-volume

### Add IO-Cache feature
volume iocache
  type performance/io-cache
  option page-size 8MB
  option page-count 2
  subvolumes bricks
end-volume

### Add writeback feature
volume writeback
  type performance/write-behind
  option aggregate-size 8MB
  option window-size 8MB
  option flush-behind off
  subvolumes iocache
end-volume
Mount it: glusterfs -f /etc/glusterfs/photo1.vol -l /tmp/photo1.log /newtest
Create a file or directory under /newtest and the same data shows up in /var/newtest on both s1 and s2.
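If you want both mounts to come back after a reboot, one option is /etc/fstab; volfile-based GlusterFS installs have historically accepted the volfile path as the device field. This is my assumption, not something from the original setup, so verify it against your glusterfs version (the `_netdev` option just tells the init scripts to wait for networking):

```
/etc/glusterfs/photo.vol   /test      glusterfs  defaults,_netdev  0 0
/etc/glusterfs/photo1.vol  /newtest   glusterfs  defaults,_netdev  0 0
```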