HA cluster with corosync + pacemaker + DRBD + httpd: manual configuration
A shared-storage high-availability solution: DRBD
DRBD: Distributed Replicated Block Device
One of the storage-sharing options for high-availability clusters
Common ways of implementing shared storage
DAS (Direct Attached Storage): a device connected directly to the host's storage controller over a dedicated cable, such as an external RAID array.
Parallel interfaces: IDE, SCSI
How the two interfaces differ:
The IDE access path:
Start with how a file is read. When a user-space process wants to read or write a file, it issues a system call and switches from user mode to kernel mode. The kernel invokes the driver, which speaks the device's protocol to bring the data, block by block, into a kernel-space buffer. Once the data is ready, it is copied into the process's user-space memory, the process is notified that the data is available, and execution switches back to user mode. Throughout this path the CPU takes part in device addressing, control signalling, memory addressing and so on, all of which consumes CPU time.
SCSI (Small Computer System Interface) hardware, by contrast, carries its own processor, which offloads work from the host CPU: when a file is read, the host simply hands the SCSI-protocol request to the SCSI adapter and lets the adapter do the work, saving a large share of CPU time and improving efficiency. This is also why SCSI disks cost more than IDE disks; a SCSI interface typically loads the CPU at only about 10% of what an IDE interface does. In terms of connectivity, one SCSI adapter can attach 7 disks (narrow bus) or 15 disks (wide bus); the narrow bus has eight IDs, one of which belongs to the controller itself, and the last device on the bus must be terminated. "Termination" means setting a jumper or installing a terminator on the final SCSI device, telling the SCSI controller that the bus ends there.
Nowadays a port on a SCSI bus can attach another SCSI adapter rather than just a single disk, forming a small SCSI storage network. Each port on the bus is called a target, and each disk behind a target is addressed as a LUN (logical unit number).
Serial interfaces: SATA, SAS, USB
NAS (Network Attached Storage):
Storage offered as a service over a network, for example NFS or Samba. What is exported here is storage at the file-system level.
SAN (Storage Area Network):
Because SCSI transport depends on specific cabling, the SCSI-encapsulated data can be wrapped a second time in a network transport protocol (as iSCSI does over TCP/IP) and carried over network cable, greatly extending the transmission distance. On the far end a network interface card receives the frames, unwraps the SCSI protocol and writes the data to the storage device, so the back-end storage can take any form and is not limited to SCSI disks. What is exported this way is block-level storage: the front end does not need to understand the transport at all. As far as it is concerned it is attached to an ordinary SCSI disk, which it can partition and format, which is why this approach provides storage at the block level.
How DRBD works:
When a write reaches the buffer/cache layer, the data is split: one copy is written to the local disk, while the other is handed to the drbd device, sent over the network to the remote node, and written to disk by the drbd device there. Within this process:
If the write is considered complete as soon as the data reaches the local TCP/IP stack, this is Protocol A.
If the write is considered complete when the data reaches the remote TCP/IP stack, this is Protocol B.
If the write is considered complete only when the data has been stored on the remote disk, this is Protocol C.
The working models are:
1. primary/secondary: only the primary node may read and write; the secondary can neither be read or written nor even mounted.
2. primary/primary (dual-primary): both nodes may write, which requires a cluster file system on top.
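The protocol choice above is made in the DRBD configuration, either per resource or in the common section. A minimal fragment using this walkthrough's resource name (the rest of the file is elided):

```
resource web {
    protocol C;    # safest of the three: wait until the peer's disk has the data
    ...
}
```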
Installing DRBD
Note: the kernel-module package must match the running kernel release (uname -r) exactly.
Download from ftp://rpmfind.net/linux/atrpms
drbd-8.4.3-33.el6.x86_64.rpm                               the userland tools
drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm           the kernel module
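The kernel match can be checked mechanically before installing. A hedged sketch; the `kmdl_kernel` helper and the filename layout it assumes are derived from the package name above, not from any drbd tooling:

```shell
# Hypothetical helper: extract the kernel release a drbd-kmdl RPM was built
# for, so it can be compared with the running kernel (uname -r).
kmdl_kernel() {
  # layout assumed: drbd-kmdl-<kernel-release>-8.4.3-<rel>.<arch>.rpm
  basename "$1" | sed -n 's/^drbd-kmdl-\(.*\)-8\.4\.3-.*$/\1/p'
}

pkg="drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm"
want=$(kmdl_kernel "$pkg")

if [ "$(uname -r)" = "$want" ]; then
  echo "running kernel matches $pkg, safe to install"
else
  echo "mismatch: running $(uname -r), package built for $want"
fi
```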
[root@localhost ~]# rpm -ih \
    drbd-8.4.3-33.el6.x86_64.rpm \
    drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm
warning: Header V4 DSA/SHA1 Signature, key ID 66534c2b: NOKEY
########################################### [ 50%]
########################################### [100%]
Lab topology:
Install both packages on node1 and node2, edit the hosts file so the host names resolve, and make sure the clocks are synchronized.
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.101.200 node1.centod.com node1
172.16.34.1 node2.centod.com node2
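A quick sanity check that both names actually resolve from the hosts file, since the resource file below refers to the nodes by name; `check_hosts` is a throwaway helper written for this walkthrough, not a standard tool:

```shell
# Report any cluster host name missing from a hosts-format file.
check_hosts() {
  local file=$1 missing=0 name
  for name in node1.centod.com node2.centod.com; do
    grep -qw "$name" "$file" || { echo "missing: $name"; missing=$((missing + 1)); }
  done
  echo "missing_count=$missing"
}

check_hosts /etc/hosts
```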
On both hosts create a partition of the same size, say 2G (the resource file below uses /dev/sdb3):
printf 'n\np\n3\n\n+2G\nw\n' | fdisk /dev/sdb
The configuration file is divided into sections:
1. global { usage-count no; }    global section: whether to report this installation to the DRBD project's usage counter
2. common {
       protocol C;
       handlers { }
       startup { }
       disk { on-io-error detach; }    what to do when the backing disk of a resource reports an I/O error; detach drops the disk from the resource, so replication for it can no longer proceed
       net {
           cram-hmac-alg "sha1";       peer authentication algorithm
           shared-secret "hzm132";     the shared secret
       }
       syncer { rate 1000M; }          resync rate; when drbd first starts it performs a bit-for-bit full-device synchronization
   }    common section: defaults shared by all resources
Anything not listed here falls back to the defaults.
3. Each resource is defined in its own file, in this format:
vim /etc/drbd.d/web.res
resource web {
    on node1.centod.com {
        device    /dev/drbd0;
        disk      /dev/sdb3;
        address   172.16.101.200:7789;
        meta-disk internal;
    }
    on node2.centod.com {
        device    /dev/drbd0;
        disk      /dev/sdb3;
        address   172.16.34.1:7789;
        meta-disk internal;
    }
}
Initialize the metadata on both hosts:
[root@localhost ~]# drbdadm create-md web
[type 'yes' if asked to confirm] yes
Writing meta data...
initializing activity log
NOT initializing bitmap
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
New drbd meta data block successfully created.
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
The metadata is now initialized.
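The initialization (and the service start that follows) has to happen on both nodes. A small dry-run helper, purely for illustration; `run_on_both` is made up for this article, and actually executing the commands would assume passwordless root ssh between the nodes:

```shell
# Print the per-node command; actually execute it over ssh only when DO=1.
run_on_both() {
  local host
  for host in node1.centod.com node2.centod.com; do
    if [ "${DO:-0}" = "1" ]; then
      ssh "root@$host" "$1"
    else
      echo "ssh root@$host $1"
    fi
  done
}

run_on_both 'drbdadm create-md web'
```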
Then start the drbd service on both hosts:
[root@localhost ~]# service drbd start
Starting DRBD resources: [
create res: web
prepare disk: web
adjust disk: web
adjust net: web
]
..........
***************************************************************
DRBD's startup script waits for the peer node(s) to appear.
- In case this node was already a degraded cluster before the
reboot the timeout is 0 seconds. [degr-wfc-timeout]
- If the peer was available before the reboot the timeout will
expire after 0 seconds. [wfc-timeout]
(These values are for resource 'web'; 0 sec -> wait forever)
To abort waiting enter 'yes' [ 14]:
Check the status:
cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00
0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:2103412
Right after startup both nodes default to Secondary/Secondary.
Force-promote one of them to primary:
[root@localhost ~]# drbdadm primary --force web
[root@localhost ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-
ns:217216 nr:0 dw:0 dr:224928 al:0 bm:13 lo:2 pe:3 ua:8 ap:0 ep:1 wo:f oos:188837
[=>..................] sync'ed: 10.4% (1888372/2103412)K    (initial full sync in progress)
finish: 0:00:26 speed: 71,680 (71,680) K/sec
Check the status again once the sync completes:
[root@localhost ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:2103412 nr:0 dw:0 dr:2104084 al:0 bm:129 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
The pair is now running in primary/secondary mode.
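Rather than eyeballing /proc/drbd, the ds: field can be parsed to decide when both replicas are consistent. A hedged sketch; the field layout assumed is the 8.4-series format shown above:

```shell
# Extract the "ds:" (disk state) field from a /proc/drbd status line.
disk_state() {
  printf '%s\n' "$1" | sed -n 's@.*ds:\([A-Za-z/]*\).*@\1@p'
}

line='0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----'
disk_state "$line"    # prints UpToDate/UpToDate
```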
Create a file system on the primary node (only ever on the primary):
[root@localhost ~]# mke2fs -t ext4 /dev/drbd0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
131648 inodes, 525853 blocks
26292 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=541065216
17 block groups
32768 blocks per group, 32768 fragments per group
7744 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 39 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Mount the device on the primary and write some data:
[root@localhost ~]# mount /dev/drbd0 /mnt
[root@localhost ~]# cp /etc/issue /mnt
[root@localhost ~]# ls /mnt
issue lost+found
Now verify that the data has been replicated.
Unmount the device on the primary and demote the node to secondary:
[root@localhost ~]# umount /dev/drbd0
[root@localhost ~]# drbdadm secondary web
[root@localhost ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00
0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
ns:2202732 nr:0 dw:99320 dr:2104805 al:27 bm:129 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Promote node2 to primary, mount the device there, and check that the data is present:
[root@node2 ~]# drbdadm primary web
[root@node2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:0 nr:2202732 dw:2202732 dr:672 al:0 bm:129 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
[root@node2 ~]# mount /dev/drbd0 /mnt
[root@node2 ~]# ls /mnt
issue lost+found
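`ls` only proves that the file name replicated; comparing file content is a stronger check. A throwaway helper for that (the paths are the ones used in this walkthrough; on a machine without the mount it simply reports `differ`):

```shell
# Compare two files byte for byte and report the result.
same_content() {
  if cmp -s "$1" "$2"; then echo same; else echo differ; fi
}

same_content /etc/issue /mnt/issue
```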
The experiment succeeds: the file written on node1 is visible on node2.
httpd + corosync + pacemaker + drbd: a high-availability web cluster on DRBD storage
The environment is CentOS 6.5; we continue with the same two hosts.
First install the HA cluster stack on both hosts:
[root@localhost ~]# yum install corosync pacemaker
Then install the crmsh command-line configuration tool, which depends on pssh and redhat-rpm-config:
yum install crmsh-1.2.6-4.el6.x86_64.rpm \
    pssh-2.3.1-2.el6.x86_64.rpm \
    redhat-rpm-config-9.0.3-42.el6.centos.noarch.rpm
The corosync configuration file (/etc/corosync/corosync.conf):
compatibility: whitetank
totem {
version: 2
secauth: on
threads: 0
interface {
ringnumber: 0
bindnetaddr: 172.16.0.0
mcastaddr: 226.194.231.1
mcastport: 5405
ttl: 1
}
}
logging {
fileline: off
to_stderr: no
to_logfile: yes
to_syslog: no
logfile: /var/log/cluster/corosync.log
debug: off
timestamp: on
logger_subsys {
subsys: AMF
debug: off
}
}
amf {
mode: disabled
}
service {
ver: 0
name: pacemaker
# user_mgmtd: yes
}
aisexec {
user: root
group: root
}
Generate the authentication key:
corosync-keygen
If the entropy pool runs dry at this point, open another terminal and type on the keyboard until the key is generated.
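corosync-keygen reads /dev/random, which blocks until the kernel has gathered enough entropy. The pool can be inspected before running it (the path is Linux-specific; the 1024-bit threshold here is just a rough rule of thumb):

```shell
# Show how much entropy the kernel currently has available.
avail=$(cat /proc/sys/kernel/random/entropy_avail)
echo "entropy available: $avail bits"
if [ "$avail" -lt 1024 ]; then
  echo "pool is low: type on another terminal or generate disk/network activity"
fi
```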
Configure the high-availability resources:
[root@node1 corosync]# crm
crm(live)# status
Last updated: Thu Sep 18 16:12:48 2014
Last change: Thu Sep 18 16:07:11 2014 via crmd on node2.centod.com
Stack: classic openais (with plugin)
Current DC: node2.centod.com - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
0 Resources configured
Online: [ node1.centod.com node2.centod.com ]
Define the VIP resource:
crm(live)# configure
crm(live)configure# primitive vip ocf:heartbeat:IPaddr params ip=172.16.101.220
crm(live)configure# verify
Define the httpd resource:
crm(live)configure# primitive httpd lsb:httpd op monitor interval=20s timeout=20s op start timeout=20s op stop timeout=20s
crm(live)configure# verify
Define the drbd primitive resource:
crm(live)configure# primitive drbd ocf:linbit:drbd params drbd_resource=web op monitor role=Master interval=20s timeout=20s op monitor role=Slave interval=20s timeout=20s op start timeout=240s op stop timeout=100s
crm(live)configure# verify
Define the master/slave clone on top of it, which is what designates the primary and secondary roles:
crm(live)configure# master ms-webdrbd drbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
crm(live)configure# verify
Define the storage (file system) resource:
crm(live)configure# primitive httpd-storage ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/var/www/html fstype=ext4 op monitor interval=20s timeout=40s op start timeout=60s op stop timeout=60s
crm(live)configure# verify
Define the colocation constraint (the storage, httpd and VIP must all run where the DRBD master is):
crm(live)configure# colocation storage-nrbd-master-httpd-vip inf: httpd-storage ms-webdrbd:Master httpd vip
crm(live)configure# verify
Define the ordering constraint (promote DRBD first, then mount the storage, then start the VIP and httpd):
crm(live)configure# order nrbd-storage-vip-httpd inf: ms-webdrbd:promote httpd-storage:start vip httpd
crm(live)configure# verify
If verify reports no errors, commit:
commit
Review the result:
crm configure show
node node1.centod.com
node node2.centod.com
primitive drbd ocf:linbit:drbd \
params drbd_resource="web" \
op monitor role="Master" interval="20s" timeout="30s" \
op monitor role="Slave" interval="40s" timeout="30" \
op start timeout="240s" interval="0" \
op stop timeout="100s" interval="0"
primitive httpd lsb:httpd \
op monitor interval="20s" timeout="20s" \
op start timeout="20s" interval="0" \
op stop timeout="20s" interval="0" \
meta target-role="Started"
primitive httpd-storage ocf:heartbeat:Filesystem \
params device="/dev/drbd0" directory="/var/www/html" fstype="ext4" \
op monitor interval="20s" timeout="40s" \
op start timeout="60s" interval="0" \
op stop timeout="60s" interval="0" \
meta target-role="Started"
primitive vip ocf:heartbeat:IPaddr \
params ip="172.16.101.220" \
meta target-role="Started"
ms ms-webdrbd drbd \
meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Started"
colocation storage-nrbd-master-httpd-vip inf: httpd-storage ms-webdrbd:Master httpd vip
order nrbd-storage-vip-httpd inf: ms-webdrbd:promote httpd-storage:start vip httpd
property $id="cib-bootstrap-options" \
dc-version="1.1.10-14.el6-368c726" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore"
Check the cluster status:
Online: [ node1.centod.com ]
OFFLINE: [ node2.centod.com ]
vip(ocf::heartbeat:IPaddr):Started node1.centod.com
httpd(lsb:httpd):Started node1.centod.com
Master/Slave Set: ms-webdrbd [drbd]
Masters: [ node1.centod.com ]
Stopped: [ node2.centod.com ]
httpd-storage(ocf::heartbeat:Filesystem):Started node1.centod.com