Objective:
Use corosync/openais + ldirectord to provide high availability for the Director in an LVS (DR) setup.
Environment:
RedHat 5.8
VIP 172.16.45.2
Real Server:
RS1 172.16.45.5
RS2 172.16.45.6
Director:
node1.yue.com 172.16.45.11
node2.yue.com 172.16.45.12
RPM packages required:
cluster-glue-1.0.6-1.6.el5.i386.rpm
cluster-glue-libs-1.0.6-1.6.el5.i386.rpm
corosync-1.2.7-1.1.el5.i386.rpm
corosynclib-1.2.7-1.1.el5.i386.rpm
heartbeat-3.0.3-2.3.el5.i386.rpm
heartbeat-libs-3.0.3-2.3.el5.i386.rpm
ldirectord-1.0.1-1.el5.i386.rpm
libesmtp-1.0.4-5.el5.i386.rpm
openais-1.1.3-1.6.el5.i386.rpm
openaislib-1.1.3-1.6.el5.i386.rpm
pacemaker-1.1.5-1.1.el5.i386.rpm
cts-1.1.5-1.1.el5.i386.rpm
pacemaker-libs-1.1.5-1.1.el5.i386.rpm
perl-MailTools-2.08-1.el5.rf.noarch.rpm
perl-Pod-Escapes-1.04-1.2.el5.rf.noarch.rpm
perl-Pod-Simple-3.07-1.el5.rf.noarch.rpm
perl-Test-Pod-1.42-1.el5.rf.noarch.rpm
perl-TimeDate-1.16-5.el5.noarch.rpm
resource-agents-1.0.4-1.1.el5.i386.rpm
Also have the installation DVD ready to serve as a yum repository.
I. Configure the Real Servers first
1. Synchronize the time on both Real Servers (hwclock -s sets the system clock from the hardware clock; for true synchronization across machines, an NTP source is preferable)
# hwclock -s
2. Install Apache
# yum -y install httpd
Provide a web page on each Real Server:
- [root@RS1 ~]# echo "<h1>Real Server 1</h1>" > /var/www/html/index.html
- [root@RS2 ~]# echo "<h1>Real Server 2</h1>" > /var/www/html/index.html
- [root@RS1 ~]# vi /etc/httpd/conf/httpd.conf
- set: ServerName RS1.yue.com
- [root@RS2 ~]# vi /etc/httpd/conf/httpd.conf
- set: ServerName RS2.yue.com
# /etc/init.d/httpd start
3. Set the relevant kernel parameters on RS1 (both the kernel parameters and the interface configured here take effect for the current boot only; to make them permanent, write them to the appropriate configuration files)
- [root@RS1 ~]# echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
- [root@RS1 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
- [root@RS1 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
- [root@RS1 ~]# echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
- [root@RS1 ~]# ifconfig lo:0 172.16.45.2 broadcast 172.16.45.255 netmask 255.255.255.255 up    # configure the VIP
- [root@RS1 ~]# ifconfig
- eth0 Link encap:Ethernet HWaddr 00:0C:29:7E:8B:C6
- inet addr:172.16.45.5 Bcast:172.16.255.255 Mask:255.255.0.0
- UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
- RX packets:144986 errors:0 dropped:0 overruns:0 frame:0
- TX packets:39438 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:1000
- RX bytes:29527500 (28.1 MiB) TX bytes:5000577 (4.7 MiB)
- Interrupt:67 Base address:0x2000
- lo Link encap:Local Loopback
- inet addr:127.0.0.1 Mask:255.0.0.0
- UP LOOPBACK RUNNING MTU:16436 Metric:1
- RX packets:140 errors:0 dropped:0 overruns:0 frame:0
- TX packets:140 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:0
- RX bytes:17628 (17.2 KiB) TX bytes:17628 (17.2 KiB)
- lo:0 Link encap:Local Loopback
- inet addr:172.16.45.2 Mask:255.255.255.255
- UP LOOPBACK RUNNING MTU:16436 Metric:1
- [root@RS1 ~]# elinks -dump http://172.16.45.2    # verify it works
- Real Server 1
- [root@RS1 ~]# elinks -dump http://172.16.45.5
- Real Server 1
Enable the service at boot:
- [root@RS1 ~]# chkconfig --add httpd
- [root@RS1 ~]# chkconfig httpd on
- [root@RS1 ~]# chkconfig --list httpd
- httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
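As noted in step 3, the ARP sysctls and the lo:0 VIP above do not survive a reboot. A minimal sketch of making them persistent, using the usual RHEL 5 file locations; TARGET defaults to a local scratch directory here so the sketch is harmless to try, and on a real RS you would set TARGET=/etc:

```shell
# Write the ARP sysctls and the lo:0 VIP definition to persistent config
# files. On a Real Server, run with TARGET=/etc; the default writes to a
# local scratch directory so the sketch can be tried safely anywhere.
TARGET="${TARGET:-./etc}"
mkdir -p "$TARGET/sysconfig/network-scripts"

cat >> "$TARGET/sysctl.conf" <<'EOF'
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
EOF

cat > "$TARGET/sysconfig/network-scripts/ifcfg-lo:0" <<'EOF'
DEVICE=lo:0
IPADDR=172.16.45.2
NETMASK=255.255.255.255
BROADCAST=172.16.45.255
ONBOOT=yes
EOF
```

With TARGET=/etc, `sysctl -p` loads the ARP settings immediately and `ifup lo:0` brings up the VIP.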
4. Apply the same settings on RS2
- [root@RS2 ~]# elinks -dump http://172.16.45.2    # verify it works
- Real Server 2
- [root@RS2 ~]# elinks -dump http://172.16.45.6
- Real Server 2
II. Configure the Directors
1. Mutual SSH trust
# ssh-keygen -t rsa
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
(Repeat the two commands on node2, copying the key to node1, so the trust is mutual.)
2. Host names (each node's hostname, as reported by uname -n, should match the names used here)
# vi /etc/hosts
172.16.45.11 node1.yue.com node1
172.16.45.12 node2.yue.com node2
3. Time synchronization
# hwclock -s
4. Install the RPM packages listed above
# yum -y --nogpgcheck localinstall *.rpm
5. Copy the RPM packages to node2 and install them there
- [root@node1 tmp]# scp *.rpm node2:/tmp
- [root@node1 tmp]# ssh node2 'yum -y --nogpgcheck localinstall /tmp/*.rpm'
6. Disable automatic startup of the heartbeat service
- [root@node1 ~]# chkconfig --list heartbeat
- heartbeat 0:off 1:off 2:on 3:on 4:on 5:on 6:off
- [root@node1 ~]# chkconfig heartbeat off
- [root@node1 ~]# ssh node2 'chkconfig heartbeat off'
7. Provide the corosync configuration file
- [root@node1 ~]# cd /etc/corosync/
- [root@node1 corosync]# cp corosync.conf.example corosync.conf
- [root@node1 corosync]# vi /etc/corosync/corosync.conf
- compatibility: whitetank # compatibility with earlier (whitetank) releases
- totem { # how heartbeat messages are passed among the corosync nodes
- version: 2
- secauth: off # secure authentication
- threads: 0 # number of threads to start
- interface { # the network interface that carries the heartbeat traffic; with multiple interfaces, each must have a distinct ringnumber
- ringnumber: 0
- bindnetaddr: 172.16.45.0 # the network address to bind to
- mcastaddr: 226.94.100.1 # multicast address
- mcastport: 5405
- }
- }
- logging {
- fileline: off
- to_stderr: no # send log messages to standard error
- to_logfile: yes
- # to_syslog: yes
- logfile: /var/log/corosync.log
- debug: off
- timestamp: on # record timestamps
- logger_subsys {
- subsys: AMF # enabling AMF requires the openais and openaislib packages
- debug: off
- }
- }
- amf {
- mode: disabled
- }
- # then append the following:
service {
ver: 0
name: pacemaker
use_mgmtd: yes
}
aisexec {
user: root
group: root
}
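The two sections above can also be appended non-interactively instead of editing with vi; a minimal sketch, where CONF defaults to a local scratch file so it is safe to try (on the nodes you would set CONF=/etc/corosync/corosync.conf):

```shell
# Append the pacemaker service declaration and the aisexec runtime identity
# to corosync.conf. CONF defaults to a scratch file in the current directory;
# on a real node use CONF=/etc/corosync/corosync.conf.
CONF="${CONF:-./corosync.conf}"
cat >> "$CONF" <<'EOF'
service {
        ver: 0
        name: pacemaker
        use_mgmtd: yes
}
aisexec {
        user: root
        group: root
}
EOF
```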
8. Node authentication key
- [root@node1 corosync]# corosync-keygen    # generate the node authentication key
- Corosync Cluster Engine Authentication key generator.
- Gathering 1024 bits for key from /dev/random.
- Press keys on your keyboard to generate entropy.
- Writing corosync key to /etc/corosync/authkey.
9. Provide the ldirectord configuration file
- [root@node1 corosync]# cp /usr/share/doc/ldirectord-1.0.1/ldirectord.cf /etc/ha.d/
- [root@node1 corosync]# vi /etc/ha.d/ldirectord.cf
- checktimeout=3
- checkinterval=1
- autoreload=yes
- quiescent=no
- virtual=172.16.45.2:80
- real=172.16.45.5:80 gate
- real=172.16.45.6:80 gate
- fallback=127.0.0.1:80 gate
- service=http
- scheduler=rr
- # persistent=600
- # netmask=255.255.255.255
- protocol=tcp
- checktype=negotiate
- checkport=80
- request="test.html"
- receive="Real Server OK"
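With checktype=negotiate, ldirectord fetches request="test.html" from each Real Server and expects the body to contain receive="Real Server OK"; a server without that page will be marked down. A minimal sketch of providing it, where DOCROOT defaults to a local directory so the sketch is harmless to try (on the servers, set DOCROOT=/var/www/html):

```shell
# Create the health-check page that ldirectord's negotiate check requests.
# Run on RS1 and RS2 (and, for the fallback=127.0.0.1:80 entry, on each
# Director). DOCROOT is assumed to be Apache's document root.
DOCROOT="${DOCROOT:-./docroot}"
mkdir -p "$DOCROOT"
echo "Real Server OK" > "$DOCROOT/test.html"
```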
10. Copy the configuration files to node2
- [root@node1 corosync]# scp -p authkey corosync.conf node2:/etc/corosync/
- authkey 100% 128 0.1KB/s 00:00
- corosync.conf 100% 526 0.5KB/s 00:00
- [root@node1 corosync]# scp /etc/ha.d/ldirectord.cf node2:/etc/ha.d/
- ldirectord.cf 100% 7593 7.4KB/s 00:00
11. Start the corosync service
- [root@node1 ~]# /etc/init.d/corosync start
- Starting Corosync Cluster Engine (corosync): [ OK ]
- [root@node1 corosync]# netstat -unlp
- udp 0 0 172.16.45.11:5405 0.0.0.0:* 4019/corosync
- udp 0 0 226.94.100.1:5405 0.0.0.0:* 4019/corosync
Check that the corosync engine started correctly:
- [root@node1 corosync]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/corosync.log
- Aug 05 17:32:43 corosync [MAIN ] Corosync Cluster Engine ('1.2.7'): started and ready to provide service.
- Aug 05 17:32:43 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
- Aug 05 17:33:48 corosync [MAIN ] Corosync Cluster Engine exiting with status 0 at main.c:170.
- Aug 05 17:34:17 corosync [MAIN ] Corosync Cluster Engine ('1.2.7'): started and ready to provide service.
- Aug 05 17:34:17 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Check that the initial membership notifications were sent out correctly:
- [root@node1 corosync]# grep "TOTEM" /var/log/corosync.log
- Aug 05 17:32:43 corosync [TOTEM ] Initializing transport (UDP/IP).
- Aug 05 17:32:43 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
- Aug 05 17:32:44 corosync [TOTEM ] The network interface [172.16.45.11] is now up.
- Aug 05 17:32:44 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
- Aug 05 17:34:17 corosync [TOTEM ] Initializing transport (UDP/IP).
- Aug 05 17:34:17 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
- Aug 05 17:34:17 corosync [TOTEM ] The network interface [172.16.45.11] is now up.
- Aug 05 17:34:18 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Check that pacemaker started correctly:
- [root@node1 corosync]# grep pcmk_startup /var/log/corosync.log
- Aug 05 17:32:44 corosync [pcmk ] info: pcmk_startup: CRM: Initialized
- Aug 05 17:32:44 corosync [pcmk ] Logging: Initialized pcmk_startup
- Aug 05 17:32:44 corosync [pcmk ] info: pcmk_startup: Maximum core file size is: 4294967295
- Aug 05 17:32:44 corosync [pcmk ] info: pcmk_startup: Service: 9
- Aug 05 17:32:44 corosync [pcmk ] info: pcmk_startup: Local hostname: node1.yue.com
- Aug 05 17:34:18 corosync [pcmk ] info: pcmk_startup: CRM: Initialized
- Aug 05 17:34:18 corosync [pcmk ] Logging: Initialized pcmk_startup
- Aug 05 17:34:18 corosync [pcmk ] info: pcmk_startup: Maximum core file size is: 4294967295
- Aug 05 17:34:18 corosync [pcmk ] info: pcmk_startup: Service: 9
- Aug 05 17:34:18 corosync [pcmk ] info: pcmk_startup: Local hostname: node1.yue.com
Check whether any errors occurred during startup:
- [root@node1 corosync]# grep ERROR: /var/log/corosync.log | grep -v unpack_resources
- Aug 05 17:32:45 corosync [pcmk ] ERROR: pcmk_wait_dispatch: Child process mgmtd exited (pid=4764, rc=100)
- Aug 05 17:34:19 corosync [pcmk ] ERROR: pcmk_wait_dispatch: Child process mgmtd exited (pid=4865, rc=100)
(The mgmtd errors above are commonly seen with use_mgmtd: yes when the mgmt daemon is unavailable, and are generally harmless in this setup.)
If all of the checks above pass, corosync can be started on node2 as follows.
Note: start node2 from node1 with the command below; do not start corosync directly on node2.
- [root@node1 corosync]# ssh node2 '/etc/init.d/corosync start'
- Starting Corosync Cluster Engine (corosync): [ OK ]
Check the startup status of the cluster nodes:
- [root@node1 ~]# crm status
- ============
- Last updated: Sun Aug 5 17:44:02 2012
- Stack: openais
- Current DC: node1.yue.com - partition with quorum
- Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
- 2 Nodes configured, 2 expected votes
- 0 Resources configured.
- ============
- Online: [ node1.yue.com node2.yue.com ]
Configure cluster-wide properties: disable stonith.
corosync enables stonith by default, but this cluster has no stonith device, so the default configuration is not yet usable. This can be verified as follows:
- [root@node1 ~]# crm_verify -L
- crm_verify[4928]: 2012/08/05_17:44:59 ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
- crm_verify[4928]: 2012/08/05_17:44:59 ERROR: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
- crm_verify[4928]: 2012/08/05_17:44:59 ERROR: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
- Errors found during check: config not valid
- -V may provide more details
Stonith can be disabled as follows
(alternatively, run: # crm configure property stonith-enabled=false):
- [root@node1 ~]# crm    # enter crm's interactive mode; at any level, help lists the commands available there
- crm(live)# configure
- crm(live)configure# property stonith-enabled=false
- crm(live)configure# verify    # check the syntax
- crm(live)configure# commit    # commit the change
- crm(live)configure# show
- node node1.yue.com
- node node2.yue.com
- property $id="cib-bootstrap-options" \
- dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
- cluster-infrastructure="openais" \
- expected-quorum-votes="2" \
- stonith-enabled="false"    # stonith is now disabled
- The crm and crm_verify commands used above are command-line cluster management tools shipped with pacemaker 1.0 and later; they can be run on any node in the cluster.
III. Add resources to the cluster
corosync supports resource agents of the heartbeat, LSB, and OCF classes, among others; LSB and OCF are the most commonly used, while the stonith class exists solely for configuring stonith devices.
The resource agent classes supported by the cluster can be listed with:
- # crm ra classes
- heartbeat
- lsb
- ocf / heartbeat pacemaker
- stonith
To list all resource agents in a given class, use commands like:
# crm ra list lsb
# crm ra list ocf heartbeat
# crm ra list ocf pacemaker
# crm ra list stonith
# crm ra info [class:[provider:]]resource_agent
For example:
# crm ra info ocf:heartbeat:IPaddr
A quick look at the resource subcommands (illustrated here with a pre-existing resource group named web):
- [root@node1 ~]# crm
- crm(live)# resource
- crm(live)resource# status    # show resource status
- Resource Group: web
- Web_server (lsb:httpd) Started
- WebIP (ocf::heartbeat:IPaddr) Started
- crm(live)resource# stop web    # stop a resource
- crm(live)resource# status
- Resource Group: web
- Web_server (lsb:httpd) Stopped
- WebIP (ocf::heartbeat:IPaddr) Stopped
- crm(live)resource#
- crm(live)configure# delete web    # delete a group (from the configure level)
1. Next, create an IP address resource for the web cluster to use when serving web requests:
Syntax for defining a resource:
- primitive <rsc> [<class>:[<provider>:]]<type>
- [params attr_list]
- [operations id_spec]
- [op op_type [<attribute>=<value>...] ...]
- op_type :: start | stop | monitor
- Example:
- primitive apcfence stonith:apcsmart \
- params ttydev=/dev/ttyS0 hostlist="node1 node2" \
- op start timeout=60s \
- op monitor interval=30m timeout=60s
- Some parameters used when defining an IP resource:
- Parameters (* denotes required, [] the default):
- ip* (string): IPv4 address
- The IPv4 address to be configured in dotted quad notation, for example "192.168.1.1".
- nic (string, [eth0]): Network interface
- The base network interface on which the IP address will be brought
- cidr_netmask (string): Netmask
- The netmask for the interface in CIDR format (e.g. 24), or in dotted-quad notation (e.g. 255.255.255.0).
- broadcast (string): Broadcast address
- lvs_support (boolean, [false]): Enable support for LVS DR
- Operations' defaults (advisory minimum):
- start timeout=20s
- stop timeout=20s
- monitor interval=5s timeout=20s
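cidr_netmask takes a prefix length; a dotted-quad mask converts to one by counting its set bits. A generic shell sketch (not part of the original setup):

```shell
# Convert a dotted-quad netmask to a CIDR prefix length by counting set bits.
mask=255.255.255.0
oldIFS=$IFS; IFS=.; set -- $mask; IFS=$oldIFS
bits=0
for octet in "$@"; do
  while [ "$octet" -gt 0 ]; do
    bits=$((bits + octet % 2))
    octet=$((octet / 2))
  done
done
echo "$bits"    # 255.255.255.0 -> 24
```

The VIP defined below uses the host mask 255.255.255.255, i.e. cidr_netmask=32.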
Define the IP resource:
# crm
- crm(live)# configure
- crm(live)configure# primitive Web_IP ocf:heartbeat:IPaddr2 params ip=172.16.45.2 nic=eth0 cidr_netmask=32 broadcast=172.16.45.255 lvs_support=true
- crm(live)configure# show
- node node1.yue.com
- node node2.yue.com
- primitive Web_IP ocf:heartbeat:IPaddr2 \
- params ip="172.16.45.2" nic="eth0" cidr_netmask="32" broadcast="172.16.45.255" lvs_support="true"
- primitive Web_ldirectord ocf:heartbeat:ldirectord
- property $id="cib-bootstrap-options" \
- dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
- cluster-infrastructure="openais" \
- expected-quorum-votes="2" \
- stonith-enabled="false"
- crm(live)configure# verify
- crm(live)configure# commit    # commit the change
- crm(live)configure# cd
- crm(live)# status    # check the cluster status
- ============
- Last updated: Sun Aug 5 19:45:08 2012
- Stack: openais
- Current DC: node1.yue.com - partition with quorum
- Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
- 2 Nodes configured, 2 expected votes
- 1 Resources configured.
- ============
- Online: [ node1.yue.com node2.yue.com ]
- Web_IP (ocf::heartbeat:IPaddr2): Started node2.yue.com
- crm(live)# bye
Some parameters used when defining the ldirectord resource:
- Parameters (* denotes required, [] the default):
- configfile (string, [/etc/ha.d/ldirectord.cf]): configuration file path
- The full pathname of the ldirectord configuration file.
- ldirectord (string, [/usr/sbin/ldirectord]): ldirectord binary path
- The full pathname of the ldirectord.
- Operations' defaults (advisory minimum):
- start timeout=15
- stop timeout=15
- monitor interval=20 timeout=10
Define the ldirectord resource:
- [root@node1 ~]# crm
- crm(live)# configure
- crm(live)configure# primitive Web_ldirectord ocf:heartbeat:ldirectord
- crm(live)configure# show
- node node1.yue.com
- node node2.yue.com
- primitive Web_ldirectord ocf:heartbeat:ldirectord
- property $id="cib-bootstrap-options" \
- dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
- cluster-infrastructure="openais" \
- expected-quorum-votes="2" \
- stonith-enabled="false"
- crm(live)configure# commit    # commit the change
- crm(live)configure# cd
- crm(live)# status
- ============
- Last updated: Sun Aug 5 19:44:05 2012
- Stack: openais
- Current DC: node1.yue.com - partition with quorum
- Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
- 2 Nodes configured, 2 expected votes
- 1 Resources configured.
- ============
- Online: [ node1.yue.com node2.yue.com ]
- Web_IP (ocf::heartbeat:IPaddr2): Started node2.yue.com
- Web_ldirectord (ocf::heartbeat:ldirectord): Started node1.yue.com
Check the status of Web_IP:
- [root@node2 tmp]# ip addr show    # note: this is run on node2
- 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
- link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
- inet 127.0.0.1/8 scope host lo
- 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
- link/ether 00:0c:29:d9:75:df brd ff:ff:ff:ff:ff:ff
- inet 172.16.45.12/16 brd 172.16.255.255 scope global eth0
- inet 172.16.45.2/32 brd 172.16.45.255 scope global eth0
Define a colocation constraint (resources that must run together)
Usage:
colocation <id> <score>: <rsc>[:<role>] <rsc>[:<role>] ...
(<id> is the constraint name, <score> the score, followed by resource 1, resource 2, ...)
Example:
colocation dummy_and_apache -inf: apache dummy
colocation c1 inf: A ( B C )
colocation webip_with_webserver inf: WebIP Web_server
Define an order constraint (resources listed first are started first and stopped last; see show xml for the details)
Usage:
order <id> score-type: <rsc>[:<action>] <rsc>[:<action>] ...
[symmetrical=<bool>]
score-type :: advisory | mandatory | <score>
(advisory = a recommendation; mandatory = must be honored)
Example:
order c_apache_1 mandatory: apache:start ip_1 --> start apache first, then ip_1
order o1 inf: A ( B C ) --> start B and C first, then A
order webserver_after_webip mandatory: Web_server:start WebIP
Define a location constraint (which node a resource prefers)
Usage:
location <id> <rsc> {node_pref|rules}
(<id> is the constraint name, <rsc> the resource, followed by a score and a node name)
node_pref :: <score>: <node>
rules ::
rule [id_spec] [$role=<role>] <score>: <expression>
[rule [id_spec] [$role=<role>] <score>: <expression> ...]
Examples:
location conn_1 internal_www 100: node1
location webserver_on_node1 Web_server inf: node1.yue.com
location conn_1 internal_www \
rule 50: #uname eq node1 \
rule pingd: defined pingd
location conn_2 dummy_float \
rule -inf: not_defined pingd or pingd number:lte 0
Apply the colocation constraint:
- [root@node1 ~]# crm
- crm(live)# configure
- crm(live)configure# colocation Web_IP_with_Web_ld inf: Web_IP Web_ldirectord
- crm(live)configure# verify
- crm(live)configure# show
- node node1.yue.com
- node node2.yue.com
- primitive Web_IP ocf:heartbeat:IPaddr2 \
- params ip="172.16.45.2" nic="eth0" cidr_netmask="32" broadcast="172.16.45.255" lvs_support="true"
- primitive Web_ldirectord ocf:heartbeat:ldirectord
- colocation Web_IP_with_Web_ld inf: Web_IP Web_ldirectord
- property $id="cib-bootstrap-options" \
- dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
- cluster-infrastructure="openais" \
- expected-quorum-votes="2" \
- stonith-enabled="false"
- crm(live)configure# commit
- crm(live)configure# cd
- crm(live)# status
- ============
- Last updated: Sun Aug 5 19:50:51 2012
- Stack: openais
- Current DC: node1.yue.com - partition with quorum
- Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
- 2 Nodes configured, 2 expected votes
- 2 Resources configured.
- ============
- Online: [ node1.yue.com node2.yue.com ]
- Web_ldirectord (ocf::heartbeat:ldirectord): Started node1.yue.com
- Web_IP (ocf::heartbeat:IPaddr2): Started node1.yue.com
- crm(live)# exit
- [root@node1 ~]# ipvsadm -Ln
- IP Virtual Server version 1.2.1 (size=4096)
- Prot LocalAddress:Port Scheduler Flags
- -> RemoteAddress:Port Forward Weight ActiveConn InActConn
- TCP 172.16.45.2:80 rr
- -> 172.16.45.6:80 Route 1 0 1
- -> 172.16.45.5:80 Route 1 0 0
Apply the order constraint:
- [root@node1 ~]# crm
- crm(live)# configure
- crm(live)configure# order ld_after_ip mandatory: Web_IP Web_ldirectord
- crm(live)configure# verify
- crm(live)configure# show
- node node1.yue.com
- node node2.yue.com
- primitive Web_IP ocf:heartbeat:IPaddr2 \
- params ip="172.16.45.2" nic="eth0" cidr_netmask="32" broadcast="172.16.45.255" lvs_support="true"
- primitive Web_ldirectord ocf:heartbeat:ldirectord
- colocation Web_IP_with_Web_ld inf: Web_IP Web_ldirectord
- order ld_after_ip inf: Web_IP Web_ldirectord
- property $id="cib-bootstrap-options" \
- dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
- cluster-infrastructure="openais" \
- expected-quorum-votes="2" \
- stonith-enabled="false"
- crm(live)configure# commit
- crm(live)configure# bye
- [root@node1 ~]# ssh node2 'crm node standby'    # puts the node on which the command runs into standby
- [root@node1 ~]# crm status
- ============
- Last updated: Sun Aug 5 20:08:19 2012
- Stack: openais
- Current DC: node1.yue.com - partition with quorum
- Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
- 2 Nodes configured, 2 expected votes
- 2 Resources configured.
- ============
- Node node1.yue.com: standby
- Online: [ node2.yue.com ]
- Web_ldirectord (ocf::heartbeat:ldirectord): Started node1.yue.com
- Web_IP (ocf::heartbeat:IPaddr2): Started node1.yue.com
- [root@node1 tmp]# ipvsadm -Ln
- IP Virtual Server version 1.2.1 (size=4096)
- Prot LocalAddress:Port Scheduler Flags
- -> RemoteAddress:Port Forward Weight ActiveConn InActConn
- TCP 172.16.45.2:80 rr
- -> 172.16.45.6:80 Route 1 0 2
- -> 172.16.45.5:80 Route 1 0 1