Building a High-Availability Cluster with LVS + Heartbeat

Introduction to Heartbeat

The Heartbeat project is part of the Linux-HA effort and implements a high-availability cluster system. Heartbeat messaging and cluster communication are the two key components of a high-availability cluster; in the Heartbeat project both are implemented by the heartbeat module. The sections below describe the heartbeat module's reliable message communication mechanism and explain how it is implemented.

As Linux is used more and more in mission-critical applications, it has to provide services that used to be offered only by large commercial vendors such as IBM and Sun, and a key feature of those offerings is high-availability clustering.

How heartbeat works

How heartbeat (Linux-HA) works: at its core, heartbeat consists of two parts, heartbeat monitoring and resource takeover. Heartbeat monitoring can run over network links and serial lines, with support for redundant links. The nodes send packets to each other to report their current state; if no packet is received from the peer within the configured time, the peer is considered dead, and the resource-takeover module is started to take over the resources or services that were running on the failed host.
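The "configured time" is set with heartbeat's timing directives in ha.cf. A minimal sketch of those directives is shown below; the values are illustrative assumptions, not the ones used later in this article.

# /etc/ha.d/ha.cf -- heartbeat timing directives (illustrative values)
keepalive 2     # send a heartbeat every 2 seconds
warntime 10     # warn about a late heartbeat after 10 seconds
deadtime 30     # declare the peer dead after 30 seconds of silence
initdead 120    # extra grace period while the cluster is booting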

High-availability clusters

A high-availability cluster is a group of independent computers connected by hardware and software that appears to its users as a single system. If one or more nodes in such a group stops working, the services running on them are switched over to the healthy nodes without interrupting service. From this definition it follows that the cluster must be able to detect when nodes and services fail and when they become available again. This task is usually performed by a body of code referred to as the "heartbeat"; in Linux-HA it is handled by a program called heartbeat.

Message communication model

Heartbeat consists of the following components:

heartbeat – inter-node communication and heartbeat module

CRM - cluster resource management module

CCM - maintains a consistent view of cluster membership

LRM - local resource management module

StonithDaemon - provides node reset (fencing) services

logd - non-blocking logging daemon

apphbd - provides application-level watchdog timers

Recovery Manager - application failure recovery

Infrastructure – plugin interfaces, inter-process communication, and so on

CTS – cluster test system, used for cluster stress testing

This article focuses on Heartbeat's cluster communication mechanism, so the discussion concentrates on the heartbeat module.

The heartbeat module consists of the following processes:

master process (masterprocess)

FIFO child process (fifochild)

read child process (readchild)

write child process (writechild)

In heartbeat, every communication channel has its own write child and read child. If n is the number of communication channels and p the number of processes in the heartbeat module, then:

p = 2*n + 2

For example, a node with a broadcast link and a serial link (n = 2) runs 2*2 + 2 = 6 heartbeat processes.

The master process sends its own data, or data received from its clients, over IPC to the write children, which put it on the network. At the same time the read children read data from the network and pass it over IPC to the master process, which either handles it itself or forwards it to the appropriate client.

When Heartbeat starts, the master process launches the FIFO child, the write children and the read children, and finally starts the client processes.

Reliable message communication

Heartbeat implements serial, broadcast, multicast and unicast communication between cluster nodes through plugins. The protocol to use can be selected in the configuration according to the communication medium; when heartbeat starts it checks whether these media exist and, if so, loads the corresponding communication modules. This makes it easy for developers to add new communication modules, for example an infrared module.
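In ha.cf each communication medium is declared with its own directive, and heartbeat loads the matching plugin at startup. A hedged sketch follows; the device names and addresses are assumptions used only for illustration.

# /etc/ha.d/ha.cf -- declaring communication media (illustrative values)
serial /dev/ttyS0               # serial-link plugin
bcast  eth0                     # UDP broadcast on eth0
mcast  eth0 225.0.0.1 694 1 0   # UDP multicast: interface, group, port, ttl, loop
ucast  eth1 192.168.10.2        # UDP unicast to the peer's address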

For a high-availability cluster, if the communication between the nodes is unreliable, the cluster itself is obviously unreliable too. Heartbeat communicates over UDP and over serial lines, neither of which is reliable by itself, so reliability has to be provided by the layer above. How, then, is reliable message delivery guaranteed?

Heartbeat guarantees reliable communication through redundant communication channels and message retransmission. While monitoring the state of the primary communication link it also monitors the backup link and reports its state to the system administrator, which greatly reduces the chance that the cluster cannot recover because of multiple failures. For example, if an operator accidentally unplugs the backup link and the primary link fails a month or two later, the nodes can no longer communicate at all. By reporting the state of both the primary and the backup link, this situation can be avoided entirely: the failure of the backup link is detected, and can be repaired, before the primary link ever fails.

By implementing several independent communication subsystems, Heartbeat avoids losing communication when any single subsystem fails. The most typical setup combines Ethernet with a serial link. This is considered current best practice, and there are several reasons for using a serial line:

(1) A failure of the IP communication subsystem is unlikely to affect the serial subsystem.

(2) A serial line needs no complex external equipment or power supply.

(3) Serial hardware is simple and has proven very reliable in practice.

(4) A serial line can easily be dedicated to cluster communication.

(5) A directly connected serial cable is rarely disconnected by accident.

Whether the nodes communicate over a serial line or over Ethernet with IP, heartbeat implements a retransmission protocol to guarantee reliable packet delivery. There are two kinds of retransmission protocol: sender-initiated and receiver-initiated.

With a sender-initiated protocol, the receiver normally acknowledges every packet. The sender keeps a timer and, when it expires, retransmits the packets that have not yet been acknowledged. This approach easily overwhelms the sender, because every packet from every machine must be acknowledged, multiplying the number of packets to be sent. This phenomenon is known as sender (or ACK) implosion.

With a receiver-initiated protocol, the receivers are responsible for error detection using sequence numbers. When a receiver detects a missing packet, it asks the sender to retransmit it. With this approach, if a packet fails to reach any of the receivers, the sender is easily flooded with NACKs, because every receiver sends it a retransmission request, driving up the sender's load. This phenomenon is known as NACK implosion.

Heartbeat implements a variant of the receiver-initiated protocol. It uses a timer to throttle retransmissions: within the timer interval the number of retransmission requests a receiver may send is limited, which in turn limits the number of retransmissions the sender performs, strictly bounding NACK implosion.

Implementation of reliable message communication

In general a cluster exchanges two kinds of packets: heartbeat packets, which announce which nodes in the cluster are alive, and control packets, which handle node and resource management. heartbeat treats heartbeat packets as a special case of control packets and sends them over the same channels, which keeps the protocol simple and effective and keeps the corresponding code down to a few hundred lines.

In heartbeat, everything that goes out to the network is handed by the master process to the write children for transmission. The master process calls send_cluster_msg() to send a message to all write children. The code fragments below show how heartbeat sends messages; first, the relevant data structures.

Heartbeat message structure:

struct ha_msg {
    int     nfields;    /* number of fields in the message */
    int     nalloc;     /* number of memory blocks allocated */
    char    **names;    /* names of the message fields */
    size_t  *nlens;     /* lengths of the field names */
    void    **values;   /* values corresponding to the field names */
    size_t  *vlens;     /* lengths of the field values */
    int     *types;     /* message types */
};

Heartbeat history message queue:

struct msg_xmit_hist {
    struct ha_msg    *msgq[MAXMSGHIST];      /* history message queue */
    seqno_t          seqnos[MAXMSGHIST];     /* sequence numbers of the queued messages */
    longclock_t      lastrexmit[MAXMSGHIST]; /* time of the last retransmission */
    int              lastmsg;                /* slot of the most recently queued message */
    seqno_t          hiseq;                  /* highest sequence number */
    seqno_t          lowseq;                 /* lowest sequence number */
    seqno_t          ackseq;                 /* highest acknowledged sequence number */
    struct node_info *lowest_acknode;        /* node with the lowest acknowledgement */
};

The code below is taken from heartbeat/heartbeat.c.

int send_cluster_msg(struct ha_msg *msg)
{
    ...
    pid_t ourpid = getpid();
    ...

    if (ourpid == processes[0]) {
        /* Message from the master process */
        /* Add control fields: source node name, source node UUID,
         * sequence number, generation, timestamp, and so on */
        if ((msg = add_control_msg_fields(msg)) != NULL) {
            /* Reliable multicast packet delivery */
            rc = process_outbound_packet(&msghist, msg);
        }
    } else {
        /* Message from a client process */
        int ffd = -1;
        char *smsg = NULL;

        ...

        /* Send it to the FIFO process */
        if ((smsg = msg2wirefmt_noac(msg, &len)) == NULL) {
            ...
        } else if ((ffd = open(FIFONAME, O_WRONLY | O_APPEND)) < ...
    }

    ...

    /* Convert the message into its wire (string) format */
    smsg = msg2wirefmt(msg, &len);

    ...

    if (cseq != NULL) {
        /* Store it in the history queue, keyed by sequence number,
         * so it can be retransmitted later if necessary */
        add2_xmit_hist(hist, msg, seqno);
    }

    ...

    /* Send it out on all media through the write children */
    send_to_all_media(smsg, len);

    ...

    return HA_OK;
}

The add2_xmit_hist() function places every transmitted message in a history queue whose maximum length is 200 entries. If a receiver requests a retransmission, the sender looks the message up in this queue by its sequence number and retransmits it if it is found. The relevant code follows.

static void add2_xmit_hist(struct msg_xmit_hist *hist, struct ha_msg *msg, seqno_t seq)
{
    int           slot;
    struct ha_msg *slotmsg;

    ...

    /* Find the slot where the message will be stored */
    slot = hist->lastmsg + 1;
    if (slot >= MAXMSGHIST) {
        /* End of the queue reached; wrap around to the head (circular queue) */
        slot = 0;
    }

    hist->hiseq = seq;
    slotmsg = hist->msgq[slot];

    /* Delete any old message occupying that slot */
    if (slotmsg != NULL) {
        hist->lowseq = hist->seqnos[slot];
        hist->msgq[slot] = NULL;
        if (!ha_is_allocated(slotmsg)) {
            ...
        } else {
            ha_msg_del(slotmsg);
        }
    }

    hist->msgq[slot] = msg;
    hist->seqnos[slot] = seq;
    hist->lastrexmit[slot] = 0L;
    hist->lastmsg = slot;

    if (enable_flow_control && live_node_count > 1
    &&  (hist->hiseq - hist->lowseq) > ((MAXMSGHIST * 3) / 4)) {
        /* The queue is longer than the warning threshold; log a message */
        ...
    }
    if (enable_flow_control
    &&  hist->hiseq - hist->ackseq > FLOWCONTROL_LIMIT) {
        /* The queue is longer than the flow-control limit */
        if (live_node_count ... ) {
            ... (hist->hiseq - (FLOWCONTROL_LIMIT - 1));
            all_clients_resume();
        } else {
            /* Client processes are sending too fast; pause all clients */
            all_clients_pause();
            hist_display(hist);
        }
    }
}

When the sender receives a retransmission request from a receiver, the callback HBDoMsg_T_REXMIT() calls process_rexmit() to retransmit the requested messages.

#define MAX_REXMIT_BATCH 50 /* maximum number of packets retransmitted per batch */

static void process_rexmit(struct msg_xmit_hist *hist, struct ha_msg *msg)
{
    const char       *cfseq;
    const char       *clseq;
    seqno_t          fseq = 0;
    seqno_t          lseq = 0;
    seqno_t          thisseq;
    int              firstslot = hist->lastmsg - 1;
    int              rexmit_pkt_count = 0;
    const char       *fromnodename = ha_msg_value(msg, F_ORIG);
    struct node_info *fromnode = NULL;

    ...

    /* Get the first and last sequence numbers of the packets to retransmit */
    if ((cfseq = ha_msg_value(msg, F_FIRSTSEQ)) == NULL
    ||  (clseq = ha_msg_value(msg, F_LASTSEQ)) == NULL
    ||  (fseq = atoi(cfseq)) ... lseq) {
        /* Invalid sequence numbers; log the error */
        ...
    }

    ...

    /* Retransmit the lost packets */
    for (thisseq = fseq; thisseq <= lseq; ++thisseq) {
        if (thisseq <= fromnode->track.ackseq) {
            /* This packet has already been acknowledged; skip it */
            continue;
        }
        if (thisseq < hist->lowseq) {
            /* The sequence number is below the lowest one in the queue:
             * the message is no longer in the history queue.
             * Tell the peer that it will not be retransmitted. */
            nak_rexmit(hist, thisseq, fromnodename, "seqno too low");
            continue;
        }
        if (thisseq > hist->hiseq) {
            /* The sequence number is above the highest one in the queue */
            ...
            continue;
        }

        for (msgslot = firstslot; !foundit && msgslot != (firstslot + 1); --msgslot) {
            char        *smsg;
            longclock_t now = time_longclock();
            longclock_t last_rexmit;
            size_t      len;

            ...

            /* Pick up where the previous retransmission left off */
            last_rexmit = hist->lastrexmit[msgslot];

            if (cmp_longclock(last_rexmit, zero_longclock) != 0
            &&  longclockto_ms(sub_longclock(now, last_rexmit)) < (ACCEPT_REXMIT_REQ_MS)) {
                goto NextReXmit;
            }

            /* Don't send too many packets in one batch, or the serial
             * link may overflow */
            ++rexm...

Note: the introduction above is taken from Baidu Baike.

Lab topology:

image

1. DNS server configuration

1.1 DNS server configuration on real-server-1:

[root@server1 ~]# yum install bind bind-chroot caching-nameserver -y

[root@server1 ~]# cd /var/named/chroot/etc/

[root@server1 etc]# cp -p named.caching-nameserver.conf named.conf

[root@server1 etc]# vim named.conf

15 listen-on port 53 { any; };
27 allow-query { any; };
28 allow-query-cache { any; };
37 match-clients { any; };
38 match-destinations { any; };
[root@server1 etc]# vim named.rfc1912.zones

20 zone "a.com" IN {

21 type master;

22 file "a.com.db";

23 allow-update { none; };

24 };
37 zone "145.168.192.in-addr.arpa" IN {

38 type master;

39 file "192.168.145.db";
40 allow-update { none; };

41 };
[root@server1 etc]# cd ../var/named/

[root@server1 named]# cp -p localhost.zone a.com.db

[root@server1 named]# cp -p named.local 192.168.145.db

[root@server1 named]# vim a.com.db

image

[root@server1 named]# vim 192.168.145.db

image
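The two zone files above are only shown as screenshots. The sketch below illustrates what a.com.db and 192.168.145.db typically contain in this setup; the serial numbers, TTLs and NS layout are assumptions, while the www record has to resolve to the cluster VIP 192.168.145.101 used throughout this article.

; a.com.db -- forward zone (illustrative sketch)
$TTL 86400
@       IN SOA  server1.a.com. root.a.com. (
                2012040201 3600 900 604800 86400 )
        IN NS   server1.a.com.
server1 IN A    192.168.145.200
server2 IN A    192.168.145.201
www     IN A    192.168.145.101   ; the LVS virtual IP

; 192.168.145.db -- reverse zone (illustrative sketch)
$TTL 86400
@       IN SOA  server1.a.com. root.a.com. (
                2012040201 3600 900 604800 86400 )
        IN NS   server1.a.com.
200     IN PTR  server1.a.com.
201     IN PTR  server2.a.com.
101     IN PTR  www.a.com.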

[root@server1 named]# service named restart

[root@server1 named]# rndc reload

1.2 DNS server configuration on real-server-2:

[root@server2 ~]# yum install bind bind-chroot caching-nameserver -y

[root@server2 ~]# cd /var/named/chroot/etc/

[root@server2 etc]# cp -p named.caching-nameserver.conf named.conf

[root@server2 etc]# vim named.conf

15 listen-on port 53 { any; };
27 allow-query { any; };
28 allow-query-cache { any; };
37 match-clients { any; };
38 match-destinations { any; };
[root@server2 etc]# vim named.rfc1912.zones

20 zone "a.com" IN {

21 type master;

22 file "a.com.db";

23 allow-update { none; };

24 };
37 zone "145.168.192.in-addr.arpa" IN {

38 type master;

39 file "192.168.145.db";
40 allow-update { none; };

41 };
[root@server2 etc]# cd ../var/named/

[root@server2 named]# cp -p localhost.zone a.com.db

[root@server2 named]# cp -p named.local 192.168.145.db

[root@server2 named]# vim a.com.db

image

[root@server2 named]# vim 192.168.145.db

image

[root@server2 named]# service named restart

[root@server2 named]# rndc reload

2. Server configuration

Edit the hosts file on node1 and node2:

image
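The hosts file itself is only shown as a screenshot. Below is a hedged sketch of the entries it needs, based on the host names node1.a.com and node2.a.com used later; the IP addresses here are placeholders, since the article only shows them in the screenshot.

# /etc/hosts on node1 and node2 (illustrative; replace the IPs with the real node addresses)
192.168.145.10   node1.a.com   node1
192.168.145.20   node2.a.com   node2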

Set up the local yum repository on node1 and node2:

image

Install the httpd service on node1 and node2:

yum install httpd

Copy the heartbeat packages to the root directory:

image

Install the packages (on both node1 and node2):

yum localinstall -y heartbeat-2.1.4-9.el5.i386.rpm heartbeat-pils-2.1.4-10.el5.i386.rpm heartbeat-stonith-2.1.4-10.el5.i386.rpm libnet-1.1.4-3.el5.i386.rpm perl-MailTools-1.77-1.el5.noarch.rpm --nogpgcheck

vim /etc/ha.d/ha.cf

image
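The ha.cf in the screenshot is not reproduced in the text. Below is a minimal two-node sketch consistent with this article, combining the timing and media directives sketched earlier with the node definitions; the heartbeat interface and timing values are assumptions.

# /etc/ha.d/ha.cf (illustrative two-node sketch)
logfile /var/log/ha-log
keepalive 2
warntime 10
deadtime 30
initdead 120
udpport 694
bcast eth1          # assumed heartbeat link (the 192.168.10.0/24 interface)
auto_failback on
node node1.a.com
node node2.a.com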

vim /etc/ha.d/authkeys

Authentication between the nodes is done with an md5 checksum:

(image )
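The authkeys file in the screenshot uses md5, as stated above. A sketch of the expected format; the shared secret is an assumption, and the file must be readable by root only.

# /etc/ha.d/authkeys (illustrative; replace "somesecret" with your own key)
auth 1
1 md5 somesecret
# then: chmod 600 /etc/ha.d/authkeys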

vim /etc/ha.d/haresources

image
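The haresources line is only shown as a screenshot. For this httpd test it is presumably of the following form, naming node1 as the preferred node, the cluster VIP, and httpd as the managed resource (a sketch, not necessarily the author's exact line).

# /etc/ha.d/haresources (illustrative)
node1.a.com 192.168.145.101 httpd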

cp /etc/init.d/httpd resource.d/

scp ha.cf haresources authkeys node2.a.com:/etc/ha.d

scp /etc/init.d/httpd node2.a.com:/etc/ha.d/resource.d/

chkconfig heartbeat on

Test access:

image

cd /usr/lib/heartbeat/

Run ./hb_standby to simulate a failure; you can see:

image

image

No packets are lost and the connection stays up.

Using LVS together with heartbeat for high availability:

First remove the httpd service (on node1 and node2):

yum remove httpd

Then install ipvsadm:

yum install ipvsadm

vim /etc/ha.d/haresources

image

Run ./hb_takeover to take the address back.

Notice that the output of ipvsadm -ln is now empty:

image

2.1 Node-1 IP address configuration

image
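The IP configuration in the screenshot is not reproduced in the text. For the initial LVS-DR test the VIP has to be up on eth0:0 before the host route in the next step can be added; a hedged sketch, with the netmask taken from the eth0:0 output shown later in the article:

ifconfig eth0:0 192.168.145.101 netmask 255.255.255.0 up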
2.2 Add a route for node-1

[root@node1 ~]# route add -host 192.168.145.101 dev eth0:0

[root@node1 ~]# route -n

Kernel IP routing table

Destination Gateway Genmask Flags Metric Ref Use Iface

192.168.145.101 0.0.0.0 255.255.255.255 UH 0 0 0 eth0

192.168.145.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0

192.168.10.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1

169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1

2.3 Set up the local yum repository:

[root@node1 ~]# vim /etc/yum.repos.d/server.repo

[rhel-server]
name=Red Hat Enterprise Linux server

baseurl=file:///mnt/cdrom/Server/
enabled=1
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release
[rhel-cluster]
name=Red Hat Enterprise Linux cluster

baseurl=file:///mnt/cdrom/Cluster/
enabled=1
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release
[root@node1 ~]#mkdir /mnt/cdrom

[root@node1 ~]# mount /dev/cdrom /mnt/cdrom/

mount: block device /dev/cdrom is write-protected, mounting read-only

[root@node1 ~]# yum list all
2.4 Install and configure the director-1 server:

[root@node1 ~]# yum install -y ipvsadm

[root@node1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

[root@node1 ~]# ipvsadm -A -t 192.168.145.101:80 -s rr

[root@node1 ~]# ipvsadm -a -t 192.168.145.101:80 -r 192.168.145.200 -g

[root@node1 ~]# ipvsadm -a -t 192.168.145.101:80 -r 192.168.145.201 -g

[root@node1 ~]# service ipvsadm save
[root@node1 ~]# service ipvsadm restart
[root@node1 ~]# service ipvsadm stop

Clearing the current IPVS table: [ OK ]

3. Node-2 server configuration

3.1 Node-2 IP address configuration

image
3.2 Add a route for node-2

[root@node2 ~]# route add -host 192.168.145.101 dev eth0:0

[root@node2 ~]# route -n

Kernel IP routing table

Destination Gateway Genmask Flags Metric Ref Use Iface

192.168.145.101 0.0.0.0 255.255.255.255 UH 0 0 0 eth0

192.168.145.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0

192.168.10.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1

169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1

0.0.0.0 192.168.145.101 0.0.0.0 UG 0 0 0 eth0

3.3 Set up the local yum repository:

[root@node2 ~]# vim /etc/yum.repos.d/server.repo

[rhel-server]
name=Red Hat Enterprise Linux server

baseurl=file:///mnt/cdrom/Server/
enabled=1
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release
[rhel-cluster]
name=Red Hat Enterprise Linux cluster

baseurl=file:///mnt/cdrom/Cluster/
enabled=1
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release
[root@node2 ~]# mkdir /mnt/cdrom

[root@node2 ~]# mount /dev/cdrom /mnt/cdrom/

mount: block device /dev/cdrom is write-protected, mounting read-only

[root@node2 ~]#yum list all
3.4 Install and configure the director-2 server:

[root@node2 ~]# yum install -y ipvsadm

[root@node2 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

[root@node2 ~]# ipvsadm -A -t 192.168.145.101:80 -s rr

[root@node2 ~]# ipvsadm -a -t 192.168.145.101:80 -r 192.168.145.200 -g

[root@node2 ~]# ipvsadm -a -t 192.168.145.101:80 -r 192.168.145.201 -g

[root@node2 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr

-> 192.168.145.201:80 Route 1 0 0

-> 192.168.145.200:80 Route 1 0 0

[root@node2 ~]# service ipvsadm save
Saving IPVS table to /etc/sysconfig/ipvsadm: [ OK ]

[root@node2 ~]# service ipvsadm restart
Clearing the current IPVS table: [ OK ]

Applying IPVS configuration: [ OK ]
4. Configure the web server on real-server-1:

4.1 Solve the ARP problem:

[root@server1 ~]# cat /etc/sysconfig/network

NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=server1.a.com
[root@server1 ~]# echo "net.ipv4.conf.all.arp_announce = 2" >> /etc/sysctl.conf

[root@server1 ~]# echo "net.ipv4.conf.lo.arp_announce = 2" >> /etc/sysctl.conf

[root@server1 ~]# echo "net.ipv4.conf.all.arp_ignore = 1" >> /etc/sysctl.conf

[root@server1 ~]# echo "net.ipv4.conf.lo.arp_ignore = 1" >> /etc/sysctl.conf

[root@server1 ~]#sysctl -p
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
4.2 Configure the IP address and route

[root@server1 ~]# route add -host 192.168.145.101 dev lo:0
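The lo:0 address itself is not shown in the article, but the host route just added requires the VIP to be configured on lo:0 first. A standard LVS-DR real-server sketch (the same applies on real-server-2 in section 5.2):

ifconfig lo:0 192.168.145.101 broadcast 192.168.145.101 netmask 255.255.255.255 up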

[root@server1 ~]# route -n

Kernel IP routing table

Destination Gateway Genmask Flags Metric Ref Use Iface

192.168.145.101 0.0.0.0 255.255.255.255 UH 0 0 0 lo

192.168.145.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0

169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0

0.0.0.0 192.168.145.101 0.0.0.0 UG 0 0 0 lo

4.3 Configure the web service on real-server-1:

[root@server1 ~]# rpm -ivh /mnt/cdrom/Server/httpd-2.2.3-31.el5.i386.rpm

[root@server1 ~]# echo "web1 -- real-server-1" > /var/www/html/index.html

[root@server1 ~]# service httpd start

Starting httpd: httpd: apr_sockaddr_info_get() failed for r1.guirong.com

httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName

[ OK ]

4.4 Client configuration

image

4.5 The client accesses real-server-1's web service (bridged NIC):

 

image

5. Configure the web server on real-server-2:

5.1 Solve the ARP problem:

[root@server2 ~]# cat /etc/sysconfig/network

NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=server2.a.com
[root@server2 ~]# echo "net.ipv4.conf.all.arp_announce = 2" >> /etc/sysctl.conf

[root@server2 ~]# echo "net.ipv4.conf.lo.arp_announce = 2" >> /etc/sysctl.conf

[root@server2 ~]# echo "net.ipv4.conf.all.arp_ignore = 1" >> /etc/sysctl.conf

[root@server2 ~]# echo "net.ipv4.conf.lo.arp_ignore = 1" >> /etc/sysctl.conf

[root@server2 ~]# sysctl -p
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
5.2 Configure the IP address and route

[root@server2 ~]# route add -host 192.168.145.101 dev lo:0

[root@server2 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.145.101 0.0.0.0 255.255.255.255 UH 0 0 0 lo

192.168.145.128 0.0.0.0 255.255.255.0 U 0 0 0 eth0

169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0

0.0.0.0 192.168.145.142 0.0.0.0 UG 0 0 0 eth0

5.3 Configure the web service on real-server-2:

[root@server2 ~]# rpm -ivh /mnt/cdrom/Server/httpd-2.2.3-31.el5.i386.rpm

[root@server2 ~]# echo "web2 -- real-server-2" > /var/www/html/index.html

[root@server2 ~]# service httpd start

Starting httpd: httpd: apr_sockaddr_info_get() failed for r2.guirong.vom

httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName

[ OK ]

5.4 The client accesses real-server-2's web service (bridged NIC):

image

6. Testing the LVS-DR model from the client

6.1 Test 1

Stop the ipvsadm service on node-1 and confirm the following:

[root@node1 ~]# service ipvsadm stop
Clearing the current IPVS table: [ OK ]
[root@node1 ~]# service ipvsadm status
ipvsadm is stopped
Start the ipvsadm service on node-2 and confirm the following:

[root@node2 ~]# service ipvsadm restart
Clearing the current IPVS table: [ OK ]
Applying IPVS configuration: [ OK ]
[root@node2 ~]# service ipvsadm status
ipvsadm dead but subsys locked
The client accesses the cluster service on node-2 (NIC in bridged mode):

image

Keep refreshing on the client: web1 and web2 appear alternately at a 1:1 ratio, which shows that round-robin (rr) scheduling is in effect.

image

The statistics on node-2 show a scheduling ratio of almost exactly 1:1,

confirming that the LVS scheduling method in use is rr (round robin).

[root@node2 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr
-> 192.168.145.200:80 Route 1 0 50

-> 192.168.145.201:80 Route 1 0 50

6.2 Test 2

Stop the ipvsadm service on node-2 and confirm the following:

[root@node2 ~]# service ipvsadm stop
Clearing the current IPVS table: [ OK ]
[root@node2 ~]# service ipvsadm status
ipvsadm is stopped
Start the ipvsadm service on node-1 and confirm the following:

[root@node1 ~]# service ipvsadm start
Clearing the current IPVS table: [ OK ]
Applying IPVS configuration: [ OK ]
[root@node1 ~]# service ipvsadm restart
Clearing the current IPVS table: [ OK ]
Applying IPVS configuration: [ OK ]
[root@node1 ~]# service ipvsadm status
ipvsadm dead but subsys locked
The client accesses the cluster service on node-1 (NIC in bridged mode):

image

Keep refreshing on the client: web1 and web2 appear alternately at a 1:1 ratio, which shows that round-robin (rr) scheduling is in effect.

image

The statistics on node-1 show a scheduling ratio of almost exactly 1:1,

confirming that the LVS scheduling method in use is rr (round robin).

[root@node1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr
-> 192.168.145.200:80 Route 1 0 25

-> 192.168.145.201:80 Route 1 0 25

7. Setting up the heartbeat service

7.1 First stop the ipvsadm service:

[root@node1 ~]# service ipvsadm stop

Clearing the current IPVS table: [ OK ]

[root@node1 ~]# service ipvsadm status

ipvsadm is stopped
[root@node2 ~]# service ipvsadm stop

Clearing the current IPVS table: [ OK ]

[root@node2 ~]# service ipvsadm status

ipvsadm is stopped

8. Testing

8.1 Test with the IP address:

image

[root@node1 ha.d]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP www.a.com:http rr
-> 192.168.145.200:http Route 1 0 7

-> 192.168.145.201:http Route 1 0 7

[root@node2 ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP www.a.com:http rr
-> 192.168.145.200:http Route 1 0 0

-> 192.168.145.201:http Route 1 0 0

[root@node1 ~]# cat /etc/resolv.conf
; generated by /sbin/dhclient-script
nameserver 192.168.145.200
nameserver 192.168.145.201
[root@node2 ~]# cat /etc/resolv.conf
; generated by /sbin/dhclient-script
nameserver 192.168.145.200
nameserver 192.168.145.201
[root@node1 ha.d]# ipvsadm -A -t 192.168.145.101:53 -s rr

[root@node1 ha.d]# ipvsadm -a -t 192.168.145.101:53 -r 192.168.145.200 -g

[root@node1 ha.d]# ipvsadm -a -t 192.168.145.101:53 -r 192.168.145.201 -g

[root@node1 ha.d]# ipvsadm -A -u 192.168.145.101:53 -s rr

[root@node1 ha.d]# ipvsadm -a -u 192.168.145.101:53 -r 192.168.145.200 -g

[root@node1 ha.d]# ipvsadm -a -u 192.168.145.101:53 -r 192.168.145.201 -g

[root@node1 ha.d]# service ipvsadm save
Saving IPVS table to /etc/sysconfig/ipvsadm: [ OK ]

[root@node1 ha.d]# cat /etc/sysconfig/ipvsadm

-A -u 192.168.145.101:53 -s rr
-a -u 192.168.145.101:53 -r 192.168.145.201:53 -g -w 1

-a -u 192.168.145.101:53 -r 192.168.145.200:53 -g -w 1

-A -t 192.168.145.101:53 -s rr
-a -t 192.168.145.101:53 -r 192.168.145.201:53 -g -w 1

-a -t 192.168.145.101:53 -r 192.168.145.200:53 -g -w 1

-A -t 192.168.145.101:80 -s rr
-a -t 192.168.145.101:80 -r 192.168.145.200:80 -g -w 1

-a -t 192.168.145.101:80 -r 192.168.145.201:80 -g -w 1

[root@node2 ~]# ipvsadm -A -t 192.168.145.101:53 -s rr

[root@node2 ~]# ipvsadm -a -t 192.168.145.101:53 -r 192.168.145.200 -g

[root@node2 ~]# ipvsadm -a -t 192.168.145.101:53 -r 192.168.145.201 -g

[root@node2 ~]# ipvsadm -A -u 192.168.145.101:53 -s rr

[root@node2 ~]# ipvsadm -a -u 192.168.145.101:53 -r 192.168.145.200 -g

[root@node2 ~]# ipvsadm -a -u 192.168.145.101:53 -r 192.168.145.201 -g

[root@node2 ~]# service ipvsadm save
Saving IPVS table to /etc/sysconfig/ipvsadm: [ OK ]

[root@node2 ~]#
[root@node2 ~]# cat /etc/sysconfig/ipvsadm

-A -u 192.168.145.101:53 -s rr
-a -u 192.168.145.101:53 -r 192.168.145.201:53 -g -w 1

-a -u 192.168.145.101:53 -r 192.168.145.200:53 -g -w 1

-A -t 192.168.145.101:53 -s rr
-a -t 192.168.145.101:53 -r 192.168.145.201:53 -g -w 1

-a -t 192.168.145.101:53 -r 192.168.145.200:53 -g -w 1

-A -t 192.168.145.101:80 -s rr
-a -t 192.168.145.101:80 -r 192.168.145.200:80 -g -w 1

-a -t 192.168.145.101:80 -r 192.168.145.201:80 -g -w 1

8.2 Access the site at http://www.a.com/

and keep refreshing; the following pages appear in turn:

image

Check the information on node1:

[root@node1 ha.d]# ifconfig eth0:0
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:66:E1:DA

inet addr:192.168.145.101 Bcast:192.168.145.143 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:19 Base address:0x2000

[root@node1 ha.d]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

UDP www.a.com:domain rr

-> 192.168.145.200:domain Route 1 0 51

-> 192.168.145.201:domain Route 1 0 49

TCP www.a.com:domain rr

-> 192.168.145.200:domain Route 1 0 0

-> 192.168.145.201:domain Route 1 0 0

TCP www.a.com:http rr

-> 192.168.145.201:http Route 1 0 31

-> 192.168.145.200:http Route 1 0 30

Check the information on node2:

[root@node2 ha.d]# ifconfig eth0:0
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:79:F8:F7

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:19 Base address:0x2000

[root@node2 ha.d]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

You can see that node1 is currently the active director and node2 is on standby.

9. Simulating a node1 failure and testing

9.1 Simulate the failure:

[root@node1 ha.d]# cd /usr/lib/heartbeat/

[root@node1 heartbeat]# ls
[root@node1 heartbeat]# ./hb_standby # (simulate a failure)
2012/04/02_17:00:35 Going standby [all].
Check the information on node1:

[root@node1 heartbeat]# ifconfig eth0:0
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:66:E1:DA

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:19 Base address:0x2000
[root@node1 heartbeat]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

Check the information on node2:

[root@node2 ha.d]# ifconfig eth0:0
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:79:F8:F7

inet addr:192.168.145.101 Bcast:192.168.145.143 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:19 Base address:0x2000
[root@node2 ha.d]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

UDP www.a.com:domain rr

-> 192.168.145.200:domain Route 1 0 9

-> 192.168.145.201:domain Route 1 0 9

TCP www.a.com:domain rr

-> 192.168.145.200:domain Route 1 0 0

-> 192.168.145.201:domain Route 1 0 0

TCP www.a.com:http rr

-> 192.168.145.201:http Route 1 0 0

-> 192.168.145.200:http Route 1 0 0

9.2 Access the site at http://www.a.com/

and keep refreshing; the following pages appear in turn:

image

[root@node2 ha.d]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

UDP www.a.com:domain rr

-> 192.168.145.200:domain Route 1 0 13

-> 192.168.145.201:domain Route 1 0 12

TCP www.a.com:domain rr

-> 192.168.145.200:domain Route 1 0 0

-> 192.168.145.201:domain Route 1 0 0

TCP www.a.com:http rr

-> 192.168.145.201:http Route 1 0 30

-> 192.168.145.200:http Route 1 0 30

9.3 Simulate recovery:

[root@node1 heartbeat]# ./hb_takeover
Check the information on node1:

[root@node1 heartbeat]# ifconfig eth0:0
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:66:E1:DA

inet addr:192.168.145.101 Bcast:192.168.145.143 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:19 Base address:0x2000

[root@node1 heartbeat]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

UDP www.a.com:domain rr

-> 192.168.145.200:domain Route 1 0 8

-> 192.168.145.201:domain Route 1 0 8

TCP www.a.com:domain rr

-> 192.168.145.200:domain Route 1 0 0

-> 192.168.145.201:domain Route 1 0 0

TCP www.a.com:http rr

-> 192.168.145.201:http Route 1 0 0

-> 192.168.145.200:http Route 1 0 0

Check the information on node2:

[root@node2 ha.d]# ifconfig eth0:0
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:79:F8:F7

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:19 Base address:0x2000

[root@node2 ha.d]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

9.4 Access the site by domain name and keep refreshing; the following pages appear in turn:

image

Check the information on node1:

[root@node1 heartbeat]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

UDP 192.168.145.101:53 rr

-> 192.168.145.200:53 Route 1 0 22

-> 192.168.145.201:53 Route 1 0 22

TCP 192.168.145.101:53 rr

-> 192.168.145.200:53 Route 1 0 0

-> 192.168.145.201:53 Route 1 0 0

TCP 192.168.145.101:80 rr

-> 192.168.145.201:80 Route 1 0 25

-> 192.168.145.200:80 Route 1 0 24

At this point the HA and LB cluster on Linux has been built successfully!

10. Simulating a web server failure

10.1 Failure test
10.1.1 Check the HA cluster information on node1:

(node1 is the active director, and the LVS table shows information for both real servers.)
[root@node1 ~]# ifconfig eth0:0
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:66:E1:DA

inet addr:192.168.145.101 Bcast:192.168.145.143 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:19 Base address:0x2000

[root@node1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

UDP 192.168.145.101:53 rr

-> 192.168.145.200:53 Route 1 0 97

-> 192.168.145.201:53 Route 1 0 97

TCP 192.168.145.101:53 rr

-> 192.168.145.200:53 Route 1 0 0

-> 192.168.145.201:53 Route 1 0 0

TCP 192.168.145.101:80 rr

-> 192.168.145.201:80 Route 1 0 0

-> 192.168.145.200:80 Route 1 0 0

10.1.2 Stop the httpd and named services on real-server-1 to simulate a failure of real-server-1:

[root@server1 ~]# service httpd stop
Stopping httpd: [ OK ]

[root@server1 ~]# service named stop
Stopping named: [ OK ]
[root@server1 ~]#
10.1.3 Check the HA cluster information on node1 again:

(node1 is now showing stale HA information: real-server-1 is no longer working, but node1 has no way of detecting this.)
[root@node1 ~]# ifconfig eth0:0
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:66:E1:DA

inet addr:192.168.145.101 Bcast:192.168.145.143 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:19 Base address:0x2000

[root@node1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

UDP 192.168.145.101:53 rr

-> 192.168.145.200:53 Route 1 0 97

-> 192.168.145.201:53 Route 1 0 97

TCP 192.168.145.101:53 rr

-> 192.168.145.200:53 Route 1 0 0

-> 192.168.145.201:53 Route 1 0 0

TCP 192.168.145.101:80 rr

-> 192.168.145.201:80 Route 1 0 0

-> 192.168.145.200:80 Route 1 0 0

10.1.4 The client keeps refreshing, but only the following page is shown, and the response is slow

(which indicates that real-server-1 has failed):

image

10.1.5 Check the HA cluster information on node1 again:

[root@node1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

UDP 192.168.145.101:53 rr

-> 192.168.145.200:53 Route 1 0 113

-> 192.168.145.201:53 Route 1 0 114

TCP 192.168.145.101:53 rr

-> 192.168.145.200:53 Route 1 0 0

-> 192.168.145.201:53 Route 1 0 0

TCP 192.168.145.101:80 rr

-> 192.168.145.201:80 Route 1 0 16

-> 192.168.145.200:80 Route 1 0 17

(node1 still believes real-server-1 is working and keeps scheduling requests to it, which by now is a serious problem.)

The fix is to give the director a way to know whether the real servers are actually working.

10.2 Solving the web server failure problem

10.2.1 Install and configure heartbeat-ldirectord on node1

[root@node1 ~]# cd HA/
[root@node1 HA]# ls
heartbeat-2.1.4-9.el5.i386.rpm
heartbeat-ldirectord-2.1.4-9.el5.i386.rpm
heartbeat-pils-2.1.4-10.el5.i386.rpm
heartbeat-stonith-2.1.4-10.el5.i386.rpm
libnet-1.1.4-3.el5.i386.rpm
perl-MailTools-1.77-1.el5.noarch.rpm
[root@node1 HA]# yum localinstall heartbeat-ldirectord-2.1.4-9.el5.i386.rpm --nogpgcheck -y

[root@node1 HA]# cp /usr/share/doc/heartbeat-ldirectord-2.1.4/ldirectord.cf /etc/ha.d

[root@node1 HA]# cd /etc/ha.d/
[root@node1 ha.d]# vim ldirectord.cf
21 quiescent=yes
24 virtual=192.168.145.101:80
25 real=192.168.145.200:80 gate

26 real=192.168.145.201:80 gate

27 service=http

28 request=".test.html"

29 receive="ok"

30 virtualhost=www.a.com

31 scheduler=rr

34 protocol=tcp
[root@node1 ha.d]# vim haresources
46 node1.a.com 192.168.145.101 ldirectord::ldirectord.cf

[root@node1 ha.d]# service heartbeat restart

Stopping High-Availability services:
[ OK ]

Waiting to allow resource takeover to complete:

[ OK ]

Starting High-Availability services:
2012/04/04_11:25:46 INFO: Resource is stopped
[ OK ]

10.2.2 Install and configure heartbeat-ldirectord on node2

[root@node2 ~]# cd HA/
[root@node2 HA]# ls
heartbeat-2.1.4-9.el5.i386.rpm
heartbeat-ldirectord-2.1.4-9.el5.i386.rpm
heartbeat-pils-2.1.4-10.el5.i386.rpm
heartbeat-stonith-2.1.4-10.el5.i386.rpm
libnet-1.1.4-3.el5.i386.rpm
perl-MailTools-1.77-1.el5.noarch.rpm
[root@node2 HA]# yum localinstall heartbeat-ldirectord-2.1.4-9.el5.i386.rpm --nogpgcheck -y

[root@node2 HA]# cp /usr/share/doc/heartbeat-ldirectord-2.1.4/ldirectord.cf /etc/ha.d

[root@node2 HA]# cd /etc/ha.d/
[root@node2 ha.d]# vim ldirectord.cf
21 quiescent=yes
24 virtual=192.168.145.101:80
25 real=192.168.145.200:80 gate

26 real=192.168.145.201:80 gate

27 service=http

28 request=".test.html"

29 receive="ok"

30 virtualhost=www.a.com

31 scheduler=rr

34 protocol=tcp
[root@node2 ha.d]# vim haresources
46 node1.a.com 192.168.145.101 ldirectord::ldirectord.cf

[root@node2 ha.d]# service heartbeat restart

Stopping High-Availability services:
[ OK ]

Waiting to allow resource takeover to complete:

[ OK ]

Starting High-Availability services:
2012/04/04_11:25:53 INFO: Resource is stopped
[ OK ]

10.2.3 Now check the HA cluster information on node1:

[root@node1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr
-> 192.168.145.201:80 Route 0 0 0

-> 192.168.145.200:80 Route 0 0 0

(Because line 21 of /etc/ha.d/ldirectord.cf contains quiescent=yes, the weight of the http real servers here is 0, i.e. they are not used for service at the moment.)

# Change line 21 of /etc/ha.d/ldirectord.cf to quiescent=no; the change is picked up automatically

[root@node1 ha.d]# vim ldirectord.cf
21 quiescent=no
[root@node2 ha.d]# vim ldirectord.cf
21 quiescent=no
# Then check the HA cluster information on node1 again

[root@node1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr

[root@node1 ha.d]#
(Because line 21 of /etc/ha.d/ldirectord.cf now contains quiescent=no, the http entries here are empty.)

10.2.4 Now add the following on real-server-1:

[root@server1 ~]# echo "ok" >> /var/www/html/.test.html

Check the HA cluster information on node1:

[root@node1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr

-> 192.168.145.200:80 Route 1 0 0

[root@node1 ha.d]#
Now add the following on real-server-2:

[root@server2 ~]# echo "ok" >> /var/www/html/.test.html

Check the HA cluster information on node1:

[root@node1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr

-> 192.168.145.201:80 Route 1 0 0

-> 192.168.145.200:80 Route 1 0 0

[root@node1 ha.d]#
10.2.5 Now stop the httpd service on real-server-1; the result is:

[root@server1 ~]# service httpd stop
Stopping httpd: [ OK ]

Check the HA cluster information on node1:

[root@server1 ~]#
[root@node1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr

-> 192.168.145.201:80 Route 1 0 0

Stop the httpd service on both real-server-1 and real-server-2; the result is:

[root@server1 ~]# service httpd stop
Stopping httpd: [ OK ]

[root@server1 ~]#
[root@server2 ~]# service httpd stop
Stopping httpd: [ OK ]

[root@server2 ~]#
Check the HA cluster information on node1:

[root@node1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr

10.2.6 Start the httpd service on real-server-1 and real-server-2 again to restore normal operation:

[root@server1 ~]# service httpd start
Starting httpd: httpd: apr_sockaddr_info_get() failed for server1.a.com

httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName

[ OK ]

[root@server1 ~]#
[root@server2 ~]# service httpd start
Starting httpd: httpd: apr_sockaddr_info_get() failed for server2.a.com

httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName

[ OK ]

[root@server2 ~]#
Check the HA cluster information on node1:

[root@node1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr

-> 192.168.145.200:80 Route 1 0 0

-> 192.168.145.201:80 Route 1 0 0

At this point the HA and LB cluster on Linux has been built successfully!
