1. Architecture diagram for this article:
Purpose of each server:
1. haproxy provides load balancing in this architecture
2. keepalived provides high availability for haproxy
3. apache static serves the static pages
4. apache dynamic serves the dynamic pages; there are two of them in the diagram, and requests are load balanced across them
Configuring each functional module:
I. Configure haproxy and keepalived
Verification:
1. After one keepalived instance goes down, will the VIP move to the other server?
2. When one haproxy service fails, will the VIP move to the other server?
Note:
If keepalived itself goes down while the haproxy service is still running normally, should we let the other server take the VIP over?
In theory, preferably not. But the script inside our keepalived configuration is what monitors the haproxy process; once keepalived is down, there is no way to learn haproxy's health state, and the node can no longer decide whether to lower its own priority. So in theory it would be better not to move the VIP, but in practice keepalived alone cannot achieve that.
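To make the mechanics concrete: the health check we configure below is `killall -0 haproxy`, and keepalived adds the script's `weight` to the node's priority when the check fails. A minimal sketch of that arithmetic (illustration only; keepalived performs this internally, and `effective_priority` is our own made-up helper name):

```shell
#!/bin/bash
# Sketch of the priority arithmetic behind a failing vrrp_script with
# "weight -2" (hypothetical helper; keepalived does this internally).
effective_priority() {
    # $1 = configured priority, $2 = script weight, $3 = exit code of the check
    local base=$1 weight=$2 rc=$3
    if [ "$rc" -eq 0 ]; then
        echo "$base"                # "killall -0 haproxy" succeeded: haproxy alive
    else
        echo $((base + weight))     # check failed: priority drops
    fi
}
# The MASTER runs at priority 100, the BACKUP at 99; if haproxy dies on the
# MASTER: 100 - 2 = 98 < 99, so the BACKUP wins the next election.
effective_priority 100 -2 1
```

This is also exactly why a dead keepalived cannot demote itself: with the daemon gone, nobody runs the check or adjusts the priority.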
Configuration:
1. Install keepalived on both servers
[root@station139 ~]# yum -y install keepalived
2. Configure keepalived
[root@node2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost                  # address to notify when the service state changes
    }
    notification_email_from kaadmin@localhost
    smtp_server 127.0.0.1               # smtp server to send mail through
    smtp_connect_timeout 30             # give up if the smtp server is unreachable for 30 seconds
    router_id LVS_DEVEL
}

vrrp_script chk_haproxy {               # checks the health of the haproxy service on this host
    script "killall -0 haproxy"
    interval 1
    weight -2
}

vrrp_instance VI_1 {
    state MASTER                        # this server is the keepalived master for this instance
    interface eth0                      # advertise through the eth0 interface
    virtual_router_id 200               # change the virtual router id if several keepalived clusters share one LAN
    priority 100                        # priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 11112222
    }
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        192.168.1.200                   # virtual IP held by this host
    }
    notify_master "/etc/keepalived/notify.sh master"    # scripts run in the different states
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

vrrp_instance VI_2 {                    # backup for the other server's master instance
    state BACKUP
    interface eth0
    virtual_router_id 57
    priority 99                         # set lower than the other master's priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_haproxy                     # the only health check defined in this file
    }
    virtual_ipaddress {
        192.168.1.201
    }
}
3. Write the script keepalived runs in its different states
#!/bin/bash
# Author: MageEdu <[email protected]>
# description: An example of notify script
#
vip=192.168.1.200
contact='root@localhost'

notify() {
    mailsubject="`hostname` to be $1: $vip floating"
    mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
    echo "$mailbody" | mail -s "$mailsubject" $contact
}

case "$1" in
master)
    notify master
    /etc/rc.d/init.d/haproxy start
    exit 0
    ;;
backup)
    notify backup
    /etc/rc.d/init.d/haproxy stop
    exit 0
    ;;
fault)
    notify fault
    /etc/rc.d/init.d/haproxy stop
    exit 0
    ;;
*)
    echo "Usage: `basename $0` {master|backup|fault}"
    exit 1
    ;;
esac

Make the script executable:
chmod +x /etc/keepalived/notify.sh
4. Configure haproxy
Because we want dynamic/static separation, the configuration must send dynamic resources and static resources to different backend services.
[root@node2 ~]# yum -y install haproxy      # install haproxy
[root@node2 ~]# vim /etc/haproxy/haproxy.cfg
global
    # log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode http                               # haproxy works in http mode
    log global
    option httplog
    option dontlognull
    option http-server-close                # allow the server side to close the connection when the client times out
    option forwardfor except 127.0.0.0/8    # insert an X-Forwarded-For header into the forwarded requests
    option redispatch                       # usually added with cookie-based persistence: if a backend
                                            # server goes down, its sessions can be redispatched to the
                                            # other upstream servers
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend main *:80                          # frontend proxy
    acl url_static path_beg -i /static /images /javascript /stylesheets
    acl url_static path_end -i .jpg .gif .png .css .js
    acl url_dynamic path_end -i .php
    use_backend static if url_static
    default_backend dynamic

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static                              # serves the static requests
    balance roundrobin
    server static 192.168.1.100:80 inter 3000 rise 2 fall 3 check maxconn 5000

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend dynamic                             # serves the dynamic requests
    balance roundrobin
    server dynamic1 192.168.1.101:80 inter 3000 rise 2 fall 3 check maxconn 5000
    server dynamic2 192.168.1.102:80 inter 3000 rise 2 fall 3 check maxconn 5000

listen statistics
    mode http
    bind *:8080
    stats enable
    stats auth admin:admin
    stats uri /admin?stats                  # URI path of the stats page
    stats admin if TRUE
    stats hide-version
    stats refresh 5s
    acl allow src 192.168.0.0/24            # access control list
    tcp-request content accept if allow
    tcp-request content reject
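To see how the frontend ACLs route a request, here is the same classification re-implemented in shell (a sketch for illustration only; haproxy evaluates `path_beg`/`path_end` natively, and this version ignores the `-i` case-insensitivity flag):

```shell
#!/bin/bash
# Shell sketch of the frontend ACL decision: static prefixes/extensions go to
# the "static" backend, everything else falls through to "dynamic".
pick_backend() {
    local path=$1
    case "$path" in
        /static*|/javascript*|/stylesheets*) echo static ;;   # path_beg rules
        *.jpg|*.gif|*.png|*.css|*.js)        echo static ;;   # path_end rules
        *)                                   echo dynamic ;;  # default_backend
    esac
}
pick_backend /1.png         # -> static
pick_backend /index.php     # -> dynamic
```

Note that `.php` requests reach the dynamic backend not through the `url_dynamic` acl (which is defined but never used in a `use_backend` rule) but simply via `default_backend`.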
5. Configure the other haproxy server
Since the two servers' configurations are largely the same, we simply copy the configuration files and script we finished above to this haproxy server and make a few small changes.
[root@node2 ~]# scp /etc/keepalived/keepalived.conf [email protected]:/etc/keepalived/
[email protected]'s password:
keepalived.conf                          100% 4546   4.4KB/s   00:00
[root@node2 ~]# scp /etc/keepalived/notify.sh [email protected]:/etc/keepalived/
[email protected]'s password:
notify.sh                                100%  751   0.7KB/s   00:00
[root@node2 ~]# scp /etc/haproxy/haproxy.cfg [email protected]:/etc/haproxy/
[email protected]'s password:
haproxy.cfg                              100% 3529   3.5KB/s   00:00
With the files transferred, next edit /etc/keepalived/keepalived.conf; since /etc/haproxy/haproxy.cfg is identical on both nodes, it needs no changes.
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from kaadmin@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 1
    weight -2
}

vrrp_instance VI_1 {
    state BACKUP                # changed from MASTER to BACKUP on this node
    interface eth0
    virtual_router_id 200
    priority 99                 # priority set lower than on the first node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 11112222
    }
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        192.168.1.200
    }
}

vrrp_instance VI_2 {
    state MASTER                # MASTER here; it was BACKUP on the first node
    interface eth0
    virtual_router_id 57
    priority 100                # and the priority is higher than on the first node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.201
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
Note:
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
The scripts run in these 3 states may only be placed in the MASTER instance. The reason: the two nodes are masters for each other, so each master instance has a corresponding backup instance on the other node. If these "3 state scripts" were also written into the backup instance, then the other node's instance, being in the backup state, would run the script and thereby stop the very service we are trying to keep highly available, with the result that both VIPs migrate to a single server.
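A toy model of the failure mode just described (hypothetical sketch; in reality keepalived invokes notify.sh itself on each state transition):

```shell
#!/bin/bash
# What each notify hook would do if it were (wrongly) attached to the BACKUP
# instance as well. "on_transition" is our own illustrative helper.
on_transition() {
    # $1 = vrrp instance, $2 = state it just entered; prints the action taken
    case "$2" in
        master)       echo "$1: haproxy start" ;;
        backup|fault) echo "$1: haproxy stop"  ;;
    esac
}
# node2 boots into its normal steady state: VI_2 is MASTER, VI_1 is BACKUP.
on_transition VI_2 master   # correct: its own master instance starts haproxy
on_transition VI_1 backup   # wrong: the backup instance would stop it again
```

Because both instances live on the same host and the hooks start/stop the same haproxy process, the BACKUP instance's notify would kill the service the node's own MASTER instance depends on, and both VIPs would end up on one server.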
Now let us verify whether the VIP moves when keepalived or haproxy goes down:
Start the keepalived and haproxy services on both nodes
[root@node2 ~]# service haproxy start
Starting haproxy:                                          [  OK  ]
[root@node2 ~]# service keepalived start
Starting keepalived:                                       [  OK  ]
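With both services running, you can confirm from either node which VIP it currently holds. A small helper (our own, assuming iproute2's `ip` command is available); note that addresses added by keepalived do not show up in `ifconfig`:

```shell
#!/bin/bash
# Check whether a given address is configured on an interface.
has_vip() {
    # $1 = interface, $2 = address; succeeds if the address is on the interface
    ip -o addr show dev "$1" | grep -qw "$2"
}
# On node2 in the normal state we expect .200 here but not .201:
if has_vip eth0 192.168.1.200; then echo "this node holds 192.168.1.200"; fi
if has_vip eth0 192.168.1.201; then echo "this node holds 192.168.1.201"; fi
```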
The following shows the normal situation:
keepalived 1:
keepalived 2:
Now let us simulate a failure by stopping the first haproxy, and see whether both VIPs move over to keepalived 2:
[root@node2 ~]# service haproxy stop
Stopping haproxy:                                          [  OK  ]
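From a client machine you can watch both VIPs during the failover (a sketch; `probe` is our own helper, and it assumes `curl` is installed and the VIPs are reachable from the client):

```shell
#!/bin/bash
# Probe a URL and report its HTTP status, or "down" when the request fails.
probe() {
    local code
    code=$(curl -s -o /dev/null -m 2 -w '%{http_code}' "$1") || code=down
    echo "$1 -> $code"
}
probe http://192.168.1.200/
probe http://192.168.1.201/
```

Run it in a loop (for example under `watch -n1`) while stopping haproxy on node2: both VIPs should keep answering, because the notify script moves them to the surviving node.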
Check keepalived 1 and keepalived 2
See, both VIPs have moved over.
Verifying load balancing and dynamic/static separation
We give the 3 web servers different pages:
1. Give apache static a static page, to verify that requests for pages not ending in .php are all directed to this server
2. Give apache dynamic 1 and 2 an index.php each, to demonstrate load balancing of dynamic pages
We give apache static a page matching the -i .jpg .gif .png .css .js rule; a simple image page will do
apache static
scp 1.png [email protected]:/var/www/html
apache dynamic 1
vim /var/www/html/index.php

192.168.1.101
<?php
    phpinfo();
?>
apache dynamic 2
vim /var/www/html/index.php

192.168.1.102
<?php
    phpinfo();
?>
1. Request the static file 1.png
2. Request a page ending in .php
As we can see, dynamic pages ending in .php are now load balanced
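The alternation we observed is plain roundrobin. As a sketch (haproxy keeps this counter internally; `rr_pick` is our own illustration):

```shell
#!/bin/bash
# Sketch of roundrobin across the two servers in the "dynamic" backend.
rr_pick() {
    # $1 = zero-based request number
    local servers=(192.168.1.101 192.168.1.102)
    echo "${servers[$(( $1 % 2 ))]}"
}
rr_pick 0    # first request  -> 192.168.1.101
rr_pick 1    # second request -> 192.168.1.102
```

With equal weights each server receives every other request, which matches the alternating index.php responses above.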
Let us also try accessing through the virtual IP 192.168.1.201:
This shows the dual-master model works as well; both haproxy instances are serving at the same time.
3. Finally, let us look at the stats page