FastDFS's Painful Cluster and Load Balancing (18): LVS+Keepalived Dual-Master Mode


Interesting things

In our earlier Keepalived+LVS single-master setup, only one LVS node actually handles traffic, which wastes resources. A dual-master layout puts both LVS nodes to work: DNS round-robin sends users to each LVS in turn. The dual-master structure therefore needs two VIPs, and both VIPs must be bound to the domain name.

As before, Keepalived is installed on each LVS node. When Keepalived detects that one LVS has gone down, it floats the failed node's VIP over to the surviving LVS; when that node recovers, the VIP floats back.

Here is the topology diagram I drew.
Initial state

image.png

One of the masters goes down

image.png

The master recovers
image.png

Required environment
vip1 192.168.12.101
vip2 192.168.12.102
lvs_master1 192.168.12.12
lvs_master2 192.168.12.13
nginx1 192.168.12.2
nginx2 192.168.12.3
tomcat1 192.168.12.6
tomcat2 192.168.12.7

What did you do today

Dual-master mode differs from the master/backup setup in four ways:
1. DNS round-robin.
2. The LVS load-balancing layer needs two VIPs, i.e. 192.168.12.101 and 192.168.12.102.
3. The backend real servers must bind both VIPs to the lo loopback device.
4. keepalived.conf differs from the master/backup configuration above.

  • On 192.168.12.2 and 192.168.12.3, both VIPs must be bound to the local loopback interface lo (as lo:0 and lo:1 respectively), so create the double_master_lvsdr0 and double_master_lvsdr1 scripts under /etc/init.d/, as follows:
#!/bin/sh
VIP=192.168.12.101
. /etc/rc.d/init.d/functions

case "$1" in

start)
    /sbin/ifconfig lo down
    /sbin/ifconfig lo up
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    /sbin/sysctl -p >/dev/null 2>&1
    /sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 up  
    /sbin/route add -host $VIP dev lo:0
    echo "LVS-DR real server starts successfully."
    ;;
stop)
    /sbin/ifconfig lo:0 down
    /sbin/route del $VIP >/dev/null 2>&1
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "LVS-DR real server stopped."
    ;;
status)
    isLoOn=`/sbin/ifconfig lo:0 | grep "$VIP"`
    isRoOn=`/bin/netstat -rn | grep "$VIP"`
    if [ "$isLoOn" = "" -a "$isRoOn" = "" ]; then
        echo "LVS-DR real server is not running."
        exit 3
    else
        echo "LVS-DR real server is running."
    fi
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    exit 1
esac
exit 0
The double_master_lvsdr1 script is identical apart from the VIP and the interface name:

#!/bin/sh
VIP=192.168.12.102
. /etc/rc.d/init.d/functions

case "$1" in

start)
    /sbin/ifconfig lo down
    /sbin/ifconfig lo up
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    /sbin/sysctl -p >/dev/null 2>&1
    /sbin/ifconfig lo:1 $VIP netmask 255.255.255.255 up   
    /sbin/route add -host $VIP dev lo:1
    echo "LVS-DR real server starts successfully."
    ;;
stop)
    /sbin/ifconfig lo:1 down
    /sbin/route del $VIP >/dev/null 2>&1
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "LVS-DR real server stopped."
    ;;
status)
    isLoOn=`/sbin/ifconfig lo:1 | grep "$VIP"`
    isRoOn=`/bin/netstat -rn | grep "$VIP"`
    if [ "$isLoOn" = "" -a "$isRoOn" = "" ]; then
        echo "LVS-DR real server is not running."
        exit 3
    else
        echo "LVS-DR real server is running."
    fi
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    exit 1
esac
exit 0
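Note that the four /proc writes in these scripts do not survive a reboot. One way to persist the same ARP settings on the real servers is /etc/sysctl.conf (a sketch, assuming the standard sysctl layout; reload with `sysctl -p`):

```conf
# /etc/sysctl.conf additions on the real servers (192.168.12.2 / 192.168.12.3)
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```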
  • Make double_master_lvsdr0 and double_master_lvsdr1 start at boot:

    [root@localhost init.d]# chmod +x double_master_lvsdr0
    [root@localhost init.d]# chmod +x double_master_lvsdr1
    [root@localhost init.d]# echo "/etc/init.d/double_master_lvsdr0" >> /etc/rc.d/rc.local
    [root@localhost init.d]# echo "/etc/init.d/double_master_lvsdr1" >> /etc/rc.d/rc.local
    image.png

  • Start the double_master_lvsdr0 and double_master_lvsdr1 scripts
    image.png

  • Check 192.168.12.2 and 192.168.12.3: both VIPs are now bound to the local loopback interface lo.
    ![image.png](http://upload-images.jianshu.io/upload_images/4636177-adea5a03fe4e034f.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)

  • Enable ip_forward routing on lvs_master1 and lvs_master2:

    [root@localhost ~]# echo "1" > /proc/sys/net/ipv4/ip_forward
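As with the real servers' ARP settings, this /proc write is lost on reboot; a sketch of making it permanent via /etc/sysctl.conf on both directors:

```conf
# /etc/sysctl.conf on lvs_master1 and lvs_master2; apply with `sysctl -p`
net.ipv4.ip_forward = 1
```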

  • keepalived.conf on lvs_master1:

! Configuration File for keepalived

global_defs {
   router_id LVS_MASTER
}

vrrp_script check_lvs {
   script "/etc/keepalived/lvs_check.sh"
   interval 2
   weight  -20
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    track_script {
        check_lvs
    }

    virtual_ipaddress {
        192.168.12.101
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    track_script {
        check_lvs
    }

    virtual_ipaddress {
        192.168.12.102
    }
}

virtual_server 192.168.12.101 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP


    real_server 192.168.12.2 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.12.3 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}


virtual_server 192.168.12.102 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP


    real_server 192.168.12.2 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.12.3 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
  • keepalived.conf on lvs_master2:
! Configuration File for keepalived

global_defs {
   router_id LVS_BACKUP
}

vrrp_script check_lvs {
   script "/etc/keepalived/lvs_check.sh"
   interval 2
   weight  -20
}

vrrp_instance VI_1 {
    state BACKUP        
    interface eth0           
    virtual_router_id 51     
    priority 90             
    advert_int 1             
    authentication {
        auth_type PASS       
        auth_pass 1111       
    }

    track_script {
        check_lvs
    }

    virtual_ipaddress {
        192.168.12.101       
    }
}

vrrp_instance VI_2 {
    state MASTER         
    interface eth0         
    virtual_router_id 52 
    priority 100          
    advert_int 1          
    authentication {
        auth_type PASS     
        auth_pass 1111     
    }

    track_script {
        check_lvs
    }

    virtual_ipaddress {
        192.168.12.102   
    }
}

virtual_server 192.168.12.101 80 {
    delay_loop 6             
    lb_algo wrr              
    lb_kind DR               
    #nat_mask 255.255.255.0
    persistence_timeout 50   
    protocol TCP            


    real_server 192.168.12.2 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.12.3 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}


virtual_server 192.168.12.102 80 {
    delay_loop 6             
    lb_algo wrr              
    lb_kind DR               
    #nat_mask 255.255.255.0
    persistence_timeout 50   
    protocol TCP            


    real_server 192.168.12.2 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.12.3 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
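The reason the two configs mirror each other: in each VRRP instance, the node with the higher effective priority holds the VIP, and a failing check_lvs adds its weight of -20. A small illustrative sketch of that arithmetic (the numbers come from the configs above; the script itself is not part of keepalived):

```shell
# VI_1 base priorities: lvs_master1 is MASTER (100), lvs_master2 is BACKUP (90)
MASTER_PRIO=100
BACKUP_PRIO=90
WEIGHT=-20                          # check_lvs weight in both configs

healthy=$MASTER_PRIO                # check_lvs passing: master keeps 100
failed=$((MASTER_PRIO + WEIGHT))    # check_lvs failing: 100 - 20 = 80

# 80 < 90, so the BACKUP node wins the VRRP election and takes the VIP
if [ "$failed" -lt "$BACKUP_PRIO" ]; then
    echo "VIP fails over to the BACKUP node"
fi
```

The same comparison runs in the opposite direction for VI_2, which is why each director normally owns exactly one VIP.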
  • Write the lvs_check.sh health-check script:
#!/bin/sh
aa=`ipvsadm -ln`
str="Route"
bb=`echo $aa|grep $str|wc -l`
if [ $bb = 0 ];then     
    sleep 3
    aa=`ipvsadm -ln`
    bb=`echo $aa|grep $str|wc -l`
    if [ $bb = 0 ];then
        killall keepalived
    fi
fi
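The script works because `ipvsadm -ln` prints one line containing "Route" for every attached DR real server, so a zero count means the LVS table is empty. A sketch of that detection logic run against sample output (the sample text below is illustrative, not captured from this cluster):

```shell
# Illustrative sample of `ipvsadm -ln` output with two DR real servers attached
sample='TCP  192.168.12.101:80 wrr persistent 50
  -> 192.168.12.2:80    Route   3      0          0
  -> 192.168.12.3:80    Route   3      0          0'

# The same test lvs_check.sh performs: count occurrences of "Route"
count=$(printf '%s\n' "$sample" | grep -c "Route")

if [ "$count" -eq 0 ]; then
    echo "LVS table empty: the script would killall keepalived"
else
    echo "healthy: $count real servers attached"
fi
```

Note that the unquoted `echo $aa` in the original script collapses the output onto a single line, so its `wc -l` count acts as a 0-or-1 flag; the zero test behaves the same either way.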
  • Start nginx, double_master_lvsdr0, and double_master_lvsdr1 on 192.168.12.2 and 192.168.12.3, and start Tomcat on 192.168.12.6 and 192.168.12.7.

  • Checking the eth0 interface on lvs_master1, we see vip1 (192.168.12.101) bound to it.
    image.png

  • Checking the eth0 interface on lvs_master2, we see vip2 (192.168.12.102) bound to it.
    image.png

  • Check the LVS and real-server state on lvs_master1.
    image.png

  • Check the LVS and real-server state on lvs_master2.
    image.png

  • Edit the hosts file (C:\Windows\System32\drivers\etc\hosts) and point cmazxiaoma.mayday.com at both VIPs. (Set up this way, the hosts file gives no load balancing; 192.168.12.101 is always tried first.)
    image.png
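For genuine load balancing between the two VIPs, the name has to be served by a DNS server doing round-robin rather than a static hosts file. A hypothetical BIND zone snippet (assuming a mayday.com zone, which is not shown in this post) would look roughly like:

```conf
; Two A records for one name -- resolvers rotate between them
cmazxiaoma    IN  A    192.168.12.101
cmazxiaoma    IN  A    192.168.12.102
```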

  • Visit cmazxiaoma.mayday.com
    image.png

  • Stop lvs_master1.
    image.png

  • When we bring lvs_master1 back, vip1 returns to lvs_master1 and lvs_master2 loses it again.
    image.png
    image.png

  • The eth0, LVS, and real-server state on lvs_master2.
    image.png
    image.png

Summary

That's it for tonight's overtime; time to go home and take care of my girlfriend!
