Deploying a Highly Available Load-Balancing Cluster on KVM Virtual Machines (1)

1. Overview

This post records the process of deploying a highly available load-balancing cluster on KVM virtual machines.

High-availability software: keepalived. Load-balancing software: LVS.

LVS provides load-balanced scheduling of access to backend services, for example services on ports 80, 22, or 443. Keepalived, in turn, provides high availability for the LVS nodes themselves, so that a single point of failure does not interrupt access to the service.

2. Deployment Process

This post uses two virtual machines, node13 and node14, as a hot-standby load-balancing cluster: together they provide the highly available load-balancing service. node15 and node16 act as the backend real servers and expose an sshd service. The goal is for node13 and node14 to load-balance access to port 22 on node15 and node16.
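
For reference, the planned topology is sketched below. The post does not state the addresses of node13 and node14; 192.168.80.13 and 192.168.80.14 are assumed here purely for illustration, following the numbering of the real servers.

                      clients
                         |
              VIP 192.168.80.188:22
               /                    \
   node13 (MASTER)            node14 (BACKUP)        <- keepalived + LVS (DR mode)
   192.168.80.13 (assumed)    192.168.80.14 (assumed)
               \                    /
   node15 192.168.80.15:22    node16 192.168.80.16:22  <- real servers running sshd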

2.1 Configure the Load Balancer Nodes

According to the plan, node13 and node14 act as the load balancers, so they need ipvsadm and keepalived installed.

Perform the following steps on both node13 and node14. Installation:

yum -y install ipvsadm keepalived

Run ipvsadm -ln on node13 or node14; the output looks like this:

$ ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

node13 is the master load-balancing node. Edit /etc/keepalived/keepalived.conf (vim /etc/keepalived/keepalived.conf) with the following content:

global_defs {
   router_id LVS_MASTER
}

vrrp_instance VI_1 {
    state MASTER              # initial VRRP state of this node
    interface eth0            # interface carrying VRRP traffic and the VIP
    virtual_router_id 51      # must be identical on master and backup
    priority 100              # higher than the backup's priority
    advert_int 1              # VRRP advertisement interval in seconds
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.188        # the floating VIP that clients connect to
    }
}

virtual_server 192.168.80.188 22 {
    delay_loop 6                   # health-check interval in seconds
    lb_algo rr                     # round-robin scheduling
    lb_kind DR                     # direct-routing (DR) mode
    !persistence_timeout 50        # disable this when load-balancing port 22
    protocol TCP

    real_server 192.168.80.15 22 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            delay_before_retry 3
            connect_port 22
        }
    }

    real_server 192.168.80.16 22 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            delay_before_retry 3
            connect_port 22
        }
    }
}
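
keepalived programs these IPVS rules itself from the virtual_server section above; the roughly equivalent manual ipvsadm commands are shown below only to illustrate what ends up in the kernel (you do not need to run them):

# add the virtual service on the VIP, TCP port 22, round-robin scheduling
ipvsadm -A -t 192.168.80.188:22 -s rr
# add both real servers in direct-routing (DR) mode with weight 1
ipvsadm -a -t 192.168.80.188:22 -r 192.168.80.15:22 -g -w 1
ipvsadm -a -t 192.168.80.188:22 -r 192.168.80.16:22 -g -w 1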

node14 is the backup load-balancing node; its keepalived.conf differs only in router_id, state, and priority:

global_defs {
   router_id LVS_BACKUP
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.188
    }
}

virtual_server 192.168.80.188 22 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    !persistence_timeout 50    # disable this when load-balancing port 22
    protocol TCP

    real_server 192.168.80.15 22 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            delay_before_retry 3
            connect_port 22
        }
    }

    real_server 192.168.80.16 22 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            delay_before_retry 3
            connect_port 22
        }
    }
}

On node13 and node14, run:

systemctl start keepalived 
systemctl enable keepalived
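
As a quick sanity check under the configuration above, the VIP should be bound to eth0 on the MASTER only, the virtual service should be visible in IPVS, and the keepalived log shows the VRRP state transitions:

ip addr show eth0 | grep 192.168.80.188        # VIP present on node13 (MASTER) only
ipvsadm -ln                                    # virtual service 192.168.80.188:22 with both real servers
journalctl -u keepalived --no-pager | tail     # VRRP transitions and health-check messages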

On node13 and node14, edit /etc/rc.local (vim /etc/rc.local) so that it contains:

touch /var/lock/subsys/local
echo "1" > /proc/sys/net/ipv4/ip_forward
exit 0

chmod +x /etc/rc.local
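
Alternatively, the forwarding setting can be made persistent through sysctl instead of rc.local; a minimal sketch (the file name 90-lvs.conf is arbitrary):

echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/90-lvs.conf
sysctl --system    # reload all sysctl configuration files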

2.2 Configure the Real Server Startup Script

The following steps are performed on the load-balanced real servers, node15 and node16.

Next, create the LVS real-server startup script, vim /etc/init.d/realserver, with the following content:

#!/bin/sh
# LVS-DR real server helper: binds the VIP to lo:0 and suppresses ARP replies
# for it, so that the director keeps ownership of the VIP on the LAN.
VIP=192.168.80.188
. /etc/rc.d/init.d/functions

case "$1" in
start)
    /sbin/ifconfig lo down
    /sbin/ifconfig lo up
    # do not answer ARP requests for addresses bound only to lo
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    /sbin/sysctl -p >/dev/null 2>&1
    # bind the VIP to lo:0 with a host mask and add a host route for it
    /sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev lo:0
    echo "LVS-DR real server started successfully."
    ;;
stop)
    /sbin/ifconfig lo:0 down
    /sbin/route del $VIP >/dev/null 2>&1
    # restore the default ARP behaviour
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "LVS-DR real server stopped."
    ;;
status)
    isLoOn=`/sbin/ifconfig lo:0 | grep "$VIP"`
    isRoOn=`/bin/netstat -rn | grep "$VIP"`
    if [ "$isLoOn" = "" -a "$isRoOn" = "" ]; then
        echo "LVS-DR real server is not running."
        exit 3
    else
        echo "LVS-DR real server is running."
        exit 0
    fi
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    exit 1
esac
exit 0

Run: chmod +x /etc/init.d/realserver

Run: service realserver start
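
To confirm that the real-server setup took effect, check that the VIP is bound to lo:0 and that the ARP kernel parameters have the expected values (assuming the script above ran successfully):

ip addr show lo                          # should list 192.168.80.188/32 on lo:0
sysctl net.ipv4.conf.all.arp_ignore      # expected: 1
sysctl net.ipv4.conf.all.arp_announce    # expected: 2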

3. Testing

[root@node13][~]
$ ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.80.188:22 rr
  -> 192.168.80.15:22             Route   1      0          2         
  -> 192.168.80.16:22             Route   1      0          2   

Run ssh [email protected] twice; the two logins land on node15 and node16 respectively.

[root@node13][~]
$ ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.80.188:22 rr
  -> 192.168.80.15:22             Route   1      1          2         
  -> 192.168.80.16:22             Route   1      1          2  
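
For a non-interactive check of the round-robin scheduling, a short loop prints the hostname of whichever real server answers each connection (these one-shot connections close immediately, so they count towards InActConn rather than ActiveConn in the ipvsadm output):

for i in 1 2; do ssh [email protected] hostname; done
# expected output: node15 and node16, one per connection (order may vary)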

Run virsh destroy node13 to shut down the master node. The service is taken over by node14, and the existing ssh connections to node15 and node16 are dropped. After reconnecting, run ipvsadm -ln on node14 and the new connections are visible.
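
To confirm the takeover explicitly, the same checks used on node13 earlier can be repeated on node14; with the configuration above the VIP should now be bound to node14's eth0:

ip addr show eth0 | grep 192.168.80.188   # VIP has moved to node14
ipvsadm -ln                               # new ssh connections appear under ActiveConn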
