HA Cluster configuration prerequisites:
1. The hostname configured on each node must match the name returned by hostname (uname -n);
CentOS 6: /etc/sysconfig/network
CentOS 7: hostnamectl set-hostname HOSTNAME
All nodes must be able to resolve each other's hostnames; resolving via the hosts file is generally recommended (so the cluster does not break if the DNS service becomes unreachable);
2. Time must be synchronized across all nodes;
3. Make sure iptables and SELinux do not get in the way of the services;
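A minimal sketch of these prerequisites on CentOS 7; the node names, IP addresses, and the use of chrony are placeholder assumptions:
hostnamectl set-hostname node1.example.com
echo "192.168.20.5 node1.example.com node1" >> /etc/hosts   # repeat for every node
echo "192.168.20.6 node2.example.com node2" >> /etc/hosts
yum install -y chrony && systemctl enable chronyd && systemctl start chronyd   # or: ntpdate <time-server>
systemctl stop firewalld && systemctl disable firewalld     # or open the VRRP/service ports instead
setenforce 0                                                 # and set SELINUX=permissive in /etc/selinux/config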
keepalived is an implementation of the VRRP protocol that runs as a daemon on Linux hosts; it can automatically generate ipvs rules from its configuration file;
It can also perform health checks on each RS (real server);
Components of the configuration file (keepalived.conf):
1.GLOBAL CONFIGURATION
2.VRRPD CONFIGURATION
vrrp instance
vrrp synchronization group
3.LVS CONFIGURATION
For help: man keepalived.conf
keepalived does not write its own log file by default; to fix this:
Edit /etc/sysconfig/keepalived and change it to
KEEPALIVED_OPTIONS="-D -S 3"
Edit /etc/rsyslog.conf and add the line
local3.* /var/log/keepalived.log
Then restart the rsyslog and keepalived services:
systemctl restart rsyslog.service
systemctl restart keepalived.service
Use systemctl status keepalived to view the details.
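Once both services are back up, VRRP state transitions should start appearing in the new log file; a quick check, assuming the paths above:
tail -n 20 /var/log/keepalived.log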
Example:
global_defs {
    notification_email {
        [email protected]
        [email protected]      # recipients of the alerts collected from the monitored services
        [email protected]
    }
    notification_email_from [email protected]   # sender address
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    #vrrp_mcast_group1 224.0.0.100   # multicast address used for heartbeats between nodes; with multiple instances, do not set it globally -- set it per instance or leave it unset
}
vrrp_instance VI_1 {
    state MASTER                 # initial state: MASTER or BACKUP
    interface eth0               # NIC the floating IP is bound to
    #use_vmac <VMAC_INTERFACE>   # optional virtual MAC address
    virtual_router_id 51         # ID of this virtual router group, used to tell multiple groups apart; must be unique
    priority 100                 # in preempt mode, a node whose state is MASTER can still be preempted if its own priority is lower
    advert_int 1                 # interval in seconds between heartbeat advertisements
    authentication {
        auth_type PASS           # PASS means simple password authentication; MD5-based authentication is also available
        auth_pass 1111           # password (can be generated with: openssl rand -hex 4)
    }
    virtual_ipaddress {
        192.168.200.16
        192.168.200.17           # virtual IP addresses; the next example shows the detailed form with alias, device, etc.
        192.168.200.18
    }
    nopreempt                    # non-preempt mode; the default is preempt mode
}
virtual_ipaddress {
    <IPADDR>/<MASK> brd <IPADDR> dev <STRING> scope <SCOPE> label <LABEL>
    192.168.200.17/24 dev eth1
    192.168.200.18/24 dev eth2 label eth2:1
}
To manually turn the master node into a backup node without restarting the service:
# Define a script outside the vrrp instance context:
vrrp_script chk_mantaince_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1     # how often to run the check, in seconds
    weight -2      # how much to lower this node's priority when the down file is detected
}
# Call it inside the vrrp instance context:
track_script {
    chk_mantaince_down
}
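With this in place, the master can be demoted and restored by hand; this assumes the -2 weight is enough to drop its priority below the backup's:
touch /etc/keepalived/down   # the check fails, priority drops by 2, and the backup takes over the VIP
rm -f /etc/keepalived/down   # priority is restored; in preempt mode this node reclaims the VIP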
Sync group definition: when keepalived provides high availability and at the same time load-balances (for example as an LVS-NAT director), two virtual routers must be defined, one for the external network and one for the internal network. When the external IP moves to the other host, the internal IP has to move with it, and likewise when the internal IP moves, the external IP must follow. The two virtual routers are therefore merged into one group, which is exactly what the LVS-NAT load-balancing model below requires.
vrrp_sync_group VG_1 {
    group {
        VI_1   # name of vrrp_instance (below)
        VI_2   # One for each moveable IP.
    }
}
vrrp_instance VI_1 {
    eth0   # external NIC, carries the VIP
}
vrrp_instance VI_2 {
    eth1   # internal NIC, carries the DIP
}
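A fuller sketch of the two synchronized instances for an LVS-NAT director; the interface names, the external VIP 172.16.100.1 and the internal DIP 192.168.10.254 are placeholder assumptions:
vrrp_sync_group VG_1 {
    group {
        VI_1
        VI_2
    }
}
vrrp_instance VI_1 {                # external side
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.100.1/16 dev eth0    # VIP that clients connect to
    }
}
vrrp_instance VI_2 {                # internal side
    state MASTER
    interface eth1
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
        192.168.10.254/24 dev eth1  # DIP that the real servers use as their gateway
    }
}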
Sending notifications when the host's state changes inside a vrrp instance:
# notify scripts, alert as above
notify_master <STRING>|<QUOTED-STRING>   # notify when this host becomes the master node
notify_backup <STRING>|<QUOTED-STRING>   # notify when this host becomes a backup node
notify_fault <STRING>|<QUOTED-STRING>    # notify when this host enters the fault state
notify <STRING>|<QUOTED-STRING>          # a script you specify yourself
smtp_alert
For example (defined inside the vrrp_instance context):
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
A simple notify script example:
#!/bin/bash
vip=172.16.20.100
contact='root@localhost'

notify() {
    mailsubject="`hostname` to be $1: $vip floating"
    mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
    echo $mailbody | mail -s "$mailsubject" $contact
}

case "$1" in
    master)
        notify master
        # /etc/rc.d/init.d/keepalived start
        exit 0
        ;;
    backup)
        notify backup
        # /etc/rc.d/init.d/keepalived stop
        exit 0
        ;;
    fault)
        notify fault
        # /etc/rc.d/init.d/keepalived stop
        exit 0
        ;;
    *)
        echo "Usage: `basename $0` {master|backup|fault}"
        exit 1
        ;;
esac
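The script has to be executable before keepalived can call it, and it can be tried by hand first (assuming it is saved as /etc/keepalived/notify.sh):
chmod +x /etc/keepalived/notify.sh
/etc/keepalived/notify.sh master   # should deliver a test mail to root@localhost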
Case 1: lvs-dr + keepalived for load balancing and high availability
1. Initialize the two real servers: run ./lvs.sh start on r1 and r2
#!/bin/bash
#
case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
stop)
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
esac
2. Configure the VIP on both real servers and add the host route:
[root@node1 ~]# ifconfig lo:0 192.168.20.100 netmask 255.255.255.255 broadcast 192.168.20.100 up
[root@node1 ~]# route add -host 192.168.20.100 dev lo:0
Test phase: before introducing keepalived, verify that plain LVS works.
Configure the VIP on one of the directors:
ip addr add 192.168.20.100/32 dev eno50332208
[root@node3 ~]# ip addr list eno50332208 | grep 100
4: eno50332208: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.20.100/32 scope global eno50332208
Add the rules on the director:
[root@node3 ~]# ipvsadm -A -t 192.168.20.100:80 -s rr
[root@node3 ~]# ipvsadm -a -t 192.168.20.100:80 -r 192.168.20.7 -g -w 1
[root@node3 ~]# ipvsadm -a -t 192.168.20.100:80 -r 192.168.20.8 -g -w 1
Accessing the virtual IP from another host shows round-robin scheduling:
[root@node4 ~]# curl 192.168.20.100
httpd on node1
[root@node4 ~]# curl 192.168.20.100
httpd on node3
3. Install httpd on both directors as the sorry server
yum install httpd
Configure the sorry-server pages:
echo "sorry, under maintenance, here is director1" > /var/www/html/index.html
echo "sorry, under maintenance, here is director2" > /var/www/html/index.html
4. Configure keepalived
yum install keepalived
keepalived.conf:
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from leeha@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_script chk_mantaince_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -2
}
vrrp_instance VI_1 {
    state MASTER               # defined as BACKUP on the second director
    interface eno50332208
    virtual_router_id 51
    priority 100               # defined as 99 on the second director
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.20.100 dev eno50332208
    }
    track_script {
        chk_mantaince_down
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
#ipvs configuration:
virtual_server 192.168.20.100 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    nat_mask 255.255.0.0
    persistence_timeout 0
    protocol TCP
    sorry_server 127.0.0.1 80   # configure the sorry server
    #real server health check, using HTTP_GET
    real_server 192.168.20.7 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            # url {
            #     path /mrtg/
            #     digest 9b3a0c85a887a256d6939da88aabd8cd
            # }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    #real server health check
    real_server 192.168.20.8 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            # url {
            #     path /mrtg/
            #     digest 9b3a0c85a887a256d6939da88aabd8cd
            # }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
TIPS: the health check can also use TCP_CHECK
## TCP_CHECK {
##     connect_timeout 3
## }
Provide the notify script: the simple script example given above is enough.
Start keepalived on both directors.
Tests:
1. Capture packets and observe:
tcpdump -i eno50332208 -nn host 192.168.20.1
2. The lvs rules were generated automatically from the keepalived configuration:
[root@node3 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.20.100:80 wrr
  -> 192.168.20.7:80              Route   1      0          0
  -> 192.168.20.8:80              Route   1      0          0
3. The VIP was configured automatically:
[root@node3 keepalived]# ip addr list | grep 100/32
    inet 192.168.20.100/32 scope global eno50332208
4. The notification mail was received:
Message 23:
From [email protected] Wed Oct 21 02:23:16 2015
Return-Path: <[email protected]>
X-Original-To: root@localhost
Delivered-To: [email protected]
Date: Wed, 21 Oct 2015 02:23:16 -0700
To: [email protected]
Subject: node3.lee.com to be master: 192.168.20.100 floating
User-Agent: Heirloom mailx 12.5 7/5/10
Content-Type: text/plain; charset=us-ascii
From: [email protected] (root)
Status: RO
2015-10-21 02:23:16: vrrp transition, node3.lee.com changed to be master
5. Take the real server 192.168.20.7 offline: service httpd stop
On the director, its rule has been removed:
[root@node3 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.20.100:80 wrr
  -> 192.168.20.8:80              Route   1      0          0
You have new mail in /var/spool/mail/root
6. Take the first director offline and test
Create the down file under /etc/keepalived/ on director1: the address moves to the second director and the real servers are still reachable.
7. Take both real servers offline and check that the sorry server kicks in:
[root@node3 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.20.100:80 wrr
  -> 127.0.0.1:80                 Route   1      0          0
Case 2: keepalived + nginx for a highly available, load-balanced web service
Load balancing with nginx: configure nginx on both nodes to load-balance the backend hosts
upstream upservers {
    server 192.168.20.7 weight=1;
    server 192.168.20.8 weight=2;
}
Reference it:
location / {
    proxy_pass http://upservers/;
    index index.html index.htm;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
Tip: killall -0 nginx can be used to check whether a process is alive.
keepalived.conf configuration:
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from leeha@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_script chk_mantaince_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -2
}
#a script that lets keepalived monitor the nginx service
vrrp_script chk_nginx {
    script "killall -0 nginx &> /dev/null"
    interval 1
    weight -10
}
vrrp_instance VI_1 {
    state MASTER               # change to BACKUP on the second node
    interface eno50332208
    virtual_router_id 51
    priority 100               # change to 99 on the second node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.20.100 dev eno50332208 label eno50332208:0
    }
    track_script {
        chk_mantaince_down
    }
    #call the nginx check script defined above
    track_script {
        chk_nginx
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
For an active/active (dual-master) setup, add a second instance so that nginx on both nodes serves traffic; if one node goes down, the other carries both VIPs.
In the dual-master case the notify script must not run systemctl restart nginx.service, because a failed nginx on the backup node would then interfere with the master node's nginx serving normally.
vrrp_instance VI_2 {
    state BACKUP
    interface eno50332208
    virtual_router_id 61
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
        192.168.20.111 dev eno50332208 label eno50332208:1
    }
    track_script {
        chk_mantaince_down
    }
    track_script {
        chk_nginx
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
Once this is defined, restart keepalived and nginx. The master node holds the VIP 192.168.20.100; because the script above monitors the nginx service, the VIP moves to the backup node when the master's nginx goes down.
Configure the notify script:
#!/bin/bash
# Author: MageEdu <[email protected]>
# description: An example of notify script
#
vip=192.168.20.100
contact='root@localhost'

notify() {
    mailsubject="`hostname` to be $1: $vip floating"
    mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
    echo $mailbody | mail -s "$mailsubject" $contact
}

case "$1" in
    master)
        notify master
        #systemctl restart nginx.service   # restarting nginx here ensures the master node is always used as long as it is online
        exit 0
        ;;
    backup)
        notify backup
        #systemctl restart nginx.service
        exit 0
        ;;
    fault)
        notify fault
        exit 0
        ;;
    *)
        echo "Usage: `basename $0` {master|backup|fault}"
        exit 1
        ;;
esac
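A quick verification of the dual-master setup (a sketch, assuming the two VIPs from the configuration above and a client that can reach them): both VIPs should answer while both nodes are healthy, and both should keep answering from the surviving node when one node's nginx is stopped.
curl http://192.168.20.100/
curl http://192.168.20.111/
ip addr show eno50332208   # run on each node: normally one VIP per node, both VIPs on the survivor after a failure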