HAProxy load balancing at layer 7 and layer 4

HAProxy Basics

Software: haproxy — primarily used for layer-7 load balancing, but it can also do layer-4 load balancing.
Apache can also do layer-7 load balancing, but it is cumbersome and nobody uses it for that in practice.
Load balancing is classified by the OSI layer it works at:
Layer-7 load balancing: uses the layer-7 HTTP protocol.
Layer-4 load balancing: uses TCP plus the port number.

-----------------------------------------------------------------------------------------
HAProxy overview
HAProxy is a high-performance load balancer. Because it focuses on load balancing alone, it does that one job better and more professionally than Nginx.

Features of HAProxy
As one of the most popular load balancers today, HAProxy naturally has its strong points. Below are its advantages over LVS, Nginx, and other load balancers.

• Supports load balancing for both TCP and HTTP, giving it a very rich load-balancing feature set.
• Supports around eight load-balancing algorithms; in HTTP mode in particular there are several very practical algorithms to suit all kinds of needs.
• Excellent performance; its single-process processing model (similar to Nginx) makes it outstanding.
• Ships with an excellent statistics page that gives a real-time view of the system's current state.
• Powerful ACL support, which is extremely convenient for users.

HAProxy algorithms:
1. roundrobin
Weighted round-robin. When server processing times stay evenly distributed this is the most balanced and fairest algorithm. It is dynamic, meaning server weights can be adjusted at runtime.
2. static-rr
Weighted round-robin like roundrobin, but static: adjusting a server's weight at runtime has no effect. On the other hand, it places no limit on the number of connections to a backend server.
3. leastconn
New connection requests are dispatched to the backend server with the fewest active connections.
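The algorithm is selected with the balance keyword inside a backend (or listen) section. A minimal sketch follows (hypothetical backend/server names; the addresses are borrowed from the lab below); writing it to a scratch file lets haproxy -c syntax-check it without touching the real configuration:

cat > /tmp/balance-demo.cfg <<'EOF'
backend app_servers
    balance leastconn                      # or: roundrobin / static-rr
    server app1 192.168.246.162:80 weight 1 check
    server app2 192.168.246.163:80 weight 2 check
EOF
haproxy -c -f /tmp/balance-demo.cfg        # syntax check only; may warn about missing defaults/timeouts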

1. Layer-7 load balancing with HAProxy

Keepalived + Haproxy
=================================================================================
/etc/haproxy/haproxy.cfg
global                                   # process-wide global parameters
    log         127.0.0.1 local2 info    # log server
    pidfile     /var/run/haproxy.pid     # pid file
    maxconn     4000                     # maximum number of connections
    user        haproxy                  # user to run as
    group       haproxy                  # group to run as
    daemon                               # run in the background as a daemon
    nbproc 1                             # number of worker processes; set to the number of CPU cores
The defaults section supplies default parameters for the other sections.
A listen section combines frontend and backend in one block.

frontend        the virtual service (Virtual Server)
backend         the real servers (Real Server)

When the frontend/backend style is used, one scheduler can serve several sites at the same time (see the sketch after this list):
frontend1 backend1
frontend2 backend2
frontend3 backend3
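A rough sketch of that layout (section names and ports here are hypothetical): each site gets its own frontend/backend pair, simply bound to different ports in this example.

cat > /tmp/multisite-demo.cfg <<'EOF'
frontend site1
    bind *:80
    mode http
    default_backend site1_servers
frontend site2
    bind *:8080
    mode http
    default_backend site2_servers
backend site1_servers
    balance roundrobin
    server s1 192.168.246.162:80 check
backend site2_servers
    balance roundrobin
    server s2 192.168.246.163:80 check
EOF
haproxy -c -f /tmp/multisite-demo.cfg      # syntax check; warnings about missing timeouts are expected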
Keepalived + Haproxy
=================================================================================


Topology

							[vip: 192.168.246.17]

						[LB1 Haproxy]		[LB2 Haproxy]
						192.168.246.169	    192.168.246.161

				       [httpd]				      [httpd] 
				    192.168.246.162		         192.168.246.163

I. HAProxy deployment steps
1. Preparation (all hosts in the cluster)
[root@ha-proxy-master ~]# cat /etc/hosts
127.0.0.1      	localhost
192.168.246.169	ha-proxy-master
192.168.246.161	ha-proxy-slave
192.168.246.162	test-nginx1 
192.168.246.163	test-nginx2

2. RS configuration
Set up the web servers and test every RS; install nginx on all of them.
[root@test-nginx1 ~]# yum install -y nginx
[root@test-nginx1 ~]# systemctl start nginx
[root@test-nginx1 ~]# echo "test-nginx1" >> /usr/share/nginx/html/index.html
# Give each nginx server its own number, in order, so they are easy to tell apart.
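Before putting them behind HAProxy, confirm that each RS answers on port 80 (the marker appended above should show up at the end of the output):

curl http://192.168.246.162
curl http://192.168.246.163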
3. Configure HAProxy on the schedulers (run on both master and backup)
[root@ha-proxy-master ~]# yum -y install haproxy
[root@ha-proxy-master ~]# cp -rf /etc/haproxy/haproxy.cfg{,.bak}
[root@ha-proxy-master ~]# sed -i -r '/^[ ]*#/d;/^$/d' /etc/haproxy/haproxy.cfg
[root@ha-proxy-master ~]# vim /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2 info
    pidfile     /var/run/haproxy.pid
    maxconn     4000                 # lowest precedence (overridden by defaults/listen values)
    user        haproxy
    group       haproxy
    daemon                           # run ha-proxy in the background
    nbproc 1                         # number of worker processes; set to the number of CPU cores
defaults
    mode                    http     # working mode: tcp is layer 4, http is layer 7
    log                     global
    retries                 3        # health check: 3 failed connections mark the server as unavailable (works with the "check" option below)
    option                  redispatch   # when a server becomes unavailable, redispatch the request to another healthy server
    maxconn                 4000     # medium precedence
    contimeout              5000     # timeout for haproxy connecting to a backend server, in milliseconds
    clitimeout              50000    # client timeout
    srvtimeout              50000    # backend server timeout
listen stats
    bind                    *:81
    stats                   enable
    stats uri               /haproxy      # browse to http://192.168.246.169:81/haproxy to see server status
    stats auth              qianfeng:123  # user authentication (does not take effect when the client is the elinks browser)
frontend  web
    mode                    http
    bind                    *:80          # which IP and port to listen on
    option                  httplog       # enable HTTP-format logging
    acl html url_reg  -i  \.html$         # 1. ACL named "html": matches URLs ending in .html (optional)
    use_backend httpservers if  html      # 2. if the "html" ACL matches, send the request to the backend "httpservers"
    default_backend    httpservers        # default backend group
backend httpservers                       # the name must match the one referenced above
    balance     roundrobin                # load-balancing algorithm
    server  http1 192.168.246.162:80 maxconn 2000 weight 1  check inter 1s rise 2 fall 2
    server  http2 192.168.246.163:80 maxconn 2000 weight 1  check inter 1s rise 2 fall 2
Copy the configuration file to the slave server:
[root@ha-proxy-master ~]# scp  /etc/haproxy/haproxy.cfg 192.168.246.161:/etc/haproxy/
Start the service on both machines and enable it at boot:
[root@ha-proxy-master ~]# systemctl start haproxy
[root@ha-proxy-master ~]# systemctl enable haproxy
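Optionally verify that haproxy is running and listening on port 80 (web) and 81 (stats) on both machines:

systemctl status haproxy --no-pager
ss -tnlp | grep haproxy      # expect listeners on *:80 and *:81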

4. Test master/backup (browser access)
Master: 192.168.246.169:81/haproxy
Backup: 192.168.246.161:81/haproxy

Key fields on the stats page
Queue
Cur: current queued requests
Max: maximum queued requests
Limit: queue limit

Errors
Req: request errors
Conn: connection errors

Server list:
Status: up (backend server alive) or down (backend server dead)
LastChk: result and age of the most recent health check on the backend server
Wght (weight): server weight
========================================================
5. Test access
Access the backend servers through the HAProxy IP address:
# curl http://192.168.246.169
If you get a bind failure error, run the following command:
setsebool -P haproxy_connect_any=1
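With both backends healthy, repeated requests should alternate between them because of the roundrobin algorithm; a quick loop makes that visible (the marker appended in step 2 is the last line of each page):

for i in 1 2 3 4; do curl -s http://192.168.246.169 | tail -1; done
# expected output alternates between test-nginx1 and test-nginx2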
II. Scheduler HA with Keepalived
Note: both the master and the backup scheduler are able to schedule normally.
1. Install the software on both schedulers
[root@ha-proxy-master ~]# yum install -y keepalived
[root@ha-proxy-slave ~]# yum install -y keepalived
2. Configure keepalived (master and backup)
[root@ha-proxy-master ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@ha-proxy-master ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id director1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.246.17/24
    }
}

[root@ha-proxy-slave ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@ha-proxy-slave ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id directory2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    nopreempt
    virtual_router_id 80
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.246.17/24
    }
}
3. Start keepalived (on both master and backup)
[root@ha-proxy-master ~]# systemctl start keepalived 
[root@ha-proxy-master ~]# systemctl enable keepalived
[root@ha-proxy-master ~]# ip a
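The output of ip a on the master should now show the VIP 192.168.246.17 on ens33. A simple failover test (optional): stop keepalived on the master, confirm the VIP appears on the backup, then restore the master:

systemctl stop keepalived                    # on the master
ip a show ens33 | grep 192.168.246.17        # on the backup: the VIP should now be here
systemctl start keepalived                   # on the master; it reclaims the VIP (higher priority)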

4. Extension: health-check HAProxy on the scheduler (optional)
Approach: do this on both machines.
Have Keepalived run an external script at a fixed interval; when HAProxy has failed, the script stops Keepalived on the local machine so that the VIP fails over to the peer.
a. The script
[root@ha-proxy-master ~]# cat /etc/keepalived/check_haproxy_status.sh
#!/bin/bash
# Probe the local HAProxy frontend; if it does not respond, stop keepalived
# so the VIP can fail over to the other scheduler.
/usr/bin/curl -I http://localhost &>/dev/null
if [ $? -ne 0 ];then
#       /etc/init.d/keepalived stop
        systemctl stop keepalived
fi
[root@ha-proxy-master ~]# chmod a+x /etc/keepalived/check_haproxy_status.sh
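A quick manual test of the script (optional): with haproxy running it should do nothing; with haproxy stopped it should stop keepalived on this node.

systemctl stop haproxy
/etc/keepalived/check_haproxy_status.sh
systemctl is-active keepalived               # expected: inactive
systemctl start haproxy && systemctl start keepalived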
b. Use the script from keepalived
[root@ha-proxy-master keepalived]# vim keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id director1
}
vrrp_script check_haproxy {
   script "/etc/keepalived/check_haproxy_status.sh"
   interval 5
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.246.17/24
    }
    track_script {
        check_haproxy
    }
}
[root@ha-proxy-slave keepalived]# vim keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id directory2
}
vrrp_script check_haproxy {
   script "/etc/keepalived/check_haproxy_status.sh"
   interval 5
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    nopreempt
    virtual_router_id 80
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.246.17/24
    }
    track_script {
        check_haproxy
    }
}
[root@ha-proxy-master keepalived]# systemctl restart keepalived
[root@ha-proxy-slave keepalived]# systemctl restart keepalived
Note: HAProxy must be started before Keepalived.
Configure HAProxy logging on both machines: uncomment the lines below and add the log rule.
[root@ha-proxy-master ~]# vim /etc/rsyslog.conf 
# Provides UDP syslog reception    # haproxy sends its logs over UDP, so rsyslog's UDP listener must be enabled
$ModLoad imudp
$UDPServerRun 514
Under the  #### RULES ####  section, add:
local2.*                       /var/log/haproxy.log
[root@ha-proxy-master ~]# systemctl restart rsyslog
[root@ha-proxy-master ~]# systemctl restart haproxy
[root@ha-proxy-master ~]# tail -f /var/log/haproxy.log 
2019-07-13T23:11:35+08:00 localhost haproxy[906]: Connect from 192.168.246.1:56866 to 192.168.246.17:80 (web/HTTP)
2019-07-13T23:11:35+08:00 localhost haproxy[906]: Connect from 192.168.246.1:56867 to 192.168.246.17:80 (web/HTTP)
2019-07-13T23:13:39+08:00 localhost haproxy[906]: Connect from 192.168.246.1:56889 to 192.168.246.17:80 (stats/HTTP)
2019-07-13T23:13:39+08:00 localhost haproxy[906]: Connect from 192.168.246.1:56890 to 192.168.246.17:80 (web/HTTP)
2019-07-13T23:14:07+08:00 localhost haproxy[906]: Connect from 192.168.246.1:56895 to 192.168.246.17:80 (web/HTTP)
2019-07-13T23:14:07+08:00 localhost haproxy[906]: Connect from 192.168.246.1:56896 to 192.168.246.17:80 (stats/HTTP)

Exercise: Layer-4 load balancing with HAProxy

Configuration file for both haproxy machines:
[root@ha-proxy-master ~]# cat /etc/haproxy/haproxy.cfg
Haproxy L4
=================================================================================
global
    log         127.0.0.1 local2
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    nbproc      1
defaults
    mode                    http
    log                     global
    option                  redispatch
    retries                 3
    maxconn                 4000
    contimeout              5000
    clitimeout              50000
    srvtimeout              50000
listen stats
    bind                    *:81
    stats                   enable
    stats uri               /haproxy
    stats auth              qianfeng:123
frontend  web
    mode                    http
    bind                    *:80
    option                  httplog
    default_backend    httpservers
backend httpservers
    balance     roundrobin
    server  http1 192.168.246.162:80 maxconn 2000 weight 1  check inter 1s rise 2 fall 2
    server  http2 192.168.246.163:80 maxconn 2000 weight 1  check inter 1s rise 2 fall 2
listen mysql
    bind *:3306
    mode tcp
    balance roundrobin
    server mysql1 192.168.246.163:3306 weight 1  check inter 1s rise 2 fall 2
    server mysql2 192.168.246.162:3306 weight 1  check inter 1s rise 2 fall 2
inter is the health-check interval, in milliseconds (a value such as 1s also works); fall 2 means the server is marked down after two consecutive failed checks; rise 2 means it is marked up again after two consecutive successful checks. By default, HAProxy assumes a server is always available; only when check is added does HAProxy actually verify that the service is reachable.
Use a separate machine as a client for the test, and remember to grant MySQL remote-login privileges on the backends (a quick check follows below).
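A minimal check from the client (credentials here are hypothetical; remote access must be granted on both MySQL servers first): two consecutive connections through the VIP should land on different backends.

mysql -h 192.168.246.17 -P 3306 -uroot -p123456 -e "SELECT @@hostname;"
mysql -h 192.168.246.17 -P 3306 -uroot -p123456 -e "SELECT @@hostname;"
# with roundrobin the two runs should report different backend hostnames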