Deploying LVS-DR Load Balancing and Keepalived High Availability

I. LVS-DR Load Balancing

server1: acts as the Linux Director

1. Install ipvsadm (on RHEL 6.5 the yum repositories must be configured first)
[root@server1 ~]# cat /etc/yum.repos.d/rhel-source.repo

[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.254.79/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[HighAvailability]
name=HighAvailability
baseurl=http://172.25.254.79/rhel6.5/HighAvailability
enabled=1
gpgcheck=0

[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.254.79/rhel6.5/LoadBalancer
enabled=1
gpgcheck=0

[ResilientStorage]
name=ResilientStorage
baseurl=http://172.25.254.79/rhel6.5/ResilientStorage
enabled=1
gpgcheck=0

[ScalableFileSystem]
name=ScalableFileSystem
baseurl=http://172.25.254.79/rhel6.5/ScalableFileSystem
enabled=1
gpgcheck=0
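
With the repositories in place, ipvsadm itself can be installed; on RHEL 6.5 it should come from the LoadBalancer channel configured above:
[root@server1 ~]# yum install -y ipvsadm      ## install the IPVS management tool
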
Add ipvsadm rules to bind the real servers to the Linux Director through the VIP:
[root@server1 ~]# ipvsadm -A -t 172.25.254.100:80 -s rr       ## add the virtual service for the VIP
[root@server1 ~]# ipvsadm -a -t 172.25.254.100:80 -r 172.25.254.241:80 -g  ## add real server 172.25.254.241 (direct routing)
[root@server1 ~]# ipvsadm -a -t 172.25.254.100:80 -r 172.25.254.1:80 -g    ## add real server 172.25.254.1
[root@server1 ~]# ipvsadm -l                                  ## list the IPVS table
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.254.100:http rr
  -> server3:http                 Route   1      0          0         
  -> server2:http                 Route   1      0          0      
[root@server1 ~]# /etc/init.d/ipvsadm save                    # save the rules

[root@server1 ~]# ip addr add 172.25.254.100/24 dev eth0      # add the VIP to the Director's NIC
[root@server1 ~]# ip addr | grep 100/24
    inet 172.25.254.100/24 scope global secondary eth0
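
Neither the rules nor the manually added VIP come back automatically after a reboot. A minimal sketch of what is usually done on RHEL 6 (the init script normally saves its rules to /etc/sysconfig/ipvsadm; the VIP itself would have to be re-added as well, or managed by keepalived as in part III):
[root@server1 ~]# chkconfig ipvsadm on            ## reload the saved rules at boot
[root@server1 ~]# cat /etc/sysconfig/ipvsadm      ## verify what "ipvsadm save" wrote out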

server2:(real-server)

[root@server2 ~]# /etc/init.d/arptables_jf start       ## keep the real server from answering ARP for the VIP, so clients are never routed to it directly (rules shown in the sketch after the server3 block)
[root@server2 ~]# service httpd start
Redirecting to /bin/systemctl start  httpd.service
[root@server2 ~]# echo "server2" > /var/www/html/index.html
[root@server2 ~]# curl localhost
server2
[root@server2 ~]# ip addr add 172.25.254.100/24 dev eth0       ## configure the VIP locally so server2 accepts packets addressed to it

server3:(real-server)

[root@server3 ~]# /etc/init.d/arptables_jf start  
[root@server3 ~]# echo "server3" > /var/www/html/index.html
[root@server3 ~]# curl localhost
server3
[root@server3 ~]# ip addr add 172.25.254.100/24 dev eth0
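
The arptables_jf service only loads whatever rules have been saved; the rules that actually stop a real server from answering ARP requests for the VIP are not shown above. A minimal sketch of the usual rules on server2 (server3 is the same with its own RIP; 172.25.254.241 is assumed here to be server2's address, adjust to the real RIPs):
[root@server2 ~]# arptables -A IN -d 172.25.254.100 -j DROP                                    ## never answer ARP requests for the VIP
[root@server2 ~]# arptables -A OUT -s 172.25.254.100 -j mangle --mangle-ip-s 172.25.254.241   ## use the real IP as the source in outgoing ARP
[root@server2 ~]# /etc/init.d/arptables_jf save                                               ## persist the rules so the service loads them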

foundation (client):

In DR mode, the LVS director rewrites the destination MAC address of each incoming frame to the MAC address of server2 or server3 and sends the modified frame back onto the LAN. Because the VIP is also configured locally on server2/server3, the real server accepts the packet and, following its own routing table, responds to the client directly, so the reply does not pass back through the director.

## round-robin test
[root@79 ~]# curl 172.25.254.100
server2
[root@79 ~]# curl 172.25.254.100
server3
[root@79 ~]# curl 172.25.254.100
server2
[root@79 ~]# curl 172.25.254.100
server3
[root@79 lvs]# arp -d 172.25.254.100   ## clear the ARP cache entry for the VIP
[root@79 lvs]# arp -e 172.25.254.100   ## check which MAC address the VIP resolves to
Address                  HWtype  HWaddress           Flags Mask            Iface
172.25.254.100           ether   52:54:00:95:05:b1   C                     br0
[root@79 lvs]# ip addr              ## matches server1's NIC (vnet2), so the VIP resolves to the Director
18: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN qlen 500
    link/ether fe:54:00:95:05:b1 brd ff:ff:ff:ff:ff:ff
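
Another way to confirm the DR behaviour described earlier is to watch the traffic on the client side while the curls run: requests go out to server1's MAC, while the HTTP replies arrive straight from the real servers' MACs. A sketch, assuming the client's bridge is br0 as in the arp output above:
[root@79 ~]# tcpdump -e -n -i br0 host 172.25.254.100 and port 80   ## -e prints the link-level (MAC) headers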

II. LVS Health Checks with ldirectord

ldirectord manages the LVS load-balancing resource and handles its failover between the active and standby nodes. On first start it can build the IPVS table automatically. It also monitors the state of each RealServer and removes any RealServer from the IPVS table as soon as it stops responding.

The ldirectord process determines a RealServer's state by sending a request to the RealServer's RIP and examining the response. On the Director, each VIP needs its own ldirectord process. If a RealServer fails to answer ldirectord's requests, ldirectord removes it from the IPVS table via ipvsadm; once the RealServer comes back online, ldirectord adds it back to the table.

[root@server1 ~]# yum install -y ldirectord-3.9.5-3.1.x86_64.rpm 
[root@server1 ~]# cd /etc/ha.d/
[root@server1 ha.d]# ls
resource.d  shellfuncs
[root@server1 ha.d]# cp /usr/share/doc/ldirectord-3.9.5/ldirectord.cf  /etc/ha.d/
[root@server1 ha.d]# vim ldirectord.cf

virtual=172.25.254.100:80           # the LVS virtual service: VIP and port
        real=172.25.254.1:80 gate   # a RealServer; syntax: real=RIP:port gate|masq|ipip [weight]
        real=172.25.254.241:80 gate
        fallback=127.0.0.1:80 gate  # fallback server used when all RealServers are down
        service=http                # service used to test the RealServers
        scheduler=rr                # scheduling algorithm: round robin
        #persistent=600             # persistent-connection timeout
        #netmask=255.255.255.255
        protocol=tcp                # protocol used by this virtual service
        checktype=negotiate         # method ldirectord uses to check the RealServers: {negotiate|connect|<number>|off}
        checkport=80                # port used for the health check
        request="index.html"        # page requested from each RealServer
        receive="Test Page"         # expected content of the requested page
        virtualhost=www.x.y.z


[root@server1 ha.d]# /etc/init.d/ldirectord start
After stopping Apache on server2:
[root@server1 ~]# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.254.100:http rr
  -> server3:http                 Route   1      0          0    

foundation:

[root@79 ~]# curl 172.25.254.100
server3
[root@79 ~]# curl 172.25.254.100
server3
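
The reverse direction can be checked the same way: once Apache on server2 is brought back, ldirectord should re-add it to the IPVS table, as described above. A quick sketch (not captured output):
[root@server2 ~]# service httpd start          ## bring the failed RealServer back
[root@server1 ~]# ipvsadm -L                   ## server2 should reappear next to server3 after the next check interval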

When all back-end RealServers are down, the fallback page defined above is returned; it is served by the local httpd on the Director:

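For the fallback=127.0.0.1:80 entry to return anything, a web server has to be listening on the Director itself. A minimal sketch, with the page text chosen only for illustration:
[root@server1 ~]# yum install -y httpd
[root@server1 ~]# echo "site under maintenance" > /var/www/html/index.html   ## illustrative content
[root@server1 ~]# /etc/init.d/httpd start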

III. Keepalived High Availability

Keepalived is a high-availability tool built on the VRRP protocol. One server acts as the master and one or more servers act as backups; all of them carry the same service configuration and share a single virtual IP address. When the master fails, the virtual IP automatically floats to a backup server.

1. Install Keepalived from source
[root@server1 ~]# /etc/init.d/ldirectord stop   # ldirectord conflicts with Keepalived, so stop it first
Stopping ldirectord... Success
## ldirectord is enabled at boot, so its autostart must be turned off as well
[root@server1 ~]# chkconfig ldirectord off
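
On a minimal RHEL 6.5 install the build below typically needs a compiler and the OpenSSL headers that keepalived's configure script checks for; installing them first is an assumption about this environment:
[root@server1 ~]# yum install -y gcc openssl-devel
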
[root@server1 ~]# tar zxf keepalived-1.3.6.tar.gz 
[root@server1 ~]# cd keepalived-1.3.6
[root@server1 keepalived-1.3.6]# ./configure --prefix=/usr/local/keepalived --with-init=SYSV
[root@server1 keepalived-1.3.6]# make
[root@server1 keepalived-1.3.6]# make install
2. Configure Keepalived
[root@server1 etc]# cd /usr/local/keepalived/etc/rc.d/init.d/
[root@server1 init.d]# ls
keepalived
## keepalived was installed under its own prefix, which the system does not look in, so create symlinks to the expected paths
[root@server1 init.d]# ln -s /usr/local/keepalived/etc/keepalived/ /etc/             
[root@server1 init.d]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@server1 init.d]# ln -s /usr/local/keepalived/sbin/keepalived /sbin/
[root@server1 init.d]# ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
[root@server1 keepalived]# cd /usr/local/keepalived/etc/rc.d/init.d/
[root@server1 init.d]# chmod +x keepalived   # make the init script executable
[root@server1 keepalived]# cd /etc/keepalived
[root@server1 keepalived]# vim /etc/keepalived/keepalived.conf

global_defs {

   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }

   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
  #vrrp_strict     
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100
    }
}

virtual_server 172.25.254.100 80 {   ## the virtual server: VIP and port
    delay_loop 6
    lb_algo rr
    lb_kind DR                ## use the DR forwarding model
    #persistence_timeout 50   ## left commented out so the round-robin rotation between the two real servers is easy to see
    protocol TCP
    real_server 172.25.254.1 80 {    ## real server address and port; both real servers listen on port 80
        weight 1                     ## weights can differ; a more powerful server is usually given a higher weight
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
       }
    }
    real_server 172.25.254.241 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
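
After saving the configuration, keepalived can be started on server1; it brings up the VIP and builds the IPVS table on its own. A sketch of the checks (not captured output):
[root@server1 ~]# /etc/init.d/keepalived start
[root@server1 ~]# ip addr show eth0 | grep 172.25.254.100     ## the VIP should be present on eth0
[root@server1 ~]# ipvsadm -Ln                                 ## both real servers should be listed under 172.25.254.100:80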

foundation:

[root@79 ~]# curl 172.25.254.100
server2
[root@79 ~]# curl 172.25.254.100
server3
[root@79 ~]# curl 172.25.254.100
server2
[root@79 ~]# curl 172.25.254.100
server3
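
While keepalived is running, the master's VRRP advertisements (sent to the multicast group 224.0.0.18 once per advert_int second) can be watched from the client; the interface name br0 is taken from the arp output in part I, adjust as needed:
[root@79 ~]# tcpdump -n -i br0 host 224.0.0.18   ## one advertisement per second from the current master
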
3. Keepalived backup configuration

Bring up server4 as the backup server. When the master fails, the backup takes over its work and the failed master is removed from service.

Set up keepalived on server4 following the same steps as above; the keepalived configuration file from server1 can simply be copied over and slightly modified.

[root@server4 keepalived]# vim keepalived.conf   # change state and priority

vrrp_instance VI_1 {
    state BACKUP   # change the state from MASTER to BACKUP
    interface eth0
    virtual_router_id 51
    priority 50    # lower the priority below the master's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100
    }
}

[root@server4 keepalived]# /etc/init.d/keepalived start  
[root@server4 ~]# ip addr add 172.25.254.100/24 dev eth0      # add the VIP to the NIC

server1:

[root@server1 ~]# echo c > /proc/sysrq-trigger  # deliberately crash the kernel to simulate a failure of the master

foundation:

Requests are still served normally:

[root@79 ~]# curl 172.25.254.100
server2
[root@79 ~]# curl 172.25.254.100
server3
[root@79 ~]# curl 172.25.254.100
server2
[root@79 ~]# curl 172.25.254.100
server3
Checking the IP and MAC addresses again shows that the service has been taken over entirely by server4, which gives the LVS cluster high availability.
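
A sketch of how that takeover can be verified (the exact log wording may vary by keepalived version):
[root@79 ~]# arp -d 172.25.254.100 ; curl -s 172.25.254.100 > /dev/null ; arp -e 172.25.254.100   ## the VIP should now resolve to server4's MAC
[root@server4 ~]# grep -i "MASTER STATE" /var/log/messages                                        ## keepalived logs the VRRP state transition
[root@server4 ~]# ipvsadm -Ln                                                                     ## the IPVS table is now maintained on server4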

Background:
LVS supports three load-balancing modes (VS/NAT, VS/TUN, VS/DR).
