LVS DR Mode + keepalived for High Availability

I. Background and LVS Overview

Background: servers must handle large numbers of concurrent requests, so on a heavily loaded machine CPU and I/O capacity quickly become the bottleneck. Since a single server's performance is always finite, simply upgrading the hardware does not really solve the problem. Instead, multiple servers plus load balancing (several machines presented as one virtual server) are used to absorb the concurrent load, giving a solution whose capacity is easy to scale and whose price is low.

Components:

  1. Director (the scheduler)
  2. Real servers

How LVS-DR routes a request:

 

  1. The client sends a request to the load balancer (Director Server); the request is passed into kernel space, where the kernel's netfilter hooks examine it.
  2. The PREROUTING chain receives the packet first and checks whether the destination address is an IP configured on the load balancer; since the VIP is, the packet is handed to the INPUT chain.
  3. IPVS works on the INPUT chain. When the request reaches INPUT, IPVS compares it against the cluster services that have been defined; on a match, IPVS in DR mode leaves the destination IP and port untouched, rewrites only the destination MAC address to that of the real server it schedules, and hands the frame to the POSTROUTING chain.
  4. POSTROUTING sees that the frame is now addressed to a back-end server on the local network and, after route selection, transmits it there directly. The real server also holds the VIP, so it accepts the packet and replies to the client directly, without passing back through the Director.
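Before adding any rules, it can help to confirm that the IPVS kernel module is actually available; a minimal check, assuming a stock RHEL 6.5 kernel:

[root@server1 ~]# modprobe ip_vs          # load the IPVS core module if not already loaded
[root@server1 ~]# lsmod | grep ip_vs      # confirm it is present
[root@server1 ~]# cat /proc/net/ip_vs     # the kernel exposes the current IPVS table here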

II. Configuring LVS-DR

Test environment: RHEL 6.5

Lab hosts:

Host IP          Hostname       Role
172.25.254.1     server1        Director Server
172.25.254.2     server2        Real Server
172.25.254.3     server3        Real Server
172.25.254.61    foundation61   Test client

Preparation

Install httpd on server2 and server3, create distinct test pages, and add local name resolution, so that test results are easy to tell apart.
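A minimal sketch of that preparation (the page contents are assumptions, chosen to match the curl output shown later):

[root@server2 ~]# yum install -y httpd
[root@server2 ~]# echo "<h1>www.westos.org-server2</h1>" > /var/www/html/index.html
[root@server3 ~]# yum install -y httpd
[root@server3 ~]# echo "<h1>bbs.westos.org-server3</h1>" > /var/www/html/index.html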

1. Configure yum repositories on server1 so that more packages, including the LoadBalancer and HighAvailability channels, can be installed:
[root@server1 ~]# vim /etc/yum.repos.d/rhel-source.repo

[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.254.61/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.254.61/rhel6.5/LoadBalancer
gpgcheck=0

[HighAvailability]
name=HighAvailability
baseurl=http://172.25.254.61/rhel6.5/HighAvailability
gpgcheck=0

2. On server1, install ipvsadm, the command-line tool for managing cluster services and adding LVS rules:

 yum install -y ipvsadm

3. Check local name resolution:

vim /etc/hosts
172.25.254.1  server1 www.westos.org westos.org bbs.westos.org www.linux.org

Add a virtual IP, 172.25.254.100, on the Director (check the NIC with ip addr):

[root@server1 ~]# ip addr add 172.25.254.100/24 dev eth0

Define a virtual service (-A adds a virtual service, -t gives the TCP address VIP:port, -s picks the scheduling algorithm; rr is round robin):

[root@server1 ~]# ipvsadm -A -t 172.25.254.100:80 -s rr

Add the back-end real servers, here server2 and server3 (-r names the real server, -g selects gateway/direct-routing mode):

[root@server1 ~]# ipvsadm -a -t 172.25.254.100:80 -r 172.25.254.2:80 -g
[root@server1 ~]# ipvsadm -a -t 172.25.254.100:80 -r 172.25.254.3:80 -g
[root@server1 ~]# ipvsadm -ln     # check that the rules were added
[root@server1 ~]# ipvsadm -lnc    # list the current connections
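If the rules went in correctly, the listing should look roughly like this (a sketch; the counters will vary):

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.254.100:80 rr
  -> 172.25.254.2:80              Route   1      0          0
  -> 172.25.254.3:80              Route   1      0          0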

4. On server2 and server3, start httpd and add the virtual IP:

[root@server2 html]# /etc/init.d/httpd  restart
[root@server2 html]# ip addr add 172.25.254.100/32 dev eth0
[root@server2 html]# yum install -y arptables_jf   # tool for filtering ARP traffic

Drop all incoming ARP requests for the VIP, so server2 never answers ARP for 172.25.254.100:

[root@server2 html]# arptables -A IN -d 172.25.254.100 -j DROP

When server2 itself emits ARP traffic for the VIP, rewrite the source to its real IP:

[root@server2 html]# arptables -A OUT -s 172.25.254.100 -j mangle --mangle-ip-s 172.25.254.2

Save the rules:

[root@server2 html]# /etc/init.d/arptables_jf save

server3 gets the same treatment as server2:

[root@server3 ~]# /etc/init.d/httpd start
[root@server3 ~]# ip addr add 172.25.254.100/32 dev eth0
[root@server3 ~]# yum install -y arptables_jf
[root@server3 ~]#  arptables -A IN -d 172.25.254.100 -j DROP
[root@server3 ~]# arptables -A OUT -s 172.25.254.100 -j mangle --mangle-ip-s 172.25.254.3
[root@server3 ~]# /etc/init.d/arptables_jf save
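As an aside, the same ARP suppression is often achieved with kernel parameters instead of arptables_jf; a minimal sketch, assuming the VIP is bound to lo rather than eth0:

[root@server2 ~]# ip addr add 172.25.254.100/32 dev lo
[root@server2 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore     # reply only to ARP for addresses on the receiving interface
[root@server2 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce   # use the best local address, never the VIP, as ARP source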

Test:
If the test client cannot reach the VIP while the real servers respond on their own addresses, the cause is that server2/server3 are missing the VIP (ip addr should show .100):

[root@foundation61 rhel6.5]# curl 172.25.254.100
curl: (7) Failed connect to 172.25.254.100:80; No route to host

When the client resolves the VIP straight to one real server instead of going through server1, there is no round robin; every request lands on the same back end. This is exactly the situation arptables_jf addresses:

[root@foundation61 ~]# curl 172.25.254.100
<h1>bbs.westos.org-server3</h1>
[root@foundation61 ~]# curl 172.25.254.100
<h1>bbs.westos.org-server3</h1>

Flush the client's ARP cache (arp -d) and round robin appears.
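A sketch of that fix on the test client, assuming a stale ARP entry is the cause:

[root@foundation61 ~]# arp -d 172.25.254.100    # flush the stale ARP entry for the VIP
[root@foundation61 ~]# curl 172.25.254.100      # responses now alternate between server2 and server3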

III. Health Checks under DR

If we stop httpd on server2 by hand, clients do not know and are still scheduled to server2, because LVS itself does not monitor the real servers. So we add a health check: it watches the state of the back-end servers, automatically removes a failed server so that requests go only to healthy ones, and if every real server is down, returns an error page from a fallback server.


1. Install the package ldirectord-3.9.5-3.1.x86_64.rpm, which performs the health checks against the back ends:

[root@server1 pub]# yum install ldirectord-3.9.5-3.1.x86_64.rpm -y
[root@server1 pub]# rpm -ql ldirectord
/etc/ha.d
/etc/ha.d/resource.d
/etc/ha.d/resource.d/ldirectord
/etc/init.d/ldirectord
/etc/logrotate.d/ldirectord
/usr/lib/ocf/resource.d/heartbeat/ldirectord
/usr/sbin/ldirectord
/usr/share/doc/ldirectord-3.9.5
/usr/share/doc/ldirectord-3.9.5/COPYING
/usr/share/doc/ldirectord-3.9.5/ldirectord.cf
/usr/share/man/man8/ldirectord.8.gz

2. Copy the sample configuration file shipped with the package to the directory where ldirectord looks for it:

[root@server1 pub]# cp /usr/share/doc/ldirectord-3.9.5/ldirectord.cf /etc/ha.d
[root@server1 pub]# cd /etc/ha.d

3. Edit the configuration file:

[root@server1 ha.d]# vim ldirectord.cf
virtual=172.25.254.100:80            # the virtual service (VIP:port)
        real=172.25.254.2:80 gate    # back-end real server, "gate" = DR mode
        real=172.25.254.3:80 gate    # back-end real server
        fallback=127.0.0.1:80 gate   # if all real servers are down, the local host takes over
        service=http
        scheduler=rr                 # round-robin scheduling
        #persistent=600
        #netmask=255.255.255.255
        protocol=tcp
        checktype=negotiate
        checkport=80
        request="index.html"
        #receive="Test Page"
        #virtualhost=www.x.y.z
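For the fallback entry to actually serve a page, server1 itself needs httpd listening on 127.0.0.1; a minimal sketch (the page text is an assumption):

[root@server1 ~]# yum install -y httpd
[root@server1 ~]# echo "site under maintenance" > /var/www/html/index.html
[root@server1 ~]# /etc/init.d/httpd start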

4. Start the health-check service:

[root@server1 ha.d]# /etc/init.d/ldirectord start
Starting ldirectord... success

Clear the previously added ipvsadm rules (ipvsadm -C); from now on ldirectord maintains the table itself.
Test: the first few requests round-robin correctly; after httpd on server2 is stopped by hand, the health check takes effect and every request is served by server3.
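A sketch of that test, using the pages from the preparation step:

[root@server2 ~]# /etc/init.d/httpd stop        # simulate a back-end failure
[root@server1 ~]# ipvsadm -ln                   # server2 has been dropped from the table
[root@foundation61 ~]# curl 172.25.254.100      # every response now comes from server3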

IV. High Availability (keepalived)

1. On the virtual server (server1), unpack the keepalived source, install the build dependencies, and configure the build:

[root@server1 pub]# tar zxf keepalived-2.0.6.tar.gz
[root@server1 pub]# cd keepalived-2.0.6
[root@server1 keepalived-2.0.6]# yum install -y openssl-devel.x86_64 gcc   # build dependencies for keepalived
[root@server1 keepalived-2.0.6]# ./configure --prefix=/usr/local/keepalived --with-init=SYSV

2. Build and install:

[root@server1 keepalived-2.0.6]# make
[root@server1 keepalived-2.0.6]# make install

3. Create symlinks so keepalived can be managed and configured like a normal service, and make the init script executable:

[root@server1 init.d]# ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
[root@server1 sysconfig]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@server1 etc]# ln -s /usr/local/keepalived/etc/keepalived/ /etc/
[root@server1 keepalived]# ln -s /usr/local/keepalived/sbin/keepalived /sbin/
[root@server1 ~]# chmod +x /usr/local/keepalived/etc/rc.d/init.d/keepalived

4. Send the compiled tree from server1 to server4, and create the same symlinks on server4:

[root@server1 local]# scp -r keepalived/ [email protected]:/usr/local
server4:
[root@server4 keepalived]# ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
[root@server4 keepalived]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@server4 keepalived]#  ln -s /usr/local/keepalived/etc/keepalived/ /etc/
[root@server4 keepalived]# ln -s /usr/local/keepalived/sbin/keepalived /sbin/
[root@server4 keepalived]# ll /etc/init.d/keepalived
lrwxrwxrwx 1 root root 48 Sep 23 14:24 /etc/init.d/keepalived -> /usr/local/keepalived/etc/rc.d/init.d/keepalived
[root@server4 keepalived]# ll /usr/local/keepalived/etc/keepalived/
total 8
-rw-r--r-- 1 root root 3550 Sep 23 14:23 keepalived.conf
drwxr-xr-x 2 root root 4096 Sep 23 14:23 samples
[root@server4 keepalived]# ll /usr/local/keepalived/etc/rc.d/init.d/keepalived
-rwxr-xr-x 1 root root 1308 Sep 23 14:23 /usr/local/keepalived/etc/rc.d/init.d/keepalived

Problem: requests time out; after the virtual machine was rebooted, the virtual IP no longer worked.
[root@foundation61 lvs]# curl 172.25.254.100
First check connectivity with ping, then inspect the client's ARP cache:
[root@foundation61 lvs]# arp -an |grep 100
? (172.25.254.100) at <incomplete> on br0
? (172.25.254.100) at 52:54:00:3b:b0:e6 [ether] on br0
The cached MAC address shows which machine is answering for the VIP.
Solution: re-add the VIP on the real server:
[root@server2 ~]# ip addr add 172.25.254.100/32 dev eth0
Then access again:
[root@foundation254 lvs]# curl 172.25.254.100
<h1>bbs.westos.org-server3</h1>
[root@foundation254 lvs]# curl 172.25.254.100
<h1>www.westos.org-server2</h1>
Master/backup: server1 and server4 will back each other up. To demonstrate this, add a second service (FTP) with its own VIP, 172.25.254.200, on the real servers:

[root@server2 ~]# yum install vsftpd -y
[root@server2 ~]# /etc/init.d/vsftpd start
[root@server2 ftp]# touch server2
[root@server2 ftp]# ip addr add 172.25.254.200/32 dev eth0
[root@server2 ftp]# vim /etc/sysconfig/arptables
[0:0] -A IN -d 172.25.254.100 -j DROP
[0:0] -A IN -d 172.25.254.200 -j DROP
[0:0] -A OUT -s 172.25.254.100 -j mangle --mangle-ip-s 172.25.254.2
[0:0] -A OUT -s 172.25.254.200 -j mangle --mangle-ip-s 172.25.254.2
[root@server2 ftp]# /etc/init.d/arptables_jf restart
[root@server2 ftp]# arptables -L


Apply the same steps on server3.

On server1, edit the configuration file:

vim /usr/local/keepalived/etc/keepalived/keepalived.conf

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 254
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100
        172.25.254.200
    }
}

virtual_server 172.25.254.200 21 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 172.25.254.2 21 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.254.3 21 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}
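The persistence_timeout matters here: FTP opens separate control and data connections, and persistence keeps a given client pinned to one real server so that both connections land in the same place.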

server1 and server4 act as mutual master/backup:

server1:
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 254   # must be unique per VRRP instance
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 154   # must differ from VI_1; VRIDs are limited to 1-255
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.200
    }
}


scp the file to server4, where the roles are inverted (VI_1 BACKUP, VI_2 MASTER):

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 254
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 154   # matches VI_2 on server1; VRIDs are limited to 1-255
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.200
    }
}
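With both nodes configured, start keepalived on server1 and server4 and check that each MASTER instance holds its VIP; a sketch of the verification and of a simple failover test, assuming the configurations above:

[root@server1 ~]# /etc/init.d/keepalived start
[root@server4 ~]# /etc/init.d/keepalived start
[root@server1 ~]# ip addr show eth0             # 172.25.254.100 should appear here (VI_1 MASTER)
[root@server4 ~]# ip addr show eth0             # 172.25.254.200 should appear here (VI_2 MASTER)
[root@server1 ~]# /etc/init.d/keepalived stop   # simulate a failure of server1
[root@server4 ~]# ip addr show eth0             # both VIPs should now sit on server4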

Test: access the FTP VIP from the client with lftp:

[root@foundation61 lvs]# lftp 172.25.254.200
lftp 172.25.254.200:~> ls
Interrupt                                    
lftp 172.25.254.200:~> lftp 172.25.254.200
lftp 172.25.254.200:~> exit
[root@foundation61 lvs]# lftp 172.25.254.200
lftp 172.25.254.200:~> ls
drwxr-xr-x    2 0        0            4096 Feb 12  2013 pub
-rw-r--r--    1 0        0               0 Sep 23 08:13 server2
lftp 172.25.254.200:/>
lftp 172.25.254.200:/> exit
[root@foundation61 lvs]# lftp 172.25.254.200
lftp 172.25.254.200:~> ls              
drwxr-xr-x    2 0        0            4096 Feb 12  2013 pub
-rw-r--r--    1 0        0               0 Sep 23 08:12 server3
lftp 172.25.254.200:/>
lftp 172.25.254.200:/> exit

For more detail on the underlying theory, see: http://www.mamicode.com/info-detail-1488579.html
