ncd-sc System Deployment Plan: Setting Up Consul HA [08]

See also: "基于Consul的数据库高可用架构" (a Consul-based database high-availability architecture).

In production, deploy 3 or 5 consul server nodes:

consul nginx: 172.18.17.169

consul server: 172.18.17.170, 172.18.17.171, 172.18.17.172 (installed on every rancher server node host)

consul client: 172.18.17.180, 172.18.17.181, 172.18.17.182 (install on every rancher worker node host, however many there are; otherwise service health checks will not work properly)

This layout follows a real production plan: we build a consul server cluster of 3 server nodes, the 3 clients on the rancher worker nodes register their container services with the consul cluster, and the nginx host load-balances Consul's web admin UI.

1. Download and install

At the time of writing, the latest Consul release is 1.5.3. Just download the corresponding release archive to each machine and unzip it.

# wget https://releases.hashicorp.com/consul/1.5.3/consul_1.5.3_linux_amd64.zip

# unzip consul_1.5.3_linux_amd64.zip
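Optionally, verify the archive first; HashiCorp publishes a SHA256SUMS file next to each release (a sketch, assuming the standard release layout):

# wget https://releases.hashicorp.com/consul/1.5.3/consul_1.5.3_SHA256SUMS
# grep linux_amd64 consul_1.5.3_SHA256SUMS | sha256sum -c -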

Assume everything is unpacked under ~/consul/bin; the archive contains a single executable named consul.

For convenience, copy it to /usr/local/bin (this step is optional and requires root):

# cp ./consul /usr/local/bin


The downloaded archive and the unpacked copy can now be deleted, or kept:
# rm -rf consul consul_1.5.3_linux_amd64.zip

Then check that the installation worked:

# consul version

Consul v1.5.3

Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)

If the version is printed, the install is good. (Repeat the steps above on every machine until all of them are installed.)

On every machine, create the directories that will hold the configuration files and the data:

mkdir /etc/consul.d/ -p && mkdir /data/consul/ -p
mkdir /data/consul/shell -p

2. Create the consul server configuration files

Write the relevant parameters into a configuration file. Strictly speaking you could skip the file and pass everything on the command line, but that is harder to manage.
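For reference, the same settings can be passed as flags; the first server's file below is roughly equivalent to this one-liner (a sketch, not needed if you use the config file):

consul agent -server -bootstrap-expect=3 -datacenter=dc1 -data-dir=/data/consul -bind=172.18.17.170 -client=0.0.0.0 -log-level=INFO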

Configuration files for the consul servers (172.18.17.170, 172.18.17.171, 172.18.17.172); for the meaning of each parameter, see the official docs or the reference link at the top of this article:

# First server (170)
[root@ncd-sc-rancher-server1 ~]# vi /etc/consul.d/server.json
[root@ncd-sc-rancher-server1 ~]# cat /etc/consul.d/server.json
{
  "data_dir": "/data/consul",
  "datacenter": "dc1",
  "log_level": "INFO",
  "server": true,
  "bootstrap_expect": 3,
  "bind_addr": "172.18.17.170",
  "client_addr": "0.0.0.0"
}

# Second server (171)
[root@ncd-sc-rancher-server2 ~]# vi /etc/consul.d/server.json
[root@ncd-sc-rancher-server2 ~]# cat /etc/consul.d/server.json
{
  "data_dir": "/data/consul",
  "datacenter": "dc1",
  "log_level": "INFO",
  "server": true,
  "bootstrap_expect": 3,
  "bind_addr": "172.18.17.171",
  "client_addr": "0.0.0.0"
}

# Third server (172)
[root@ncd-sc-rancher-server3 ~]# vi /etc/consul.d/server.json
[root@ncd-sc-rancher-server3 ~]# cat /etc/consul.d/server.json
{
  "data_dir": "/data/consul",
  "datacenter": "dc1",
  "log_level": "INFO",
  "server": true,
  "bootstrap_expect": 3,
  "bind_addr": "172.18.17.172",
  "client_addr": "0.0.0.0"
}

The three configuration files are almost identical; the only difference is bind_addr, which you should change to each server's own IP. These hosts have multiple NICs, so bind_addr must be set explicitly; on a single-NIC host you could leave it at 0.0.0.0.
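If you want a single config file that works on every host, Consul's bind_addr also accepts go-sockaddr templates (a sketch; the interface name eth0 is an assumption, replace it with your NIC):

{
  "bind_addr": "{{ GetInterfaceIP \"eth0\" }}"
}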

3. Start the consul server service

Now start the 3 consul servers; the start command is identical on all 3. Afterwards you can check the log on any one of them:

nohup consul agent -config-dir=/etc/consul.d/ > /data/consul/consul.log 2>&1 &
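If you prefer a supervised process over nohup, a minimal systemd unit is sketched below (the unit name consul.service and the /usr/local/bin path are assumptions matching the install step above):

# cat /etc/systemd/system/consul.service
[Unit]
Description=Consul agent
After=network-online.target

[Service]
ExecStart=/usr/local/bin/consul agent -config-dir=/etc/consul.d/
Restart=on-failure

[Install]
WantedBy=multi-user.target

# systemctl daemon-reload && systemctl enable --now consul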

Because bootstrap_expect=3, once all 3 servers are up the other 2 must be joined to the first; when quorum is reached, Raft elects a leader (in the output below the first node became leader and the other two became followers).

On the second server (171), run: consul join -http-addr=172.18.17.171:8500 172.18.17.170

On the third server (172), run: consul join -http-addr=172.18.17.172:8500 172.18.17.170

    Tip: if you see "Successfully joined cluster by contacting 1 nodes.", the join succeeded. Check the cluster state from any server node:

[root@ncd-sc-rancher-server2 ~]# consul operator raft list-peers
Node                    ID                                    Address             State     Voter  RaftProtocol
ncd-sc-rancher-server3  11ae4f4e-91e6-3245-4e03-d4d172dea05f  172.18.17.172:8300  follower  true   3
ncd-sc-rancher-server1  6d478f61-d103-7ebc-b5cb-ce3d40785d3f  172.18.17.170:8300  leader    true   3
ncd-sc-rancher-server2  92a0ba12-116c-b12c-4e7c-d530a62427c7  172.18.17.171:8300  follower  true   3
[root@ncd-sc-rancher-server1 ~]# consul members
Node                    Address             Status  Type    Build  Protocol  DC   Segment
ncd-sc-rancher-server1  172.18.17.170:8301  alive   server  1.5.3  2         dc1  <all>
ncd-sc-rancher-server2  172.18.17.171:8301  alive   server  1.5.3  2         dc1  <all>
ncd-sc-rancher-server3  172.18.17.172:8301  alive   server  1.5.3  2         dc1  <all>
[root@ncd-sc-rancher-server1 ~]# 
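The leader can also be read from the HTTP status API on any agent (illustrative output, matching the election above):

# curl -s http://172.18.17.170:8500/v1/status/leader
"172.18.17.170:8300"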

4. Start the consul client service

Configuration files for the consul clients (172.18.17.180, 172.18.17.181, 172.18.17.182):

# First client (180)
[root@ncd-sc-rancher-node1 ~]# vi /etc/consul.d/client.json
[root@ncd-sc-rancher-node1 ~]# cat /etc/consul.d/client.json
{
  "data_dir": "/data/consul",
  "enable_script_checks": true,
  "bind_addr": "172.18.17.180",
  "client_addr": "0.0.0.0",
  "ui":true,
  "retry_join": ["172.18.17.170"],
  "retry_join": ["172.18.17.171"],
  "retry_join": ["172.18.17.172"],
  "retry_interval": "30s",
  "rejoin_after_leave": true,
  "start_join": ["172.18.17.170"],
  "start_join": ["172.18.17.171"],
  "start_join": ["172.18.17.172"]
}

# Second client (181)
[root@ncd-sc-rancher-node2 ~]# vi /etc/consul.d/client.json
[root@ncd-sc-rancher-node2 ~]# cat /etc/consul.d/client.json
{
  "data_dir": "/data/consul",
  "enable_script_checks": true,
  "bind_addr": "172.18.17.181",
  "client_addr": "0.0.0.0",
  "ui":true,
  "retry_join": ["172.18.17.170"],
  "retry_join": ["172.18.17.171"],
  "retry_join": ["172.18.17.172"],
  "retry_interval": "30s",
  "rejoin_after_leave": true,
  "start_join": ["172.18.17.170"],
  "start_join": ["172.18.17.171"],
  "start_join": ["172.18.17.172"]
}

# Third client (182)
[root@ncd-sc-rancher-node3 ~]# vi /etc/consul.d/client.json
[root@ncd-sc-rancher-node3 ~]# cat /etc/consul.d/client.json
{
  "data_dir": "/data/consul",
  "enable_script_checks": true,
  "bind_addr": "172.18.17.182",
  "client_addr": "0.0.0.0",
  "ui":true,
  "retry_join": ["172.18.17.170"],
  "retry_join": ["172.18.17.171"],
  "retry_join": ["172.18.17.172"],
  "retry_interval": "30s",
  "rejoin_after_leave": true,
  "start_join": ["172.18.17.170"],
  "start_join": ["172.18.17.171"],
  "start_join": ["172.18.17.172"]
}

As with the servers, the three client files differ only in bind_addr; change it to each host's own IP. These hosts have multiple NICs, so bind_addr must be set explicitly; on a single-NIC host you could leave it at 0.0.0.0.

-bootstrap-expect=3 means the server cluster expects a minimum of 3 nodes and will not operate normally below that. (Note: as with ZooKeeper, the cluster size is usually odd to make leader election straightforward; Consul uses the Raft algorithm, whose quorum is floor(N/2)+1, so a 3-node cluster tolerates 1 failed server and a 5-node cluster tolerates 2.)

-data-dir specifies the data directory (the directory must already exist).

Now start the 3 consul clients; the start command is identical on all 3. Afterwards you can check the log on any one of them:

nohup consul agent -config-dir=/etc/consul.d/ > /data/consul/consul.log 2>&1 &
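With enable_script_checks set to true, the clients can run script health checks for the services registered on them. A minimal sketch of a service definition follows (the service name web, port 8080, and the /health URL are hypothetical placeholders for your actual rancher workloads); drop it into /etc/consul.d/ and reload the agent:

# cat /etc/consul.d/web.json
{
  "service": {
    "name": "web",
    "port": 8080,
    "check": {
      "args": ["curl", "-sf", "http://127.0.0.1:8080/health"],
      "interval": "10s",
      "timeout": "3s"
    }
  }
}

# consul reload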

Run the following on a client node to inspect the cluster again:

[root@ncd-sc-rancher-node1 ~]# consul members
Node                    Address             Status  Type    Build  Protocol  DC   Segment
ncd-sc-rancher-server1  172.18.17.170:8301  alive   server  1.5.3  2         dc1  <all>
ncd-sc-rancher-server2  172.18.17.171:8301  alive   server  1.5.3  2         dc1  <all>
ncd-sc-rancher-server3  172.18.17.172:8301  alive   server  1.5.3  2         dc1  <all>
ncd-sc-rancher-node1    172.18.17.180:8301  alive   client  1.5.3  2         dc1  <default>
ncd-sc-rancher-node2    172.18.17.181:8301  alive   client  1.5.3  2         dc1  <default>
ncd-sc-rancher-node3    172.18.17.182:8301  alive   client  1.5.3  2         dc1  <default>

[root@ncd-sc-rancher-node1 ~]# consul operator raft list-peers
Node                    ID                                    Address             State   Voter  RaftProtocol
ncd-sc-rancher-server2  92a0ba12-116c-b12c-4e7c-d530a62427c7  172.18.17.171:8300  leader  true   3
[root@ncd-sc-rancher-node1 ~]# 
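Each agent also answers DNS queries on port 8600; the built-in consul service resolves to the server nodes, which makes for a quick sanity check:

# dig @127.0.0.1 -p 8600 consul.service.consul +short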

5. Configure the nginx host to load-balance Consul and expose the web UI

Now for the web UI. Consul ships with a built-in UI that is very lightweight.

On the nginx host:

[root@ncd-sc-nginx conf.d]# cat consul.ncd.ltd.conf 
upstream ups_consul {
    least_conn;
    server 172.18.17.180:8500 max_fails=3 fail_timeout=5s;
    server 172.18.17.181:8500 max_fails=3 fail_timeout=5s;
    server 172.18.17.182:8500 max_fails=3 fail_timeout=5s;
}

server {
    listen 80;
    server_name consul.ncd.ltd;

    access_log /var/log/nginx/consul.ncd.ltd.access.log main;

    location / {
        proxy_pass              http://ups_consul;
        proxy_set_header        Host $host;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout   120;
        proxy_send_timeout      120;
        proxy_read_timeout      120;
    }
}
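After saving the file, test the configuration and reload nginx:

# nginx -t
# nginx -s reload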

If http://consul.ncd.ltd opens in the browser, the setup is complete.
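You can also confirm from the shell that nginx is proxying the Consul HTTP API (this should list all six nodes):

# curl -s http://consul.ncd.ltd/v1/catalog/nodes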
