ncd-sc System Deployment Plan: Building a Consul HA Cluster [08]

Reference: Consul-based high-availability database architecture

In production, deploy 3 or 5 consul servers:

consul nginx: 172.18.17.169

consul server: 172.18.17.170, 172.18.17.171, 172.18.17.172 (installed on every rancher server node)

consul client: 172.18.17.180, 172.18.17.181, 172.18.17.182 (install one on every rancher worker node; otherwise service health checks will not work properly)

This layout follows the production plan above: we build a 3-node consul server cluster, and the 3 clients are the points through which the container services on the rancher worker nodes register with Consul. The nginx host load-balances the Consul web admin UI.

1. Download and install

The latest Consul release at the time of writing is 1.5.3; simply download the corresponding release archive to each machine and unzip it.

# wget https://releases.hashicorp.com/consul/1.5.3/consul_1.5.3_linux_amd64.zip

# unzip consul_1.5.3_linux_amd64.zip

Assuming everything is unzipped under ~/consul/bin, extraction yields a single executable named consul.
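The archive can also be verified against HashiCorp's published checksums before the files are cleaned up (a SHA256SUMS file is published alongside each release):

# wget https://releases.hashicorp.com/consul/1.5.3/consul_1.5.3_SHA256SUMS
# grep linux_amd64 consul_1.5.3_SHA256SUMS | sha256sum -c -

sha256sum reports OK if the download is intact.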

For convenience, copy it to /usr/local/bin (this step is optional and requires root privileges):

# cp ./consul /usr/local/bin


The downloaded archive and the extracted copy can optionally be removed:
# rm -rf consul consul_1.5.3_linux_amd64.zip

Then verify the installation:

# consul version

Consul v1.5.3

Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)

If a version is printed, the installation works. (Repeat the steps above on every machine.)

On every machine, create the directories that will hold the configuration files and the data:

mkdir /etc/consul.d/ -p && mkdir /data/consul/ -p
mkdir /data/consul/shell -p
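If a host firewall is running, the Consul ports must be open between all six machines. A sketch for firewalld (an assumption; adjust to whatever firewall you use — 8300 is server RPC, 8301/8302 Serf LAN/WAN gossip, 8500 the HTTP API/UI, 8600 DNS):

# firewall-cmd --permanent --add-port=8300/tcp --add-port=8301/tcp --add-port=8301/udp --add-port=8302/tcp --add-port=8302/udp --add-port=8500/tcp --add-port=8600/tcp --add-port=8600/udp
# firewall-cmd --reload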

2. Create the consul server configuration files

Write the parameters into a configuration file. Strictly speaking this is optional, since everything could be passed as command-line flags, but that is harder to manage.

consul server (172.18.17.170, 172.18.17.171, 172.18.17.172) configuration files (see the official documentation or the reference linked above for what each parameter means):

# Server 1
[root@ncd-sc-rancher-server1 ~]# vi /etc/consul.d/server.json
[root@ncd-sc-rancher-server1 ~]# cat /etc/consul.d/server.json
{
  "data_dir": "/data/consul",
  "datacenter": "dc1",
  "log_level": "INFO",
  "server": true,
  "bootstrap_expect": 3,
  "bind_addr": "172.18.17.170",
  "client_addr": "0.0.0.0"
}

# Server 2
[root@ncd-sc-rancher-server2 ~]# vi /etc/consul.d/server.json
[root@ncd-sc-rancher-server2 ~]# cat /etc/consul.d/server.json
{
  "data_dir": "/data/consul",
  "datacenter": "dc1",
  "log_level": "INFO",
  "server": true,
  "bootstrap_expect": 3,
  "bind_addr": "172.18.17.171",
  "client_addr": "0.0.0.0"
}

# Server 3
[root@ncd-sc-rancher-server3 ~]# vi /etc/consul.d/server.json
[root@ncd-sc-rancher-server3 ~]# cat /etc/consul.d/server.json
{
  "data_dir": "/data/consul",
  "datacenter": "dc1",
  "log_level": "INFO",
  "server": true,
  "bootstrap_expect": 3,
  "bind_addr": "172.18.17.172",
  "client_addr": "0.0.0.0"
}

The three files are almost identical; only bind_addr differs, so change it to each server's own IP. These hosts have multiple network interfaces, which is why bind_addr must be set explicitly; a single-interface host could simply bind 0.0.0.0.
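Each file can be sanity-checked before the agent is started:

# consul validate /etc/consul.d/server.json
Configuration is valid!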

3. Start the consul server service

Now start the three consul servers; the command is identical on all three, and the agent log goes to /data/consul/consul.log:

nohup consul agent -config-file=/etc/consul.d/server.json > /data/consul/consul.log 2>&1 &
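nohup is fine for a quick test, but in production the agent should be supervised. A minimal systemd sketch (the unit file and its options are an assumption, not part of the original setup):

# cat /etc/systemd/system/consul.service
[Unit]
Description=Consul agent
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/consul agent -config-file=/etc/consul.d/server.json
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure

[Install]
WantedBy=multi-user.target

# systemctl daemon-reload && systemctl enable --now consul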

Because bootstrap_expect is 3, nothing happens until the servers can see each other. Once all three are up, join the second and third to the first; Raft then elects a leader among them.

On the second server (171), run: consul join -http-addr=172.18.17.171:8500 172.18.17.170

On the third server (172), run: consul join -http-addr=172.18.17.172:8500 172.18.17.170

    Tip: the message "Successfully joined cluster by contacting 1 nodes." means the join succeeded. Cluster state can then be checked from any server node:

[root@ncd-sc-rancher-server2 ~]# consul operator raft list-peers
Node                    ID                                    Address             State     Voter  RaftProtocol
ncd-sc-rancher-server3  11ae4f4e-91e6-3245-4e03-d4d172dea05f  172.18.17.172:8300  follower  true   3
ncd-sc-rancher-server1  6d478f61-d103-7ebc-b5cb-ce3d40785d3f  172.18.17.170:8300  leader    true   3
ncd-sc-rancher-server2  92a0ba12-116c-b12c-4e7c-d530a62427c7  172.18.17.171:8300  follower  true   3
[root@ncd-sc-rancher-server1 ~]# consul members
Node                    Address             Status  Type    Build  Protocol  DC   Segment
ncd-sc-rancher-server1  172.18.17.170:8301  alive   server  1.5.3  2         dc1  <all>
ncd-sc-rancher-server2  172.18.17.171:8301  alive   server  1.5.3  2         dc1  <all>
ncd-sc-rancher-server3  172.18.17.172:8301  alive   server  1.5.3  2         dc1  <all>
[root@ncd-sc-rancher-server1 ~]# 
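The same information is available over the HTTP API, which is handy for scripting; any server's port 8500 answers (output shown is what this cluster should return):

# curl http://172.18.17.170:8500/v1/status/leader
"172.18.17.170:8300"
# curl http://172.18.17.170:8500/v1/status/peers
["172.18.17.170:8300","172.18.17.171:8300","172.18.17.172:8300"]

Incidentally, giving the servers a retry_join list, as the client configs in the next step do, would make the manual consul join step unnecessary.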

4. Start the consul client service

consul client (172.18.17.180, 172.18.17.181, 172.18.17.182) configuration files:

# Client 1
[root@ncd-sc-rancher-node1 ~]# vi /etc/consul.d/client.json
[root@ncd-sc-rancher-node1 ~]# cat /etc/consul.d/client.json
{
  "data_dir": "/data/consul",
  "enable_script_checks": true,
  "bind_addr": "172.18.17.180",
  "client_addr": "0.0.0.0",
  "ui": true,
  "retry_join": ["172.18.17.170", "172.18.17.171", "172.18.17.172"],
  "retry_interval": "30s",
  "rejoin_after_leave": true,
  "start_join": ["172.18.17.170", "172.18.17.171", "172.18.17.172"]
}

# Client 2
[root@ncd-sc-rancher-node2 ~]# vi /etc/consul.d/client.json
[root@ncd-sc-rancher-node2 ~]# cat /etc/consul.d/client.json
{
  "data_dir": "/data/consul",
  "enable_script_checks": true,
  "bind_addr": "172.18.17.181",
  "client_addr": "0.0.0.0",
  "ui": true,
  "retry_join": ["172.18.17.170", "172.18.17.171", "172.18.17.172"],
  "retry_interval": "30s",
  "rejoin_after_leave": true,
  "start_join": ["172.18.17.170", "172.18.17.171", "172.18.17.172"]
}

# Client 3
[root@ncd-sc-rancher-node3 ~]# vi /etc/consul.d/client.json
[root@ncd-sc-rancher-node3 ~]# cat /etc/consul.d/client.json
{
  "data_dir": "/data/consul",
  "enable_script_checks": true,
  "bind_addr": "172.18.17.182",
  "client_addr": "0.0.0.0",
  "ui": true,
  "retry_join": ["172.18.17.170", "172.18.17.171", "172.18.17.172"],
  "retry_interval": "30s",
  "rejoin_after_leave": true,
  "start_join": ["172.18.17.170", "172.18.17.171", "172.18.17.172"]
}

As with the servers, the three files differ only in bind_addr; set it to each node's own IP (again explicit, because the hosts have multiple network interfaces).

bootstrap_expect = 3 tells the servers to wait until 3 of them are present before bootstrapping the cluster; below that count no leader is elected and the cluster will not work properly. (As with ZooKeeper, an odd number of servers is used so a majority is easy to form during elections; Consul uses the Raft algorithm. With 3 servers the quorum is 2, so the cluster survives one server failure.)

data_dir is the directory where the agent stores its data (the directory must already exist).

Now start the three consul clients; again the command is identical on all three:

nohup consul agent -config-file=/etc/consul.d/client.json > /data/consul/consul.log 2>&1 &
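The systemd unit sketched in step 3 works here too; only the -config-file path changes to client.json.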

Check the cluster again, this time from a client node:

[root@ncd-sc-rancher-node1 ~]# consul members
Node                    Address             Status  Type    Build  Protocol  DC   Segment
ncd-sc-rancher-server1  172.18.17.170:8301  alive   server  1.5.3  2         dc1  <all>
ncd-sc-rancher-server2  172.18.17.171:8301  alive   server  1.5.3  2         dc1  <all>
ncd-sc-rancher-server3  172.18.17.172:8301  alive   server  1.5.3  2         dc1  <all>
ncd-sc-rancher-node1    172.18.17.180:8301  alive   client  1.5.3  2         dc1  <default>
ncd-sc-rancher-node2    172.18.17.181:8301  alive   client  1.5.3  2         dc1  <default>
ncd-sc-rancher-node3    172.18.17.182:8301  alive   client  1.5.3  2         dc1  <default>

[root@ncd-sc-rancher-node1 ~]# consul operator raft list-peers
Node                    ID                                    Address             State   Voter  RaftProtocol
ncd-sc-rancher-server2  92a0ba12-116c-b12c-4e7c-d530a62427c7  172.18.17.171:8300  leader  true   3
[root@ncd-sc-rancher-node1 ~]# 
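With enable_script_checks turned on, the container services on the worker nodes can now be registered together with health checks. A minimal sketch of a service definition (the service name web and its check command are hypothetical, not part of this deployment): save it on a client node and reload the agent.

# cat /etc/consul.d/web.json
{
  "service": {
    "name": "web",
    "port": 80,
    "check": {
      "args": ["curl", "-sf", "http://localhost:80/"],
      "interval": "10s",
      "timeout": "3s"
    }
  }
}
# consul reload

The service and its check then show up in the UI and in DNS/API queries on any agent.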

5. Configure the nginx host to load-balance the Consul UI

Now for the web UI: Consul ships with its own UI, which is very lightweight.

On the nginx host (note that the upstream below points at the client agents, which serve the UI on port 8500):

[root@ncd-sc-nginx conf.d]# cat consul.ncd.ltd.conf 
upstream ups_consul {
    least_conn;
    server 172.18.17.180:8500 max_fails=3 fail_timeout=5s;
    server 172.18.17.181:8500 max_fails=3 fail_timeout=5s;
    server 172.18.17.182:8500 max_fails=3 fail_timeout=5s;
}

server {
    listen 80;
    server_name consul.ncd.ltd;

    access_log  /var/log/nginx/consul.ncd.ltd.access.log  main;

    location / {
        proxy_pass       http://ups_consul;
        proxy_set_header Host      $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout   120;
        proxy_send_timeout      120;
        proxy_read_timeout      120;
    }
}
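After placing the file under conf.d, test the configuration and reload nginx (assuming a systemd-managed nginx):

# nginx -t
# systemctl reload nginx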

If http://consul.ncd.ltd opens in a browser and shows the Consul UI, the setup is complete (the domain must resolve to the nginx host, 172.18.17.169).
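If DNS for consul.ncd.ltd is not in place yet, the vhost can be exercised directly with curl by supplying the Host header (a quick check, not part of the original steps):

# curl -H 'Host: consul.ncd.ltd' http://172.18.17.169/v1/status/peers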
