Consul4 - Installing Consul on Linux and Building a Cluster

          The previous few Consul articles briefly covered installing Consul on Windows and using it as a service registry and as a configuration center. Building on that, this article covers installing Consul on Linux and then building a Consul cluster with Docker, which solves the problem of Consul configuration data not being persisted.

1. Download Consul

https://www.consul.io/downloads.html

Select the Linux version of Consul to download.

 

 

2. Unpack and install

Copy the downloaded Consul package to the Linux machine and unpack it with unzip:

If the unzip command is not available on the machine, install it first with yum install unzip.
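A minimal sketch of the whole unpack step, combining the install and the copy into /usr/local/consul described below (the archive name assumes the v1.2.1 linux_amd64 release; adjust it to the file you actually downloaded):

# install unzip if it is missing
yum install -y unzip
# unpack straight into the target directory; the zip contains a single consul binary
mkdir -p /usr/local/consul
unzip consul_1.2.1_linux_amd64.zip -d /usr/local/consul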

 

1. After unpacking, copy the extracted consul binary to /usr/local/consul

2. Configure the environment variables

vi /etc/profile

The configuration looks like this:

export JAVA_HOME=/usr/local/jdk1.8.0_172
export MAVEN_HOME=/usr/local/apache-maven-3.5.4
export CONSUL_HOME=/usr/local/consul

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:$CONSUL_HOME:$PATH

CONSUL_HOME above is the Consul installation path; the JAVA_HOME and MAVEN_HOME lines are just there for reference.

After making the changes, save and exit, then apply the new configuration with:

source /etc/profile

   With this in place, the consul command can be run conveniently from anywhere.
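A quick way to confirm the PATH change took effect:

# should print the binary's location, e.g. /usr/local/consul/consul
which consul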

 

3. Verify the installation

1. Check the installed Consul version

[root@iZbp1dmlbagds9s70r8luxZ local]# consul -v
Consul v1.2.1
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
[root@iZbp1dmlbagds9s70r8luxZ local]# 

 

2. Start Consul in development mode

[root@iZbp1dmlbagds9s70r8luxZ local]# consul agent -dev
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v1.2.1'
           Node ID: '344af5b1-8914-41d6-f7b2-3143d025f493'
         Node name: 'iZbp1dmlbagds9s70r8luxZ'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: false)
       Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2018/07/28 09:57:02 [DEBUG] agent: Using random ID "344af5b1-8914-41d6-f7b2-3143d025f493" as node ID
    2018/07/28 09:57:02 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:344af5b1-8914-41d6-f7b2-3143d025f493 Address:127.0.0.1:8300}]
    2018/07/28 09:57:02 [INFO] serf: EventMemberJoin: iZbp1dmlbagds9s70r8luxZ.dc1 127.0.0.1
    2018/07/28 09:57:02 [INFO] serf: EventMemberJoin: iZbp1dmlbagds9s70r8luxZ 127.0.0.1
    2018/07/28 09:57:02 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
    2018/07/28 09:57:02 [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state (Leader: "")
    2018/07/28 09:57:02 [INFO] consul: Adding LAN server iZbp1dmlbagds9s70r8luxZ (Addr: tcp/127.0.0.1:8300) (DC: dc1)
    2018/07/28 09:57:02 [INFO] consul: Handled member-join event for server "iZbp1dmlbagds9s70r8luxZ.dc1" in area "wan"
    2018/07/28 09:57:02 [DEBUG] agent/proxy: managed Connect proxy manager started
    2018/07/28 09:57:02 [WARN] agent/proxy: running as root, will not start managed proxies
    2018/07/28 09:57:02 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
    2018/07/28 09:57:02 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
    2018/07/28 09:57:02 [INFO] agent: started state syncer
    2018/07/28 09:57:02 [WARN] raft: Heartbeat timeout from "" reached, starting election
    2018/07/28 09:57:02 [INFO] raft: Node at 127.0.0.1:8300 [Candidate] entering Candidate state in term 2
    2018/07/28 09:57:02 [DEBUG] raft: Votes needed: 1
    2018/07/28 09:57:02 [DEBUG] raft: Vote granted from 344af5b1-8914-41d6-f7b2-3143d025f493 in term 2. Tally: 1
    2018/07/28 09:57:02 [INFO] raft: Election won. Tally: 1
    2018/07/28 09:57:02 [INFO] raft: Node at 127.0.0.1:8300 [Leader] entering Leader state
    2018/07/28 09:57:02 [INFO] consul: cluster leadership acquired
    2018/07/28 09:57:02 [INFO] consul: New leader elected: iZbp1dmlbagds9s70r8luxZ
    2018/07/28 09:57:02 [INFO] connect: initialized CA with provider "consul"
    2018/07/28 09:57:02 [DEBUG] consul: Skipping self join check for "iZbp1dmlbagds9s70r8luxZ" since the cluster is too small
    2018/07/28 09:57:02 [INFO] consul: member 'iZbp1dmlbagds9s70r8luxZ' joined, marking health alive
    2018/07/28 09:57:02 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
    2018/07/28 09:57:02 [INFO] agent: Synced node info
    2018/07/28 09:57:04 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
    2018/07/28 09:57:04 [DEBUG] agent: Node info in sync
    2018/07/28 09:57:04 [DEBUG] agent: Node info in sync

 

    As the output above shows, the HTTP port is 8500, the DNS port is 8600, and the agent is bound to the local IP 127.0.0.1. If Consul's ports need to be reachable from outside, ports 8500 and 8600 must be opened, for example:

# Check whether the port is already open (CentOS)
firewall-cmd --permanent --query-port=8500/tcp
# Open the port to external access
firewall-cmd --permanent --add-port=8500/tcp

# Reload the firewall to apply the change
firewall-cmd --reload
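Port 8600 can be opened the same way. The startup log above shows Consul serving DNS over both UDP and TCP, so open both protocols:

firewall-cmd --permanent --add-port=8600/tcp
firewall-cmd --permanent --add-port=8600/udp
firewall-cmd --reload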

 

4. Consul agent parameters

Reference: see the reference link.

Common consul agent options:

-data-dir:
Purpose: the directory where the agent stores its state. All agents need it, and it is especially important for servers, since they must persist the cluster state.

-config-dir:
Purpose: the directory holding service definitions and health-check definitions, conventionally named consul.d; the files are JSON. See the official docs for the full configuration reference, and the sketch after this list for a small example.

-config-file:
Purpose: a single configuration file to load.

-dev:
Purpose: development mode. The agent runs as a server but is not suitable for production, because nothing is persisted, i.e. no data is written to disk.

-bootstrap-expect:
Purpose: the number of server nodes expected before a leader election begins. Set to 1, a single node can elect itself; set to 3, the election waits until 3 servers are running and have joined, and only then does the cluster become operational. 3 to 5 server nodes are generally recommended.

-node:
Purpose: the node's name in the cluster, which must be unique (the default is the machine's hostname); the machine's IP is often used directly.

-bind:
Purpose: the IP address the node binds to, usually 0.0.0.0 or the cloud server's private address; an Alibaba Cloud public address cannot be bound. This is the address Consul listens on, and it must be reachable by all other nodes in the cluster. Setting it is not strictly required, but it is best to provide one.

-server:
Purpose: run the node as a server; 3 to 5 servers per datacenter (DC) are recommended.

-client:
Purpose: the bind address for the node's client interfaces, including HTTP, DNS, and RPC. It defaults to 127.0.0.1, which only allows loopback access.

-datacenter:
Purpose: the datacenter this node joins. Older versions called this -dc; -dc is no longer valid.
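As a small illustration of -config-dir, here is a sketch that registers a service from a JSON definition; the service name "web" and port 80 are made-up values:

# create the conventional configuration directory
mkdir -p /etc/consul.d
# a minimal JSON service definition (name and port are hypothetical)
cat > /etc/consul.d/web.json <<'EOF'
{
  "service": {
    "name": "web",
    "port": 80
  }
}
EOF
# start an agent that loads every definition in the directory
consul agent -dev -config-dir=/etc/consul.d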

 

consul概念:

Agent: Consul集羣中長時間運行的守護進程,以consul agent 命令開始啓動. 在客戶端和服務端模式下都可以運行,可以運行DNS或者HTTP接口, 它的主要作用是運行時檢查和保持服務同步。 
Client: 客戶端, 無狀態, 以一個極小的消耗將接口請求轉發給局域網內的服務端集羣. 
Server: 服務端, 保存配置信息, 高可用集羣, 在局域網內與本地客戶端通訊, 通過廣域網與其他數據中心通訊. 每個數據中心的 server 數量推薦爲 3 個或是 5 個. 
Datacenter: 數據中心,多數據中心聯合工作保證數據存儲安全快捷 
Consensus: 一致性協議使用的是Raft Protocol 
RPC: 遠程程序通信 
Gossip: 基於 Serf 實現的 gossip 協議,負責成員、失敗探測、事件廣播等。通過 UDP 實現各個節點之間的消息。分爲 LAN 上的和 WAN 上的兩種情形。

1. Parameter example

The consul agent -dev instance started earlier cannot be reached from outside the cloud server. To make it externally accessible, add parameters, for example:

consul agent -dev -http-port 8500 -client 0.0.0.0

Parameter notes:

-client 0.0.0.0: bind the client interfaces to all addresses instead of the default 127.0.0.1, so the agent can be reached over the public network

-http-port 8500: this parameter changes the HTTP port Consul listens on
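With -client 0.0.0.0 the HTTP API should answer on the public address; a quick check from another machine (47.98.112.71 is the example server IP used later in this article):

# returns the current leader's address if the agent is reachable
curl http://47.98.112.71:8500/v1/status/leader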

 

2. View the Consul cluster members:

[root@iZbp1dmlbagds9s70r8luxZ local]# consul members
Node                     Address         Status  Type    Build  Protocol  DC   Segment
iZbp1dmlbagds9s70r8luxZ  127.0.0.1:8301  alive   server  1.2.1  2         dc1  <all>
[root@iZbp1dmlbagds9s70r8luxZ local]# 

Node: the node name
Address: the node's address
Status: alive means the node is healthy
Type: server means the node is running in server mode
DC: dc1 means the node belongs to datacenter dc1

 

3. Start Consul in server mode

          So far Consul has always been started in -dev development mode. In that mode, when Consul is used as a configuration center, configuration data cannot be saved; nothing is persisted. For persistence, start Consul in server mode:

[root@iZbp1dmlbagds9s70r8luxZ local]# consul agent -server -ui -bootstrap-expect=1 -data-dir=/usr/local/consul/data -node=agent-one -advertise=47.98.112.71 -bind=0.0.0.0 -client=0.0.0.0
BootstrapExpect is set to 1; this is the same as Bootstrap mode.
bootstrap = true: do not enable unless necessary
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v1.2.1'
           Node ID: '04b82369-8b5b-19f3-ab0d-6a82266a2110'
         Node name: 'agent-one'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: true)
       Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 47.98.112.71 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2018/07/28 10:54:02 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:04b82369-8b5b-19f3-ab0d-6a82266a2110 Address:47.98.112.71:8300}]
    2018/07/28 10:54:02 [WARN] memberlist: Binding to public address without encryption!
    2018/07/28 10:54:02 [INFO] serf: EventMemberJoin: agent-one.dc1 47.98.112.71
    2018/07/28 10:54:02 [WARN] memberlist: Binding to public address without encryption!
    2018/07/28 10:54:02 [INFO] serf: EventMemberJoin: agent-one 47.98.112.71
    2018/07/28 10:54:02 [INFO] agent: Started DNS server 0.0.0.0:8600 (udp)
    2018/07/28 10:54:02 [INFO] raft: Node at 47.98.112.71:8300 [Follower] entering Follower state (Leader: "")
    2018/07/28 10:54:02 [INFO] consul: Adding LAN server agent-one (Addr: tcp/47.98.112.71:8300) (DC: dc1)
    2018/07/28 10:54:02 [INFO] consul: Handled member-join event for server "agent-one.dc1" in area "wan"
    2018/07/28 10:54:02 [WARN] agent/proxy: running as root, will not start managed proxies
    2018/07/28 10:54:02 [INFO] agent: Started DNS server 0.0.0.0:8600 (tcp)
    2018/07/28 10:54:02 [INFO] agent: Started HTTP server on [::]:8500 (tcp)
    2018/07/28 10:54:02 [INFO] agent: started state syncer
    2018/07/28 10:54:08 [WARN] raft: Heartbeat timeout from "" reached, starting election
    2018/07/28 10:54:08 [INFO] raft: Node at 47.98.112.71:8300 [Candidate] entering Candidate state in term 2
    2018/07/28 10:54:08 [INFO] raft: Election won. Tally: 1
    2018/07/28 10:54:08 [INFO] raft: Node at 47.98.112.71:8300 [Leader] entering Leader state
    2018/07/28 10:54:08 [INFO] consul: cluster leadership acquired
    2018/07/28 10:54:08 [INFO] consul: New leader elected: agent-one
    2018/07/28 10:54:08 [INFO] consul: member 'agent-one' joined, marking health alive
    2018/07/28 10:54:08 [INFO] agent: Synced node info
    2018/07/28 10:54:11 [WARN] consul: error getting server health from "agent-one": context deadline exceeded
    2018/07/28 10:54:12 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:14 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:16 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:18 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:20 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:20 [WARN] consul: error getting server health from "agent-one": rpc error getting client: failed to get conn: dial tcp <nil>->47.98.112.71:8300: i/o timeout
    2018/07/28 10:54:23 [WARN] consul: error getting server health from "agent-one": context deadline exceeded
    2018/07/28 10:54:24 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:26 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:28 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:30 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:32 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:32 [WARN] consul: error getting server health from "agent-one": rpc error getting client: failed to get conn: dial tcp <nil>->47.98.112.71:8300: i/o timeout
    2018/07/28 10:54:35 [WARN] consul: error getting server health from "agent-one": context deadline exceeded
    2018/07/28 10:54:36 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:38 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:40 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:42 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:44 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:44 [WARN] consul: error getting server health from "agent-one": rpc error getting client: failed to get conn: dial tcp <nil>->47.98.112.71:8300: i/o timeout
    2018/07/28 10:54:47 [WARN] consul: error getting server health from "agent-one": context deadline exceeded
    2018/07/28 10:54:48 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:50 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:52 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:54 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:56 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:56 [WARN] consul: error getting server health from "agent-one": rpc error getting client: failed to get conn: dial tcp <nil>->47.98.112.71:8300: i/o timeout
    2018/07/28 10:54:59 [WARN] consul: error getting server health from "agent-one": context deadline exceeded
    2018/07/28 10:55:00 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:55:02 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:55:04 [WARN] consul: error getting server health from "agent-one": last request still outstanding
^C    2018/07/28 10:55:10 [INFO] agent: Caught signal:  interrupt
    2018/07/28 10:55:10 [INFO] agent: Graceful shutdown disabled. Exiting
    2018/07/28 10:55:10 [INFO] agent: Requesting shutdown
    2018/07/28 10:55:10 [INFO] consul: shutting down server
    2018/07/28 10:55:10 [WARN] serf: Shutdown without a Leave
    2018/07/28 10:55:10 [WARN] serf: Shutdown without a Leave
    2018/07/28 10:55:10 [INFO] manager: shutting down
    2018/07/28 10:55:10 [INFO] agent: consul server down
    2018/07/28 10:55:10 [INFO] agent: shutdown complete
    2018/07/28 10:55:10 [INFO] agent: Stopping DNS server 0.0.0.0:8600 (tcp)
    2018/07/28 10:55:10 [INFO] agent: Stopping DNS server 0.0.0.0:8600 (udp)
    2018/07/28 10:55:10 [INFO] agent: Stopping HTTP server [::]:8500 (tcp)
    2018/07/28 10:55:11 [WARN] agent: Timeout stopping HTTP server [::]:8500 (tcp)
    2018/07/28 10:55:11 [INFO] agent: Waiting for endpoints to shut down
    2018/07/28 10:55:11 [INFO] agent: Endpoints down
    2018/07/28 10:55:11 [INFO] agent: Exit code: 1
[root@iZbp1dmlbagds9s70r8luxZ local]# 
[root@iZbp1dmlbagds9s70r8luxZ local]# consul agent -server -ui -bootstrap-expect=1 -data-dir=/usr/local/consul/data -node=agent-one -advertise=47.98.112.71 -bind=0.0.0.0 -client=0.0.0.0
BootstrapExpect is set to 1; this is the same as Bootstrap mode.
bootstrap = true: do not enable unless necessary
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v1.2.1'
           Node ID: '04b82369-8b5b-19f3-ab0d-6a82266a2110'
         Node name: 'agent-one'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: true)
       Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 47.98.112.71 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2018/07/28 10:55:15 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:04b82369-8b5b-19f3-ab0d-6a82266a2110 Address:47.98.112.71:8300}]
    2018/07/28 10:55:15 [WARN] memberlist: Binding to public address without encryption!
    2018/07/28 10:55:15 [INFO] serf: EventMemberJoin: agent-one.dc1 47.98.112.71
    2018/07/28 10:55:15 [WARN] memberlist: Binding to public address without encryption!
    2018/07/28 10:55:15 [INFO] serf: EventMemberJoin: agent-one 47.98.112.71
    2018/07/28 10:55:15 [INFO] agent: Started DNS server 0.0.0.0:8600 (udp)
    2018/07/28 10:55:15 [INFO] raft: Node at 47.98.112.71:8300 [Follower] entering Follower state (Leader: "")
    2018/07/28 10:55:15 [WARN] serf: Failed to re-join any previously known node
    2018/07/28 10:55:15 [WARN] serf: Failed to re-join any previously known node
    2018/07/28 10:55:15 [INFO] consul: Adding LAN server agent-one (Addr: tcp/47.98.112.71:8300) (DC: dc1)
    2018/07/28 10:55:15 [INFO] consul: Handled member-join event for server "agent-one.dc1" in area "wan"
    2018/07/28 10:55:15 [WARN] agent/proxy: running as root, will not start managed proxies
    2018/07/28 10:55:15 [INFO] agent: Started DNS server 0.0.0.0:8600 (tcp)
    2018/07/28 10:55:15 [INFO] agent: Started HTTP server on [::]:8500 (tcp)
    2018/07/28 10:55:15 [INFO] agent: started state syncer
    2018/07/28 10:55:21 [WARN] raft: Heartbeat timeout from "" reached, starting election
    2018/07/28 10:55:21 [INFO] raft: Node at 47.98.112.71:8300 [Candidate] entering Candidate state in term 3
    2018/07/28 10:55:21 [INFO] raft: Election won. Tally: 1
    2018/07/28 10:55:21 [INFO] raft: Node at 47.98.112.71:8300 [Leader] entering Leader state
    2018/07/28 10:55:21 [INFO] consul: cluster leadership acquired
    2018/07/28 10:55:21 [INFO] consul: New leader elected: agent-one
    2018/07/28 10:55:21 [INFO] agent: Synced node info

Startup command:

consul agent -server -ui -bootstrap-expect=1 -data-dir=/usr/local/consul/data -node=agent-one -advertise=47.98.112.71 -bind=0.0.0.0 -client=0.0.0.0

Parameter notes:

-server: run in server mode
-ui: enable the web UI
-bootstrap-expect: set to 1, a single server elects itself cluster leader
-data-dir: where Consul stores its state
-node: the node name
-advertise: the address advertised to the cluster (here the server's IP)
-client: the address from which this node's client interfaces can be reached

 

The output above is from a server-mode start. Note that port 8300 must also be opened, since Consul uses it for communication between nodes.
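Opening them follows the same firewall-cmd pattern as before; 8300 is the server RPC port, and 8301/8302 are the LAN/WAN ports shown in the startup output (gossip uses both TCP and UDP):

firewall-cmd --permanent --add-port=8300/tcp
firewall-cmd --permanent --add-port=8301/tcp
firewall-cmd --permanent --add-port=8301/udp
firewall-cmd --permanent --add-port=8302/tcp
firewall-cmd --permanent --add-port=8302/udp
firewall-cmd --reload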

Check the member list again:

[root@iZbp1dmlbagds9s70r8luxZ data]# consul members
Node       Address            Status  Type    Build  Protocol  DC   Segment
agent-one  47.98.112.71:8301  alive   server  1.2.1  2         dc1  <all>
[root@iZbp1dmlbagds9s70r8luxZ data]# 

 

4. Join a cluster

    The command to join a cluster: consul join xx.xx.xx.xx
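For example, a second agent on another machine could join the server started above (the second node's own setup is assumed; 47.98.112.71 is the server from the previous step):

# run on the new node once its agent is up
consul join 47.98.112.71
# both nodes should now appear
consul members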

 

5. Building a Consul cluster

    Here we use Docker containers to build the Consul cluster, by writing a Docker Compose file.

Cluster layout:
1. Three server nodes (consul-server1 to consul-server3) and two client nodes (consul-node1 to consul-node2).
2. The local consul/data1 to data3 directories are mapped into the Docker containers, so the cluster's data is not lost when it restarts.
3. The Consul web HTTP ports are mapped to 8501, 8502, and 8503.
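The host directories referenced by the volumes entries below can be created up front:

mkdir -p consul/data1 consul/data2 consul/data3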

Create docker-compose.yml:

version: '2.0'
services:
  consul-server1:
    image: consul:latest
    hostname: "consul-server1"
    ports:
      - "8501:8500"
    volumes:
      - ./consul/data1:/consul/data
    command: "agent -server -bootstrap-expect 3 -ui -disable-host-node-id -client 0.0.0.0"
  consul-server2:
    image: consul:latest
    hostname: "consul-server2"
    ports:
      - "8502:8500"
    volumes:
      - ./consul/data2:/consul/data
    command: "agent -server -ui -join consul-server1 -disable-host-node-id -client 0.0.0.0"
    depends_on: 
      - consul-server1
  consul-server3:
    image: consul:latest
    hostname: "consul-server3"
    ports:
      - "8503:8500"
    volumes:
      - ./consul/data3:/consul/data
    command: "agent -server -ui -join consul-server1 -disable-host-node-id -client 0.0.0.0"
    depends_on:
      - consul-server1
  consul-node1:
    image: consul:latest
    hostname: "consul-node1"
    command: "agent -join consul-server1 -disable-host-node-id"
    depends_on:
      - consul-server1
  consul-node2:
    image: consul:latest
    hostname: "consul-node2"
    command: "agent -join consul-server1 -disable-host-node-id"
    depends_on:
      - consul-server1

When the cluster starts, consul-server1 becomes the initial leader, and then server2 to 3 and node1 to 2 join the cluster. If server1 fails and goes offline, server2 and server3 hold an election and choose a new leader.

Cluster operations:
Create and start the cluster: docker-compose up -d
Stop the whole cluster: docker-compose stop
Start the cluster again: docker-compose start
Remove the whole cluster: docker-compose rm (note: stop it first)
Access the web UI at:
http://localhost:8501
http://localhost:8502
http://localhost:8503
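After docker-compose up -d, membership can be verified from inside any server container:

# lists all five agents once they have joined
docker-compose exec consul-server1 consul members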

 

With the cluster built, how do we load-balance across it? Use nginx:

Define the upstream server list:

upstream consul {
    server 127.0.0.1:8501;
    server 127.0.0.1:8502;
    server 127.0.0.1:8503;
}

Server configuration:

server {
    listen       80;
    server_name  consul.test.com;  # the service domain; fill in your own

    location / {
        proxy_pass  http://consul;  # forward requests to the consul upstream list
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
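A quick smoke test of the nginx front end, pinning the Host header in case consul.test.com does not resolve yet:

# any of the three upstream servers may answer
curl -H 'Host: consul.test.com' http://127.0.0.1/v1/status/leader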

Reference: guide to building a Consul cluster with Docker

 

    Summary:

This wraps up the basics of Consul. It is only basic usage, though; going further would mean secondary development against the Consul client, for example customizing service registration and discovery, and, since Consul as a configuration center can lose configuration data, working out how to back up and restore configuration automatically. I hope to dig into these extensions in depth when time allows.

   Related articles:

Consul1 - Installing Consul on Windows

Consul2 - Using Consul as a service registration and discovery center

Consul3 - Using Consul as a configuration center

   The health-check controller I wrote earlier was actually unnecessary; Spring Boot already takes care of this. See the reference code:

Using Spring Boot's health check; changes to the Consul health check

Everyone is welcome to join the group to learn and discuss: 331227121

 
