Consul and Cross-Host Docker Communication

# Consul

Introduction

Consul consists of multiple components, but as a whole it provides service discovery and service configuration for your infrastructure. It offers the following key features:

  1. Service discovery. Consul clients can register a service, such as api or mysql, and other clients can use Consul to discover the providers of a given service. Using either DNS or HTTP, applications can easily find the services they depend on (a short usage sketch follows this list).
  2. Health checking. Consul clients can define any number of health checks, either tied to a specific service (e.g. does the webserver return a 200 OK status code?) or to the local node (e.g. is memory utilization above 90%?). Operators can use this information to monitor cluster health, and the service discovery components use it to keep traffic away from unhealthy hosts.
  3. Key/Value store. Applications can use Consul's hierarchical key/value store for whatever they need: dynamic configuration, feature flags, coordination, leader election, and so on. A simple HTTP API makes it easy to use.
  4. Multi-datacenter. Consul supports multiple datacenters out of the box. This means users do not need to worry about building extra layers of abstraction to grow to multiple regions.
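
As a quick illustration of service discovery and the KV store: the commands below query a registered service over DNS and the HTTP health API, and write/read a key. The service name web and the key config/db/host are hypothetical; 8500 and 8600 are the agent's default HTTP and DNS ports.

# discover healthy instances of a service named "web" (hypothetical) via DNS
dig @127.0.0.1 -p 8600 web.service.consul

# the same lookup over the HTTP API, returning only passing instances
curl 'http://127.0.0.1:8500/v1/health/service/web?passing'

# write and read a key in the KV store
consul kv put config/db/host 10.0.0.5
consul kv get config/db/host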

Basic Architecture

Consul is a distributed, highly available system. This section covers the basics and deliberately skips some details so you can quickly understand how Consul works. For more detail, refer to the in-depth architecture documentation.

Every node that provides services to Consul runs a Consul agent. Running an agent is not required just to discover services or to get and set key/value data, but the agent is responsible for health-checking the node itself and the services running on it.

Agents talk to one or more Consul servers. The Consul servers are where data is stored and replicated, and they elect a leader among themselves. Although Consul can run with a single server, 3 to 5 servers are recommended to avoid data loss in failure scenarios. A cluster of servers is recommended for each datacenter.

Components of your infrastructure that need to discover other services can query any Consul server or any Consul agent; agents forward requests to the servers automatically.

Each datacenter runs its own cluster of Consul servers. When a cross-datacenter service discovery or configuration request is made, the local Consul servers forward the request to the remote datacenter and return the result.
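
For example, a cross-datacenter lookup can be issued against the local agent. The datacenter name dc2 below is an assumption for illustration:

# list the nodes of a remote datacenter through the local agent
curl 'http://127.0.0.1:8500/v1/catalog/nodes?dc=dc2'

# DNS lookup for a service in a specific datacenter
dig @127.0.0.1 -p 8600 web.service.dc2.consul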

Installing Consul

Download the package appropriate for your system from the official website; Consul is packaged as a single zip file.

Official download page: https://www.consul.io/downloads.html

After downloading, unzip the package and copy the consul binary to a directory on your PATH. On Unix systems, ~/bin and /usr/local/bin are the usual locations, depending on whether you want a single-user or system-wide install. On Windows, likewise place it in a directory on %PATH%.
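
A minimal sketch of these steps on Linux (the zip file name depends on the version and platform you downloaded):

unzip consul_0.7.4_linux_amd64.zip    # assumed file name; use the one you downloaded
cp consul /usr/local/bin/             # or ~/bin for a single-user install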

Verifying the Installation

After the installation, verify it by opening a new terminal window and running consul with no arguments. You should see output similar to the following:

[root@dhcp-10-201-102-248 ~]# consul
usage: consul [--version] [--help] <command> [<args>]
Available commands are:
    agent          Runs a Consul agent
    configtest     Validate config file
    event          Fire a new event
    exec           Executes a command on Consul nodes
    force-leave    Forces a member of the cluster to enter the "left" state
    info           Provides debugging information for operators
    join           Tell Consul agent to join cluster
    keygen         Generates a new encryption key
    keyring        Manages gossip layer encryption keys
    kv             Interact with the key-value store
    leave          Gracefully leaves the Consul cluster and shuts down
    lock           Execute a command holding a lock
    maint          Controls node or service maintenance mode
    members        Lists the members of a Consul cluster
    monitor        Stream logs from a Consul agent
    operator       Provides cluster-level tools for Consul operators
    reload         Triggers the agent to reload configuration files
    rtt            Estimates network round trip time between nodes
    snapshot       Saves, restores and inspects snapshots of Consul server state
    version        Prints the Consul version
    watch          Watch for changes in Consul

If you get an error such as consul: command not found, your PATH is probably not set correctly. Go back and check that the directory containing the consul binary is included in your PATH.
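
For instance, you can check and extend the PATH like this (the directory is whatever you copied the binary to):

echo $PATH
export PATH=$PATH:/usr/local/bin   # adjust to the directory holding the consul binary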

Running the Agent

After installing Consul, the agent must be run. The agent can run in either server or client mode. Every datacenter must have at least one server. A cluster of 3 or 5 servers is recommended; deploying a single server will inevitably lose data in a failure scenario.

All other agents run in client mode. A client is a very lightweight process that registers services, runs health checks, and forwards queries to the servers. An agent must run on every host in the cluster.

Starting a Consul Server

A typical command to start a Consul server looks like this:

consul agent -server -bootstrap-expect 3 -data-dir /tmp/consul -node=s1 -bind=10.201.102.198 -ui-dir ./consul_ui/ -rejoin -config-dir=/etc/consul.d/ -client 0.0.0.0
  1. -server: run the agent in server mode;
  2. -bootstrap-expect: the number of server nodes expected in the datacenter. When this value is supplied, Consul waits until that many servers are available before bootstrapping the cluster. This flag cannot be combined with -bootstrap;
  3. -bind: the address used for communication inside the cluster; it must be reachable from every other node in the cluster. The default is 0.0.0.0;
  4. -node: the node's name within the cluster; it must be unique in the cluster and defaults to the host's hostname;
  5. -ui-dir: the path containing the web UI resources; the directory must be readable;
  6. -rejoin: makes Consul ignore a previous leave and attempt to rejoin the cluster when it starts again;
  7. -config-dir: the configuration directory; every file in it ending in .json is loaded (a sample service definition is sketched right after this list);
  8. -client: the address Consul binds its HTTP, DNS and RPC interfaces to. The default is 127.0.0.1, which does not serve external requests; set it to 0.0.0.0 if you want to expose these interfaces.
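
As an illustration of item 7, a service definition dropped into /etc/consul.d/ might look like the following. The file name web.json, the service, and its health check are hypothetical:

{
  "service": {
    "name": "web",
    "port": 80,
    "check": {
      "http": "http://localhost:80/health",
      "interval": "10s"
    }
  }
}

Starting the server with the command above produces output similar to the following: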
[root@dhcp-10-201-102-198 consul]# consul agent -server -bootstrap-expect 3 -data-dir /tmp/consul -node=s1 -bind=10.201.102.198 -ui-dir ./consul_ui/ -rejoin -config-dir=/etc/consul.d/ -client 0.0.0.0
==> WARNING: Expect Mode enabled, expecting 3 servers
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Consul agent running!
           Version: 'v0.7.4'
           Node ID: '422ec677-74ef-8f29-2f22-01effeed6334'
         Node name: 's1'
        Datacenter: 'dc1'
            Server: true (bootstrap: false)
       Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
      Cluster Addr: 10.201.102.198 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
             Atlas: <disabled>
==> Log data will now stream in as it occurs:
    2017/03/17 18:03:08 [INFO] raft: Restored from snapshot 139-352267-1489707086023
    2017/03/17 18:03:08 [INFO] raft: Initial configuration (index=6982): [{Suffrage:Voter ID:10.201.102.199:8300 Address:10.201.102.199:8300} {Suffrage:Voter ID:10.201.102.200:8300 Address:10.201.102.200:8300} {Suffrage:Voter ID:10.201.102.198:8300 Address:10.201.102.198:8300}]
    2017/03/17 18:03:08 [INFO] raft: Node at 10.201.102.198:8300 [Follower] entering Follower state (Leader: "")
    2017/03/17 18:03:08 [INFO] serf: EventMemberJoin: s1 10.201.102.198
    2017/03/17 18:03:08 [INFO] serf: Attempting re-join to previously known node: s2: 10.201.102.199:8301
    2017/03/17 18:03:08 [INFO] consul: Adding LAN server s1 (Addr: tcp/10.201.102.198:8300) (DC: dc1)
    2017/03/17 18:03:08 [INFO] consul: Raft data found, disabling bootstrap mode
    2017/03/17 18:03:08 [INFO] serf: EventMemberJoin: s2 10.201.102.199
    2017/03/17 18:03:08 [INFO] serf: EventMemberJoin: s3 10.201.102.200
    2017/03/17 18:03:08 [INFO] serf: Re-joined to previously known node: s2: 10.201.102.199:8301
    2017/03/17 18:03:08 [INFO] consul: Adding LAN server s2 (Addr: tcp/10.201.102.199:8300) (DC: dc1)
    2017/03/17 18:03:08 [INFO] consul: Adding LAN server s3 (Addr: tcp/10.201.102.200:8300) (DC: dc1)
    2017/03/17 18:03:08 [INFO] serf: EventMemberJoin: s1.dc1 10.201.102.198
    2017/03/17 18:03:08 [INFO] consul: Adding WAN server s1.dc1 (Addr: tcp/10.201.102.198:8300) (DC: dc1)
    2017/03/17 18:03:08 [WARN] serf: Failed to re-join any previously known node
    2017/03/17 18:03:14 [INFO] agent: Synced service 'consul'
    2017/03/17 18:03:14 [INFO] agent: Deregistered service 'consul01'
    2017/03/17 18:03:14 [INFO] agent: Deregistered service 'consul02'
    2017/03/17 18:03:14 [INFO] agent: Deregistered service 'consul03'

Open a new terminal window and run consul members; you should see the members of the Consul cluster:

[root@dhcp-10-201-102-198 ~]# consul members
Node  Address              Status  Type    Build  Protocol  DC
s1    10.201.102.198:8301  alive   server  0.7.4  2         dc1
s2    10.201.102.199:8301  alive   server  0.7.4  2         dc1
s3    10.201.102.200:8301  alive   server  0.7.4  2         dc1
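
The state of the server cluster can also be checked over the HTTP API, for example (default port assumed):

# current raft leader and peer set
curl http://127.0.0.1:8500/v1/status/leader
curl http://127.0.0.1:8500/v1/status/peers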

Starting a Consul Client

A typical command to start a Consul client looks like this:

consul agent -data-dir /tmp/consul -node=c1 -bind=10.201.102.248 -config-dir=/etc/consul.d/ -join 10.201.102.198

This runs the consul agent in client mode; -join makes it join the existing cluster.

[root@dhcp-10-201-102-248 ~]# consul agent -data-dir /tmp/consul -node=c1 -bind=10.201.102.248 -config-dir=/etc/consul.d/ -join 10.201.102.198
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Joining cluster...
    Join completed. Synced with 1 initial agents
==> Consul agent running!
           Version: 'v0.7.4'
           Node ID: '564dc0c7-7f4f-7402-a301-cebe7f024294'
         Node name: 'c1'
        Datacenter: 'dc1'
            Server: false (bootstrap: false)
       Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
      Cluster Addr: 10.201.102.248 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
             Atlas: <disabled>
==> Log data will now stream in as it occurs:
    2017/03/17 15:35:16 [INFO] serf: EventMemberJoin: c1 10.201.102.248
    2017/03/17 15:35:16 [INFO] agent: (LAN) joining: [10.201.102.198]
    2017/03/17 15:35:16 [INFO] serf: EventMemberJoin: s2 10.201.102.199
    2017/03/17 15:35:16 [INFO] serf: EventMemberJoin: s3 10.201.102.200
    2017/03/17 15:35:16 [INFO] serf: EventMemberJoin: s1 10.201.102.198
    2017/03/17 15:35:16 [INFO] agent: (LAN) joined: 1 Err: <nil>
    2017/03/17 15:35:16 [INFO] consul: adding server s2 (Addr: tcp/10.201.102.199:8300) (DC: dc1)
    2017/03/17 15:35:16 [INFO] consul: adding server s3 (Addr: tcp/10.201.102.200:8300) (DC: dc1)
    2017/03/17 15:35:16 [INFO] consul: adding server s1 (Addr: tcp/10.201.102.198:8300) (DC: dc1)
    2017/03/17 15:35:16 [INFO] agent: Synced node info
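
Once the client has joined, queries made against the local client agent are forwarded to the servers. A quick sanity check (key and value below are illustrative):

# on c1: the client now appears in the member list
consul members

# a KV write issued on the client is visible from any server
consul kv put demo/greeting hello
consul kv get demo/greeting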

Cross-Host Docker Communication

Preparing the Environment

Prepare a physical or virtual machine dev-11 with IP address 162.105.75.113; the Docker container host1 will run on this host.

Prepare a physical or virtual machine dev-12 with IP address 162.105.75.220; the Docker container host2 will run on this host.

Installing and Configuring Consul

Download Consul directly from the official site, unzip it, and copy the binary to /usr/local/bin; that completes the installation. Also create a new directory /opt/consul to hold the files Consul generates at runtime.
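
A short sketch of those steps, to be run on both hosts (the zip file name is assumed; substitute the one you downloaded):

unzip consul_0.7.5_linux_amd64.zip   # assumed file name
cp consul /usr/local/bin/
mkdir -p /opt/consul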

On dev-11, run the following command to make dev-11 the server node:

consul agent -server -bootstrap -data-dir /opt/consul -bind=162.105.75.113

On dev-12, run the following command to make dev-12 a client node and join it to the cluster:

consul agent -data-dir /opt/consul -bind=162.105.75.220 -join 162.105.75.113

Run consul members on both dev-11 and dev-12 to check that the cluster contains the two hosts:

[root@dev-12 skc]# consul members
Node    Address              Status  Type    Build  Protocol  DC
dev-11  162.105.75.113:8301  alive   server  0.7.5  2         dc1
dev-12  162.105.75.220:8301  alive   client  0.7.5  2         dc1

If you hit 500 errors during setup and the cluster cannot be formed, check whether the firewall has been disabled.
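
Instead of disabling the firewall entirely, you can open the ports Consul and the Docker overlay network use. A sketch for firewalld (adjust to your environment):

# Consul: server RPC, serf LAN/WAN, HTTP API, DNS
firewall-cmd --permanent --add-port=8300/tcp --add-port=8301/tcp --add-port=8301/udp --add-port=8302/tcp --add-port=8302/udp --add-port=8500/tcp --add-port=8600/tcp --add-port=8600/udp
# Docker overlay networking: network control plane and VXLAN data plane
firewall-cmd --permanent --add-port=7946/tcp --add-port=7946/udp --add-port=4789/udp
firewall-cmd --reload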

Configuring Docker Startup Parameters

Docker's startup parameters need to be configured:

# Edit the configuration file /lib/systemd/system/docker.service
[root@jamza_vm_master_litepaas registry_test]# cat /lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker

# the cluster-store host can simply be specified as localhost
# cluster-advertise can be given as <local NIC name>:<port> instead of an IP address
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --insecure-registry=172.18.0.3:5000 --cluster-store=consul://127.0.0.1:8500 --cluster-advertise=eth0:2376
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
[root@jamza_vm_master_litepaas registry_test]#

# Restart the Docker service
[root@jamza_vm_master_litepaas registry_test]# systemctl daemon-reload
[root@jamza_vm_master_litepaas registry_test]# service docker restart
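
Alternatively, the same two options can be placed in /etc/docker/daemon.json instead of the unit file; a sketch (do not specify the same option in both places):

{
  "cluster-store": "consul://127.0.0.1:8500",
  "cluster-advertise": "eth0:2376"
}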

Creating the Overlay Network

On dev-11, run docker network create -d overlay multihost to create an overlay network named multihost, then check the result:

[root@dev-11 ~]# docker network create -d overlay multihost
[root@dev-11 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
914e62484c33        bridge              bridge              local
018d41df39c5        docker_gwbridge     bridge              local
0edff5347b33        host                host                local
e7b16dd58248        multihost           overlay             global
1d25e019c111        none                null                local

At this point, on dev-12:

[root@dev-12 skc]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
7af47cbb82c8        bridge              bridge              local
30911dfed7f2        docker_gwbridge     bridge              local
6e6deb4077c4        host                host                local
e7b16dd58248        multihost           overlay             global
dc7f861e601a        none                null                local

This shows that the overlay network has been synchronized: the multihost network created on dev-11 is visible on dev-12.
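
You can also inspect the network on either host to confirm that both see the same network ID and subnet:

docker network inspect multihost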

Creating Containers and Testing

Create a container on dev-11:

[root@dev-11 skc]# docker run -it --name=host1 --net=multihost debugman007/ubt14-ssh:v1 bash

Create a container on dev-12:

[root@dev-12 skc]# docker run -it --name=host2 --net=multihost debugman007/ubt14-ssh:v1 bash

Inside host1:

root@d19636118ead:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0a:00:00:02  
          inet addr:10.0.0.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:aff:fe00:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:24 errors:0 dropped:0 overruns:0 frame:0
          TX packets:25 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1904 (1.9 KB)  TX bytes:2122 (2.1 KB)
 
eth1      Link encap:Ethernet  HWaddr 02:42:ac:11:00:02  
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1018 (1.0 KB)  TX bytes:868 (868.0 B)
 
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:970 (970.0 B)  TX bytes:970 (970.0 B)

Inside host2:

root@7bd8ff1ab133:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0a:00:00:03  
          inet addr:10.0.0.3  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:aff:fe00:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:25 errors:0 dropped:0 overruns:0 frame:0
          TX packets:23 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1966 (1.9 KB)  TX bytes:1850 (1.8 KB)
 
eth1      Link encap:Ethernet  HWaddr 02:42:ac:11:00:02  
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:22 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2412 (2.4 KB)  TX bytes:648 (648.0 B)
 
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:934 (934.0 B)  TX bytes:934 (934.0 B)

Now, from inside host1, ping the containers by name (host1 and host2):

root@d19636118ead:/# ping host1
PING host1 (10.0.0.2) 56(84) bytes of data.
64 bytes from d19636118ead (10.0.0.2): icmp_seq=1 ttl=64 time=0.047 ms
64 bytes from d19636118ead (10.0.0.2): icmp_seq=2 ttl=64 time=0.057 ms
^C
--- host1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.047/0.052/0.057/0.005 ms
root@d19636118ead:/# ping host2
PING host2 (10.0.0.3) 56(84) bytes of data.
64 bytes from host2.multihost (10.0.0.3): icmp_seq=1 ttl=64 time=0.917 ms
64 bytes from host2.multihost (10.0.0.3): icmp_seq=2 ttl=64 time=0.975 ms
64 bytes from host2.multihost (10.0.0.3): icmp_seq=3 ttl=64 time=0.935 ms
^C
--- host2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.917/0.942/0.975/0.034 ms

The pings succeed, which shows that cross-host container communication is working.
