Deploying a Consul Cluster in Docker with ACL Configuration

Planning and Preparation

The cluster deployed here has three Consul nodes, all of type server:

Container IP   Node      Type
172.17.0.2     server1   server
172.17.0.3     server2   server
172.17.0.4     server3   server

Mapping Consul's data files onto the host makes it easy to back up the data and to rebuild the containers later.
Create the directories server1, server2, and server3 on the host, one per Consul node:

[root@wuli-centOS7 ~]# mkdir -p /data/consul/server1/config
[root@wuli-centOS7 ~]# mkdir -p /data/consul/server1/data
[root@wuli-centOS7 ~]# mkdir -p /data/consul/server1/log

[root@wuli-centOS7 ~]# mkdir -p /data/consul/server2/config
[root@wuli-centOS7 ~]# mkdir -p /data/consul/server2/data
[root@wuli-centOS7 ~]# mkdir -p /data/consul/server2/log

[root@wuli-centOS7 ~]# mkdir -p /data/consul/server3/config
[root@wuli-centOS7 ~]# mkdir -p /data/consul/server3/data
[root@wuli-centOS7 ~]# mkdir -p /data/consul/server3/log
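The nine mkdir calls above can be collapsed into a single loop. A minimal sketch; the base path is a variable here so the snippet can be tried anywhere, while the article itself uses /data/consul:

```shell
# Create config/data/log for each of the three servers in one pass.
# BASE defaults to a local demo directory; set BASE=/data/consul to
# reproduce the exact layout used in this article.
BASE="${BASE:-./consul-demo}"
for i in 1 2 3; do
  for sub in config data log; do
    mkdir -p "$BASE/server$i/$sub"
  done
done
ls -d "$BASE"/server*
```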

Building the Consul Cluster

Create the server1 configuration file and start the server1 node

  1. Create the configuration file for server1:
[root@wuli-centOS7 ~]# vim /data/consul/server1/config/config.json
{
    "datacenter": "dc1",
    "bootstrap_expect": 3,
    "data_dir": "/consul/data",
    "log_file": "/consul/log/",
    "log_level": "INFO",
    "node_name": "consul_server_1",
    "client_addr": "0.0.0.0",
    "server": true,
    "ui": true,
    "enable_script_checks": true,
    "addresses": {
        "https": "0.0.0.0",
        "dns": "0.0.0.0"
    }
}
  2. Start the consul_server_1 node:
[root@wuli-centOS7 ~]# docker run -d -p 8510:8500 -v /data/consul/server1/data:/consul/data -v /data/consul/server1/config:/consul/config -e CONSUL_BIND_INTERFACE='eth0' --privileged=true --name=consul_server_1 consul agent -data-dir=/consul/data;

Notes on the docker run command:

  • Environment variable:
    CONSUL_BIND_INTERFACE=eth0: at container startup, Consul automatically binds to the IP address of the eth0 interface

  • docker flags:
    -e: pass an environment variable into the container
    -d: run in detached (daemon) mode
    -p: publish a container port to the host
    --privileged: run the container with extended (root-level) privileges
    --name: specify the container name
    consul: the image name, followed by the Consul startup command

After startup, because bootstrap_expect=3 is configured but only one server is running, the log reports an error: no cluster leader.

[root@wuli-centOS7 ~]# docker exec consul_server_1 consul monitor
2020-05-03T18:01:40.453Z [ERROR] agent.anti_entropy: failed to sync remote state: error="No cluster leader"
2020-05-03T18:01:57.802Z [ERROR] agent: Coordinate update error: error="No cluster leader"

Create the server2 configuration file and start the server2 node

  1. Create the configuration file for server2:
[root@wuli-centOS7 ~]# vim /data/consul/server2/config/config.json
{
    "datacenter": "dc1",
    "bootstrap_expect": 3,
    "data_dir": "/consul/data",
    "log_file": "/consul/log/",
    "log_level": "INFO",
    "node_name": "consul_server_2",
    "client_addr": "0.0.0.0",
    "server": true,
    "ui": true,
    "enable_script_checks": true,
    "addresses": {
        "https": "0.0.0.0",
        "dns": "0.0.0.0"
    }
}
  2. Start the consul_server_2 node:
[root@wuli-centOS7 ~]# docker run -d -p 8520:8500 -v /data/consul/server2/data:/consul/data -v /data/consul/server2/config:/consul/config -e CONSUL_BIND_INTERFACE='eth0' --privileged=true --name=consul_server_2 consul agent -data-dir=/consul/data;

Create the server3 configuration file and start the server3 node

  1. Create the configuration file for server3:
[root@wuli-centOS7 ~]# vim /data/consul/server3/config/config.json
{
    "datacenter": "dc1",
    "bootstrap_expect": 3,
    "data_dir": "/consul/data",
    "log_file": "/consul/log/",
    "log_level": "INFO",
    "node_name": "consul_server_3",
    "client_addr": "0.0.0.0",
    "server": true,
    "ui": true,
    "enable_script_checks": true,
    "addresses": {
        "https": "0.0.0.0",
        "dns": "0.0.0.0"
    }
}
  2. Start the consul_server_3 node:
[root@wuli-centOS7 ~]# docker run -d -p 8530:8500 -v /data/consul/server3/data:/consul/data -v /data/consul/server3/config:/consul/config -e CONSUL_BIND_INTERFACE='eth0' --privileged=true --name=consul_server_3 consul agent -data-dir=/consul/data;

Joining the Cluster

  1. Look up the IPs of server2 and server3, then add all the nodes to the cluster with the join command:
[root@wuli-centOS7 ~]# docker inspect --format '{{ .NetworkSettings.IPAddress }}' consul_server_2
172.17.0.3
[root@wuli-centOS7 ~]# docker inspect --format '{{ .NetworkSettings.IPAddress }}' consul_server_3
172.17.0.4
[root@wuli-centOS7 ~]# docker exec consul_server_1 consul join 172.17.0.3
[root@wuli-centOS7 ~]# docker exec consul_server_1 consul join 172.17.0.4
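The two join commands above can be wrapped in a small helper so that adding more servers later is a one-liner. A sketch assuming the consul_server_1 container name from this article; the function is only defined here, not executed:

```shell
# Join every given IP to the cluster via the consul_server_1 container.
join_cluster() {
  local ip
  for ip in "$@"; do
    docker exec consul_server_1 consul join "$ip"
  done
}
# usage: join_cluster 172.17.0.3 172.17.0.4
```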
  2. Check the logs:
[root@wuli-centOS7 ~]# docker exec consul_server_1 consul monitor
2020-05-03T19:04:39.254Z [INFO]  agent: (LAN) joining: lan_addresses=[172.17.0.3]
2020-05-03T19:04:39.260Z [INFO]  agent.server.serf.lan: serf: EventMemberJoin: consul_server_2 172.17.0.3
2020-05-03T19:04:39.261Z [INFO]  agent: (LAN) joined: number_of_nodes=1
2020-05-03T19:04:39.261Z [INFO]  agent.server: Adding LAN server: server="consul_server_2 (Addr: tcp/172.17.0.3:8300) (DC: dc1)"
2020-05-03T19:04:39.265Z [INFO]  agent.server.serf.wan: serf: EventMemberJoin: consul_server_2.dc1 172.17.0.3
2020-05-03T19:04:39.266Z [INFO]  agent.server: Handled event for server in area: event=member-join server=consul_server_2.dc1 area=wan
2020-05-03T19:04:40.362Z [ERROR] agent: Coordinate update error: error="No cluster leader"
2020-05-03T19:04:44.019Z [INFO]  agent: (LAN) joining: lan_addresses=[172.17.0.4]
2020-05-03T19:04:44.021Z [INFO]  agent.server.serf.lan: serf: EventMemberJoin: consul_server_3 172.17.0.4
2020-05-03T19:04:44.021Z [INFO]  agent: (LAN) joined: number_of_nodes=1
2020-05-03T19:04:44.022Z [INFO]  agent.server: Adding LAN server: server="consul_server_3 (Addr: tcp/172.17.0.4:8300) (DC: dc1)"
2020-05-03T19:04:44.027Z [INFO]  agent.server: Found expected number of peers, attempting bootstrap: peers=172.17.0.2:8300,172.17.0.3:8300,172.17.0.4:8300
2020-05-03T19:04:44.049Z [INFO]  agent.server.serf.wan: serf: EventMemberJoin: consul_server_3.dc1 172.17.0.4
2020-05-03T19:04:44.049Z [INFO]  agent.server: Handled event for server in area: event=member-join server=consul_server_3.dc1 area=wan
2020-05-03T19:04:48.088Z [WARN]  agent.server.raft: heartbeat timeout reached, starting election: last-leader=
2020-05-03T19:04:48.088Z [INFO]  agent.server.raft: entering candidate state: node="Node at 172.17.0.2:8300 [Candidate]" term=2
2020-05-03T19:04:48.100Z [INFO]  agent.server.raft: election won: tally=2
2020-05-03T19:04:48.101Z [INFO]  agent.server.raft: entering leader state: leader="Node at 172.17.0.2:8300 [Leader]"
2020-05-03T19:04:48.101Z [INFO]  agent.server.raft: added peer, starting replication: peer=78293668-16a6-1de0-673f-455d594e7447
2020-05-03T19:04:48.101Z [INFO]  agent.server.raft: added peer, starting replication: peer=0b6169d8-7acc-ed24-682f-56ffd12b486c
2020-05-03T19:04:48.102Z [INFO]  agent.server: cluster leadership acquired
2020-05-03T19:04:48.103Z [INFO]  agent.server: New leader elected: payload=consul_server_1
2020-05-03T19:04:48.104Z [WARN]  agent.server.raft: appendEntries rejected, sending older logs: peer="{Voter 78293668-16a6-1de0-673f-455d594e7447 172.17.0.3:8300}" next=1
2020-05-03T19:04:48.107Z [INFO]  agent.server.raft: pipelining replication: peer="{Voter 0b6169d8-7acc-ed24-682f-56ffd12b486c 172.17.0.4:8300}"
2020-05-03T19:04:48.112Z [INFO]  agent.server.raft: pipelining replication: peer="{Voter 78293668-16a6-1de0-673f-455d594e7447 172.17.0.3:8300}"
2020-05-03T19:04:48.120Z [INFO]  agent.server: Cannot upgrade to new ACLs: leaderMode=0 mode=0 found=true leader=172.17.0.2:8300
2020-05-03T19:04:48.129Z [INFO]  agent.leader: started routine: routine="CA root pruning"
2020-05-03T19:04:48.129Z [INFO]  agent.server: member joined, marking health alive: member=consul_server_1
2020-05-03T19:04:48.146Z [INFO]  agent.server: member joined, marking health alive: member=consul_server_2
2020-05-03T19:04:48.156Z [INFO]  agent.server: member joined, marking health alive: member=consul_server_3
2020-05-03T19:04:48.720Z [INFO]  agent: Synced node info

  3. List the members:
[root@wuli-centOS7 ~]# docker exec consul_server_1 consul members
Node             Address          Status  Type    Build  Protocol  DC   Segment
consul_server_1  172.17.0.2:8301  alive   server  1.7.2  2         dc1  <all>
consul_server_2  172.17.0.3:8301  alive   server  1.7.2  2         dc1  <all>
consul_server_3  172.17.0.4:8301  alive   server  1.7.2  2         dc1  <all>
  4. Check the Raft peers to see the election result and which node is the leader:
[root@wuli-centOS7 ~]# docker exec consul_server_1 consul operator raft list-peers    
Node             ID                                    Address          State     Voter  RaftProtocol
consul_server_1  0214e0b9-04fe-e8ad-c855-b5091dfc8e2e  172.17.0.2:8300  leader    true   3
consul_server_2  78293668-16a6-1de0-673f-455d594e7447  172.17.0.3:8300  follower  true   3
consul_server_3  0b6169d8-7acc-ed24-682f-56ffd12b486c  172.17.0.4:8300  follower  true   3
  5. Open the UI; the nodes are healthy, and the starred node is the leader.

Verifying the Consul Cluster Election Mechanism

In Consul, only server nodes participate in the Raft algorithm as part of the peer set. A Raft node is always in one of three states: follower, candidate, or leader. server1 is currently the leader, so let's restart the consul_server_1 container and watch how the cluster reacts.

  1. Restart consul_server_1:
[root@wuli-centOS7 ~]# docker restart consul_server_1
  2. Watch the logs on server2 and server3. consul_server_2's log:
[root@wuli-centOS7 ~]# docker exec consul_server_2 consul monitor
2020-05-03T19:26:29.564Z [INFO]  agent.server.memberlist.lan: memberlist: Suspect consul_server_1 has failed, no acks received
2020-05-03T19:26:30.533Z [INFO]  agent.server.serf.wan: serf: EventMemberUpdate: consul_server_1.dc1
2020-05-03T19:26:30.533Z [INFO]  agent.server: Handled event for server in area: event=member-update server=consul_server_1.dc1 area=wan
2020-05-03T19:26:30.564Z [INFO]  agent.server.serf.lan: serf: EventMemberUpdate: consul_server_1
2020-05-03T19:26:30.565Z [INFO]  agent.server: Updating LAN server: server="consul_server_1 (Addr: tcp/172.17.0.2:8300) (DC: dc1)"
2020-05-03T19:26:33.542Z [WARN]  agent.server.raft: rejecting vote request since we have a leader: from=172.17.0.4:8300 leader=172.17.0.2:8300
2020-05-03T19:26:33.565Z [INFO]  agent.server: New leader elected: payload=consul_server_3
  3. consul_server_3's log:
[root@wuli-centOS7 ~]# docker exec consul_server_3 consul monitor
2020-05-03T19:26:28.151Z [ERROR] agent.server.memberlist.lan: memberlist: Push/Pull with consul_server_1 failed: dial tcp 172.17.0.2:8301: connect: connection refused
2020-05-03T19:26:30.235Z [INFO]  agent.server.memberlist.lan: memberlist: Suspect consul_server_1 has failed, no acks received
2020-05-03T19:26:30.420Z [ERROR] agent: Coordinate update error: error="rpc error making call: stream closed"
2020-05-03T19:26:30.536Z [INFO]  agent.server.serf.wan: serf: EventMemberUpdate: consul_server_1.dc1
2020-05-03T19:26:30.536Z [INFO]  agent.server: Handled event for server in area: event=member-update server=consul_server_1.dc1 area=wan
2020-05-03T19:26:30.729Z [INFO]  agent.server.serf.lan: serf: EventMemberUpdate: consul_server_1
2020-05-03T19:26:30.729Z [INFO]  agent.server: Updating LAN server: server="consul_server_1 (Addr: tcp/172.17.0.2:8300) (DC: dc1)"
2020-05-03T19:26:32.234Z [WARN]  agent.server.memberlist.wan: memberlist: Was able to connect to consul_server_1.dc1 but other probes failed, network may be misconfigured
2020-05-03T19:26:33.536Z [WARN]  agent.server.raft: heartbeat timeout reached, starting election: last-leader=172.17.0.2:8300
2020-05-03T19:26:33.536Z [INFO]  agent.server.raft: entering candidate state: node="Node at 172.17.0.4:8300 [Candidate]" term=3
2020-05-03T19:26:33.546Z [INFO]  agent.server.raft: election won: tally=2
2020-05-03T19:26:33.546Z [INFO]  agent.server.raft: entering leader state: leader="Node at 172.17.0.4:8300 [Leader]"
2020-05-03T19:26:33.546Z [INFO]  agent.server.raft: added peer, starting replication: peer=0214e0b9-04fe-e8ad-c855-b5091dfc8e2e
2020-05-03T19:26:33.546Z [INFO]  agent.server.raft: added peer, starting replication: peer=78293668-16a6-1de0-673f-455d594e7447
2020-05-03T19:26:33.547Z [INFO]  agent.server: cluster leadership acquired
2020-05-03T19:26:33.548Z [INFO]  agent.server: New leader elected: payload=consul_server_3
2020-05-03T19:26:33.550Z [INFO]  agent.server.raft: pipelining replication: peer="{Voter 78293668-16a6-1de0-673f-455d594e7447 172.17.0.3:8300}"
2020-05-03T19:26:33.551Z [INFO]  agent.server.raft: pipelining replication: peer="{Voter 0214e0b9-04fe-e8ad-c855-b5091dfc8e2e 172.17.0.2:8300}"
2020-05-03T19:26:33.554Z [INFO]  agent.server: Cannot upgrade to new ACLs: leaderMode=0 mode=0 found=true leader=172.17.0.4:8300
2020-05-03T19:26:33.555Z [INFO]  agent.leader: started routine: routine="CA root pruning"
  4. After consul_server_1 comes back up, list the Raft peers; consul_server_3 has been elected the new leader:
[root@wuli-centOS7 ~]# docker exec -it consul_server_1 consul operator raft list-peers
Node             ID                                    Address          State     Voter  RaftProtocol
consul_server_3  0b6169d8-7acc-ed24-682f-56ffd12b486c  172.17.0.4:8300  leader    true   3
consul_server_1  0214e0b9-04fe-e8ad-c855-b5091dfc8e2e  172.17.0.2:8300  follower  true   3
consul_server_2  78293668-16a6-1de0-673f-455d594e7447  172.17.0.3:8300  follower  true   3

Configuring join Parameters so Nodes Rejoin Automatically

When a node leaves a Consul cluster gracefully, the remaining nodes keep running normally; the datacenter marks the departed node as left, and if the node rejoins the cluster its status returns to alive. By default, the datacenter retains a departed node's information for 72 hours; if the node has not rejoined by then, its information is removed.
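Rather than waiting out the 72-hour reap window, a dead node can also be removed from the member list immediately with the standard consul force-leave subcommand. A sketch using the container names from this article; the helper is only defined here, not executed:

```shell
# Remove a node from the member list right away instead of waiting
# for the 72-hour reap.
force_remove() {
  docker exec consul_server_1 consul force-leave "$1"
}
# usage: force_remove consul_server_2
```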

A server node leaves the cluster

The following demonstrates a node leaving:

  1. consul_server_2 leaves gracefully:
[root@wuli-centOS7 ~]# docker exec consul_server_2 consul leave
Graceful leave complete
  2. Check the member status:
[root@wuli-centOS7 ~]# docker exec consul_server_1 consul members
Node             Address          Status  Type    Build  Protocol  DC   Segment
consul_server_1  172.17.0.2:8301  alive   server  1.7.2  2         dc1  <all>
consul_server_2  172.17.0.3:8301  left    server  1.7.2  2         dc1  <all>
consul_server_3  172.17.0.4:8301  alive   server  1.7.2  2         dc1  <all>
  3. Check the leader:
[root@wuli-centOS7 ~]# docker exec consul_server_1 consul operator raft list-peers
Node             ID                                    Address          State     Voter  RaftProtocol
consul_server_3  0b6169d8-7acc-ed24-682f-56ffd12b486c  172.17.0.4:8300  leader    true   3
consul_server_1  0214e0b9-04fe-e8ad-c855-b5091dfc8e2e  172.17.0.2:8300  follower  true   3

The logs show no errors and the UI works normally. This shows that bootstrap_expect=3 is only the number of servers expected when the cluster is first created: the cluster will not bootstrap until that many servers are present, but once it has bootstrapped, a server leaving gracefully does not affect the cluster's health. The cluster keeps running, and the departed server's status is marked as "left".

  4. After consul_server_2 is restarted, it does not rejoin the cluster automatically, because the configuration file has no start_join or retry_join parameters. It must be re-added with the consul join command, issued either from consul_server_2 or from any other server in the cluster:
[root@wuli-centOS7 ~]# docker exec consul_server_2 consul join 172.17.0.2
Successfully joined cluster by contacting 1 nodes.

Or:

[root@wuli-centOS7 ~]# docker exec consul_server_1 consul join 172.17.0.3
Successfully joined cluster by contacting 1 nodes.

Nodes joining the cluster automatically

Add the start_join and retry_join parameters to every server's configuration file so that nodes rejoin the cluster automatically after a restart.

  1. Check each server's IP:
[root@wuli-centOS7 server4]# docker inspect --format '{{ .NetworkSettings.IPAddress }}' consul_server_1
172.17.0.2
[root@wuli-centOS7 server4]# docker inspect --format '{{ .NetworkSettings.IPAddress }}' consul_server_2
172.17.0.3
[root@wuli-centOS7 server4]# docker inspect --format '{{ .NetworkSettings.IPAddress }}' consul_server_3
172.17.0.4
  2. Edit each server's configuration file accordingly (server1 shown here):
[root@wuli-centOS7 ~]# vim /data/consul/server1/config/config.json
{
    "datacenter": "dc1",
    "bootstrap_expect": 3,
    "data_dir": "/consul/data",
    "log_file": "/consul/log/",
    "log_level": "INFO",
    "node_name": "consul_server_1",
    "client_addr": "0.0.0.0",
    "server": true,
    "ui": true,
    "enable_script_checks": true,
    "addresses": {
        "https": "0.0.0.0",
        "dns": "0.0.0.0"
    },
    "start_join":[
        "172.17.0.2",
        "172.17.0.3",
        "172.17.0.4"
    ],
    "retry_join":[
        "172.17.0.2",
        "172.17.0.3",
        "172.17.0.4"
    ]
}
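Editing three nearly identical files by hand invites exactly the kind of copy-paste slip where a field like node_name ends up wrong. A hedged sketch that generates a minimal config.json per server from one template; the base path is a local stand-in for /data/consul, and only the fields relevant to joining are included:

```shell
# Generate a minimal config.json for each server; node_name is derived
# from the loop index so it always matches the directory.
BASE="${BASE:-./consul-demo}"
for i in 1 2 3; do
  mkdir -p "$BASE/server$i/config"
  cat > "$BASE/server$i/config/config.json" <<EOF
{
    "datacenter": "dc1",
    "bootstrap_expect": 3,
    "data_dir": "/consul/data",
    "node_name": "consul_server_$i",
    "server": true,
    "start_join": ["172.17.0.2", "172.17.0.3", "172.17.0.4"],
    "retry_join": ["172.17.0.2", "172.17.0.3", "172.17.0.4"]
}
EOF
done
grep '"node_name"' "$BASE"/server*/config/config.json
```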
  3. Reload the configuration to verify it is correct:
[root@wuli-centOS7 ~]# docker exec consul_server_1 consul reload   
Configuration reload triggered
  4. With this configuration in place, whenever a Consul server restarts, or leaves gracefully and starts again, it rejoins the cluster automatically, with no need for the join command.

Adding a Gossip Encryption Key Between Nodes

Adding an encryption key prevents unauthorized nodes from joining the cluster. The steps are:

  1. Generate an encryption key with the consul keygen command:
[root@wuli-centOS7 ~]# docker exec consul_server_1 consul keygen
zVCGFgICqf5MAU61Wd/1wDP1hoQ37rQQFVvMkhzpM1c=
  2. Write the key into the configuration files of all three servers (server1 shown here):
[root@wuli-centOS7 ~]# vim /data/consul/server1/config/config.json
{   
    "datacenter": "dc1",
    "bootstrap_expect": 3,
    "data_dir": "/consul/data",
    "log_file": "/consul/log/",
    "log_level": "INFO",
    "node_name": "consul_server_1",
    "client_addr": "0.0.0.0",
    "server": true,
    "ui": true,
    "enable_script_checks": true,
    "addresses": {
        "https": "0.0.0.0",
        "dns": "0.0.0.0"
    },
    "encrypt": "zVCGFgICqf5MAU61Wd/1wDP1hoQ37rQQFVvMkhzpM1c="
}
  3. Have Consul reload its configuration:
[root@wuli-centOS7 ~]# docker exec consul_server_1 consul reload
Configuration reload triggered
  4. In testing, other nodes could still join the cluster after only a reload; Consul has to be restarted for the key to take effect:
[root@wuli-centOS7 ~]# docker restart consul_server_1
consul_server_1

[root@wuli-centOS7 ~]# docker restart consul_server_2
consul_server_2

[root@wuli-centOS7 ~]# docker restart consul_server_3
consul_server_3

From now on, any new node that wants to join the cluster must supply the encrypt key.
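Since Consul merges every configuration file it finds in the config directory (as noted later in this article, an agent can load multiple config files), the key can also live in its own fragment and be distributed in one loop. A sketch; the base path is a local stand-in for /data/consul, and the key is the one generated above, not a real secret:

```shell
# Distribute the shared gossip key as a standalone config fragment;
# Consul merges every file in the config directory at startup.
BASE="${BASE:-./consul-demo}"
KEY="zVCGFgICqf5MAU61Wd/1wDP1hoQ37rQQFVvMkhzpM1c="
for i in 1 2 3; do
  mkdir -p "$BASE/server$i/config"
  printf '{\n    "encrypt": "%s"\n}\n' "$KEY" > "$BASE/server$i/config/encrypt.json"
done
cat "$BASE/server1/config/encrypt.json"
```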

  5. Create another node, server4, and try to join the keyed cluster:
    5.1 Create its configuration file:
[root@wuli-centOS7 ~]# vim /data/consul/server4/config/config.json
{
    "datacenter": "dc1",
    "bootstrap_expect": 1,
    "data_dir": "/consul/data",
    "log_file": "/consul/log/",
    "log_level": "INFO",
    "node_name": "consul_server_4",
    "client_addr": "0.0.0.0",
    "server": true,
    "ui": true,
    "enable_script_checks": true,
    "addresses": {
        "https": "0.0.0.0",
        "dns": "0.0.0.0"
    }
} 

5.2 Start server4:

[root@wuli-centOS7 ~]# docker run -d -p 8540:8500 -v /data/consul/server4/data:/consul/data -v /data/consul/server4/config:/consul/config -e CONSUL_BIND_INTERFACE='eth0' --privileged=true --name=consul_server_4 consul agent -data-dir=/consul/data;
cdf6edee727f92baf2f7d324cb9522644c851f81fe62356fb2fb9aad126eaf13

5.3 Try to join the cluster; it fails:

[root@wuli-centOS7 ~]# docker exec consul_server_4 consul join 172.17.0.2         
Error joining address '172.17.0.2': Unexpected response code: 500 (1 error occurred:
        * Failed to join 172.17.0.2: Remote state is encrypted and encryption is not configured

)
Failed to join any nodes.

5.4 Add the encryption key to server4's configuration file:

[root@wuli-centOS7 ~]# vim /data/consul/server4/config/config.json
{
    "datacenter": "dc1",
    "bootstrap_expect": 1,
    "data_dir": "/consul/data",
    "log_file": "/consul/log/",
    "log_level": "INFO",
    "node_name": "consul_server_4",
    "client_addr": "0.0.0.0",
    "server": true,
    "ui": true,
    "enable_script_checks": true,
    "addresses": {
        "https": "0.0.0.0",
        "dns": "0.0.0.0"
    },
    "encrypt": "zVCGFgICqf5MAU61Wd/1wDP1hoQ37rQQFVvMkhzpM1c="
} 

5.5 Reload the configuration and try to join again. It still fails, which shows that adding the key requires a Consul restart to take effect:

[root@wuli-centOS7 ~]# docker exec consul_server_4 consul reload
Configuration reload triggered

[root@wuli-centOS7 ~]# docker exec consul_server_4 consul join 172.17.0.2
Error joining address '172.17.0.2': Unexpected response code: 500 (1 error occurred:
        * Failed to join 172.17.0.2: Remote state is encrypted and encryption is not configured

)
Failed to join any nodes.

5.6 Restart server4; joining the cluster now succeeds!

[root@wuli-centOS7 server4]# docker restart consul_server_4
consul_server_4
[root@wuli-centOS7 server4]# docker exec consul_server_4 consul join 172.17.0.2
Successfully joined cluster by contacting 1 nodes.
[root@wuli-centOS7 server4]# docker exec consul_server_1 consul members
Node             Address          Status  Type    Build  Protocol  DC   Segment
consul_server_1  172.17.0.2:8301  alive   server  1.7.2  2         dc1  <all>
consul_server_2  172.17.0.3:8301  alive   server  1.7.2  2         dc1  <all>
consul_server_3  172.17.0.4:8301  alive   server  1.7.2  2         dc1  <all>
consul_server_4  172.17.0.5:8301  alive   server  1.7.2  2         dc1  <all>
[root@wuli-centOS7 server4]# docker exec consul_server_1 consul operator raft list-peers
Node             ID                                    Address          State     Voter  RaftProtocol
consul_server_3  0b6169d8-7acc-ed24-682f-56ffd12b486c  172.17.0.4:8300  leader    true   3
consul_server_2  78293668-16a6-1de0-673f-455d594e7447  172.17.0.3:8300  follower  true   3
consul_server_1  0214e0b9-04fe-e8ad-c855-b5091dfc8e2e  172.17.0.2:8300  follower  true   3
consul_server_4  587e81c7-f1b5-5f19-311c-741c06ca446d  172.17.0.5:8300  follower  false  3

Adding ACL Token Configuration

First configure the master token. The master token can be any value you choose, but for consistency with the format of other tokens, the official docs recommend using a UUID. Consul can load multiple configuration files, with either a .json or .hcl suffix; .hcl is used in the demonstration here.

Enabling ACLs and configuring the master token

Consul's ACL feature must be enabled explicitly by setting the parameter acl.enabled=true in the configuration file.
Two methods of configuring ACLs are demonstrated below; personally I recommend method two.
Method one: set acl.enabled=true, generate a token with the consul acl bootstrap command, and then use that token as the master token.

  1. Add the configuration file acl.hcl:
[root@wuli-centOS7 ~]# vim /data/consul/server1/config/acl.hcl
primary_datacenter = "dc1"
acl {
  enabled = true
  default_policy = "deny"
  enable_token_persistence = true
  tokens { 
  }
}
  2. Reload the configuration and create the initial token; the generated SecretID is the token:
[root@wuli-centOS7 ~]# docker exec consul_server_1 consul reload 
Configuration reload triggered

[root@wuli-centOS7 ~]# consul acl bootstrap
AccessorID:       b6676320-dbef-4020-ae69-8a47ae13dcef
SecretID:         474dcea7-ee4d-3f11-1af1-a38eb37d3f5d
Description:      Bootstrap Token (Global Management)
Local:            false
Create Time:      2020-04-28 11:30:42.2992871 +0800 CST
Policies:
   00000000-0000-0000-0000-000000000001 - global-management
  3. Edit acl.hcl and add the master token:
[root@wuli-centOS7 ~]# vim /data/consul/server1/config/acl.hcl
primary_datacenter = "dc1"
acl {
  enabled = true
  default_policy = "deny"
  enable_token_persistence = true
  tokens { 
    master = "474dcea7-ee4d-3f11-1af1-a38eb37d3f5d"
  }
}
  4. Restart the service and verify.

Method two:

  1. Generate a UUID with the Linux uuidgen command to use as the master token:
[root@wuli-centOS7 ~]# uuidgen
dcb93655-0661-4ea1-bfc4-e5744317f99e
  2. Write the acl.hcl file:
[root@wuli-centOS7 ~]# vim /data/consul/server1/config/acl.hcl
primary_datacenter = "dc1"
acl {
  enabled = true
  default_policy = "deny"
  enable_token_persistence = true
  tokens {
    master = "dcb93655-0661-4ea1-bfc4-e5744317f99e"
  }
}
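Method two can be scripted end to end: generate the UUID once and write the same acl.hcl for every server in one loop. A sketch; the base path is a local stand-in for /data/consul, and the fallback UUID source is Linux-specific:

```shell
# Generate a master token and write an identical acl.hcl per server.
BASE="${BASE:-./consul-demo}"
TOKEN="$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)"
for i in 1 2 3; do
  mkdir -p "$BASE/server$i/config"
  cat > "$BASE/server$i/config/acl.hcl" <<EOF
primary_datacenter = "dc1"
acl {
  enabled = true
  default_policy = "deny"
  enable_token_persistence = true
  tokens {
    master = "$TOKEN"
  }
}
EOF
done
echo "master token: $TOKEN"
```

Writing the same file everywhere up front also saves the per-server copy step that the agent-token section below performs by hand.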

Also edit config.json and change bootstrap_expect to 1.

  3. Reload the configuration to verify it is correct:

[root@wuli-centOS7 ~]# docker exec consul_server_1 consul reload 
Configuration reload triggered
  4. Gracefully shut down the other servers, then restart the consul_server_1 container:
[root@wuli-centOS7 ~]# docker exec consul_server_2 consul leave
[root@wuli-centOS7 ~]# docker exec consul_server_3 consul leave
[root@wuli-centOS7 ~]# docker exec consul_server_4 consul leave

[root@wuli-centOS7 ~]# docker restart consul_server_1
  5. Open the UI; it now prompts for a token. Enter the master token from above.

  6. At this point, most consul commands must include the token, otherwise they fail:

[root@wuli-centOS7 ~]# docker exec consul_server_1 consul info
Error querying agent: Unexpected response code: 403 (Permission denied)

With the token parameter:

[root@wuli-centOS7 ~]# docker exec consul_server_1 consul info -token=dcb93655-0661-4ea1-bfc4-e5744317f99e   
agent:
        check_monitors = 0
        check_ttls = 0
        checks = 0
        services = 0
……
  7. For Consul instances not started in Docker, you can set the CONSUL_HTTP_TOKEN environment variable instead of appending the token parameter to every command:
[root@wuli-centOS7 ~]# vi /etc/profile
export CONSUL_HTTP_TOKEN=dcb93655-0661-4ea1-bfc4-e5744317f99e

[root@wuli-centOS7 ~]# source /etc/profile

For a Consul container already running in Docker, I have not found a way to change an environment variable permanently; you can, however, rm the old container and pass the environment variable when you run a new one. That is not demonstrated here.

To view all environment variables of the consul_server_1 container:

[root@wuli-centOS7 ~]# docker exec consul_server_1 env  
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=2c2c68c4058f
CONSUL_BIND_INTERFACE=eth0
CONSUL_VERSION=1.7.2
HASHICORP_RELEASES=https://releases.hashicorp.com
HOME=/root
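Because the config and data directories are bind-mounted on the host, removing and recreating consul_server_1 with the token preset is safe. A sketch that only composes the docker run command for review rather than executing it; the paths and token are the ones used in this article:

```shell
# Compose the docker run command that recreates consul_server_1 with
# the ACL token preset via CONSUL_HTTP_TOKEN; echoed for review.
TOKEN="dcb93655-0661-4ea1-bfc4-e5744317f99e"
RUN_CMD="docker run -d -p 8510:8500 \
  -v /data/consul/server1/data:/consul/data \
  -v /data/consul/server1/config:/consul/config \
  -e CONSUL_BIND_INTERFACE=eth0 \
  -e CONSUL_HTTP_TOKEN=$TOKEN \
  --privileged --name=consul_server_1 \
  consul agent -data-dir=/consul/data"
echo "$RUN_CMD"
```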


Configuring the agent token

Every agent in the cluster needs an agent token.
Without an agent token configured, the logs show the following warning:

[root@wuli-centOS7 ~]# docker exec consul_server_1 consul monitor -token=dcb93655-0661-4ea1-bfc4-e5744317f99e   

2020-05-06T02:40:09.386Z [WARN]  agent: Coordinate update blocked by ACLs: accessorID=
  1. Use the API to generate a full-permission token to serve as the agent token; in practice, scope the agent token's permissions to what is actually needed:
[root@wuli-centOS7 ~]# curl -X PUT \
  http://localhost:8510/v1/acl/create \
  -H 'X-Consul-Token: dcb93655-0661-4ea1-bfc4-e5744317f99e' \
  -d '{"Name": "dc1","Type": "management"}'
 
{"ID":"7f587432-3650-9073-e3f4-445a2463b11f"}
  2. Write the generated token into acl.hcl:
[root@wuli-centOS7 ~]# vim /data/consul/server1/config/acl.hcl
primary_datacenter = "dc1"
acl {
  enabled = true
  default_policy = "deny"
  enable_token_persistence = true
  tokens {
    master = "dcb93655-0661-4ea1-bfc4-e5744317f99e"
    agent = "7f587432-3650-9073-e3f4-445a2463b11f"
  }
}
  3. Reload the configuration; no Consul or container restart is needed here:
[root@wuli-centOS7 ~]# docker exec consul_server_1 consul reload -token=dcb93655-0661-4ea1-bfc4-e5744317f99e
Configuration reload triggered
  4. Check the logs; after the reload, no more warnings are reported:
[root@wuli-centOS7 ~]# docker exec consul_server_1 consul monitor -token=dcb93655-0661-4ea1-bfc4-e5744317f99e

2020-05-06T03:00:20.034Z [INFO]  agent: Caught: signal=hangup
2020-05-06T03:00:20.034Z [INFO]  agent: Reloading configuration...
  5. The ACL configuration is identical across the cluster, so simply copy server1's acl.hcl into the config directories of the other servers:
[root@wuli-centOS7 config]# cp /data/consul/server1/config/acl.hcl /data/consul/server2/config/
[root@wuli-centOS7 config]# cp /data/consul/server1/config/acl.hcl /data/consul/server3/config/
  6. Restart the other containers and check the cluster:
[root@wuli-centOS7 config]# docker restart consul_server_2
consul_server_2

[root@wuli-centOS7 config]# docker restart consul_server_3
consul_server_3

[root@wuli-centOS7 config]# docker exec consul_server_1 consul members -token=dcb93655-0661-4ea1-bfc4-e5744317f99e
Node             Address          Status  Type    Build  Protocol  DC   Segment
consul_server_1  172.17.0.2:8301  alive   server  1.7.2  2         dc1  <all>
consul_server_2  172.17.0.3:8301  alive   server  1.7.2  2         dc1  <all>
consul_server_3  172.17.0.4:8301  alive   server  1.7.2  2         dc1  <all>

[root@wuli-centOS7 config]# docker exec consul_server_1 consul operator raft list-peers -token=dcb93655-0661-4ea1-bfc4-e5744317f99e                 
Node             ID                                    Address          State     Voter  RaftProtocol
consul_server_1  0214e0b9-04fe-e8ad-c855-b5091dfc8e2e  172.17.0.2:8300  leader    true   3
consul_server_2  78293668-16a6-1de0-673f-455d594e7447  172.17.0.3:8300  follower  true   3
consul_server_3  0b6169d8-7acc-ed24-682f-56ffd12b486c  172.17.0.4:8300  follower  true   3

Because the join parameters were configured earlier, the nodes rejoin the cluster automatically.
7. Open the UI of each server; the token information shown is identical on every one.

With that, the Consul cluster setup and ACL token configuration are complete!
