As the most popular container technology today, Docker still has shortcomings in network implementation and management. Early versions of Docker relied on a Linux bridge for network setup and could create only a single network interface inside each container. As the docker/libnetwork project progressed, Docker's network-management features gradually expanded. Even so, cross-host communication remains a problem Docker has to face, and it is especially important for container-orchestration systems such as Kubernetes. The mainstream solutions currently available include Flannel, Weave, Pipework, and Open vSwitch.
Open vSwitch is relatively simple to deploy, mature, and powerful, which makes it a good fit for wiring Docker's underlying networks together across hosts. Without further ado, the basic architecture is shown in the diagram below:
The concrete implementation steps are as follows:
1. Install docker-engine, bridge-utils, and openvswitch on both hosts. (On CentOS the openvswitch package is usually not in the base repositories; if yum cannot find it, it may need to come from a third-party repository such as RDO.)
# yum -y install docker-engine bridge-utils openvswitch
2. On host 1, edit /usr/lib/systemd/system/docker.service and add the Docker startup option "--bip=10.0.0.1/24", as follows:
[Service]
Type=notify
ExecStart=/usr/bin/docker daemon -H fd:// --bip=10.0.0.1/24
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
3. On host 2, edit /usr/lib/systemd/system/docker.service and add the Docker startup option "--bip=10.0.1.1/24", as follows:
[Service]
Type=notify
ExecStart=/usr/bin/docker daemon -H fd:// --bip=10.0.1.1/24
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
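On newer Docker releases the `docker daemon` invocation shown above has been replaced by the standalone `dockerd` binary, and the bridge address can be set declaratively rather than on the command line. A minimal sketch of the equivalent configuration via `/etc/docker/daemon.json` (shown for host 2; use 10.0.0.1/24 on host 1):

```shell
# /etc/docker/daemon.json -- equivalent to passing --bip on the command line.
cat > /etc/docker/daemon.json <<'EOF'
{
  "bip": "10.0.1.1/24"
}
EOF

# Reload unit files and restart Docker so the new bridge IP takes effect.
systemctl daemon-reload
systemctl restart docker
```

Keeping the setting in daemon.json avoids editing the packaged unit file, which would otherwise be overwritten on upgrade.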
4. Start the Docker service on both hosts
# systemctl start docker
# systemctl enable docker
5. After the Docker service starts, you can see that a new bridge, docker0, has been created and assigned the IP address we configured earlier:
[root@minion1 ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.02429e9a9aae	no		br-tun
[root@minion1 ~]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.0.0.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::42:9eff:fe9a:9aae prefixlen 64 scopeid 0x20<link>
ether 02:42:9e:9a:9a:ae txqueuelen 0 (Ethernet)
RX packets 34 bytes 2199 (2.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 18 bytes 1438 (1.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@minion2 ~]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.0.1.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::42:d1ff:fe0a:33d3 prefixlen 64 scopeid 0x20<link>
ether 02:42:d1:0a:33:d3 txqueuelen 0 (Ethernet)
RX packets 35 bytes 2363 (2.3 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 25 bytes 2041 (1.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
6. Start Open vSwitch on both hosts
# systemctl start openvswitch
# systemctl enable openvswitch
7. On both hosts, create the tunnel bridge br-tun and link the hosts with a VXLAN tunnel (192.168.100.32 and 192.168.100.33 are the physical IPs of minion1 and minion2, respectively)
[root@minion1 ~]# ovs-vsctl add-br br-tun
[root@minion1 ~]# ovs-vsctl add-port br-tun vxlan0 -- set Interface vxlan0 type=vxlan options:remote_ip=192.168.100.33
[root@minion2 ~]# ovs-vsctl add-br br-tun
[root@minion2 ~]# ovs-vsctl add-port br-tun vxlan0 -- set Interface vxlan0 type=vxlan options:remote_ip=192.168.100.32
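Before moving on, it is worth confirming that the tunnel port was actually created on each host. A quick sanity check (shown for minion1; the output on minion2 should reference the other peer's IP):

```shell
# Dump the OVS configuration: br-tun should list a port named vxlan0
# whose interface has type "vxlan" and the peer's IP in its options.
ovs-vsctl show

# Inspect vxlan0 directly; on minion1 remote_ip should be 192.168.100.33.
ovs-vsctl get Interface vxlan0 options
```

If vxlan0 shows an error state here, check that UDP traffic for VXLAN (port 4789 by default) is not blocked between the two hosts.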
8. Attach br-tun to the docker0 bridge as a port (the OVS bridge's internal interface appears to Linux as an ordinary network device, so brctl can enslave it to docker0)
[root@minion1 ~]# brctl addif docker0 br-tun
[root@minion2 ~]# brctl addif docker0 br-tun
9. Because the containers on the two hosts sit in different subnets, routes must be added before the containers on each side can reach each other
[root@minion1 ~]# ip route add 10.0.1.0/24 via 192.168.100.33 dev eth0
[root@minion2 ~]# ip route add 10.0.0.0/24 via 192.168.100.32 dev eth0
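Note that routes added with `ip route add` do not survive a reboot. On a RHEL/CentOS-style system such as the one used here, one way to persist them (an assumption about your distribution; adjust the path for others) is an interface route file:

```shell
# /etc/sysconfig/network-scripts/route-eth0 on minion1:
# routes listed here are re-installed whenever eth0 comes up.
cat > /etc/sysconfig/network-scripts/route-eth0 <<'EOF'
10.0.1.0/24 via 192.168.100.33 dev eth0
EOF
```

The mirror-image file on minion2 would route 10.0.0.0/24 via 192.168.100.32.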
10. Start a Docker container on each host and verify that the two can communicate with each other.
[root@minion1 ~]# docker run -i -t cirros /bin/sh
/ # ifconfig eth0
eth0 Link encap:Ethernet HWaddr 02:42:0A:00:00:02
inet addr:10.0.0.2 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::42:aff:fe00:2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:5 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:418 (418.0 B) TX bytes:508 (508.0 B)
[root@minion2 ~]# docker run -i -t cirros /bin/sh
/ # ifconfig eth0
eth0 Link encap:Ethernet HWaddr 02:42:0A:00:01:02
inet addr:10.0.1.2 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::42:aff:fe00:102/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:508 (508.0 B) TX bytes:508 (508.0 B)
/ # ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=62 time=0.750 ms
64 bytes from 10.0.0.2: seq=1 ttl=62 time=1.106 ms
64 bytes from 10.0.0.2: seq=2 ttl=62 time=0.528 ms
The ttl=62 confirms the packets are being routed as intended: each host's docker0 gateway decrements the default TTL of 64 by one, so two routed hops remain between the containers.