Docker Network Modes
1. bridge
When the Docker daemon starts, it creates a virtual bridge named docker0 on the host, and the Docker containers on that host attach to it. The virtual bridge behaves like a switch, so all containers on the host are joined into one layer-2 network through it. Each container is assigned an IP address from the docker0 subnet, and docker0's address is set as the container's default gateway.
Docker also creates a veth pair on the host: one end is placed inside the container and named eth0 (the container's NIC), while the other end stays on the host with a name like vethxxx and is attached to the docker0 bridge. You can see this with the brctl show command:
[root@localhost /]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.02420fde8acb no veth10e3061
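On newer distributions brctl (from bridge-utils) is often not installed; the iproute2 tools provide equivalents. A quick sketch, using generic commands rather than output from this particular host:

```shell
# iproute2 equivalents of brctl show; device names will differ per host.
ip link show type bridge   # list bridge devices such as docker0
ip link show type veth     # list host-side veth endpoints
bridge link show           # show which bridge each veth is attached to
```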
bridge is Docker's default network mode: running docker run without --network uses it. When you run docker run -d -p 80:80 nginx, Docker actually installs a DNAT rule in iptables to implement the port forwarding; you can inspect it with iptables -t nat -vnL. Bridge mode is illustrated in the figure below:
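To make the DNAT mechanics concrete, here is a minimal sketch of how a `-p HOST:CONTAINER` publish spec maps onto the rule Docker programs. The container address 172.17.0.2 is an assumed example, and only the simple two-field form of the spec is handled:

```shell
# Parse a "-p HOST:CONTAINER" publish spec (simple two-field form only)
# and print the shape of the iptables DNAT rule Docker installs for it.
spec="80:80"
container_ip="172.17.0.2"        # assumed container address, for illustration
host_port="${spec%%:*}"          # text before the first colon
container_port="${spec##*:}"     # text after the last colon
echo "-A DOCKER -p tcp --dport ${host_port} -j DNAT --to-destination ${container_ip}:${container_port}"
```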
Demo:
First, create a user-defined bridge network:
docker network create -d bridge my-net
[root@localhost docker]# docker network create -d bridge my-net
ee2f3aa673263ba5439c2b9c4d339f6be43a5c665d06741f645c3cc8215baf15
-d specifies the network driver; the common choices are bridge and overlay, where overlay is used in Swarm mode.
Run a container attached to the new network:
docker run -it --rm --name busybox1 --network my-net busybox sh
[root@localhost docker]# docker run -it --rm --name busybox1 --network my-net busybox sh
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
d9cbbca60e5f: Pull complete
Digest: sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c
Status: Downloaded newer image for busybox:latest
/ #
Open another terminal and run a second container on the same network:
docker run -it --rm --name busybox2 --network my-net busybox sh
[root@localhost ~]# docker run -it --rm --name busybox2 --network my-net busybox sh
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:12:00:03
inet addr:172.18.0.3 Bcast:172.18.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:656 (656.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ # ping 172.18.0.2
PING 172.18.0.2 (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.059 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.068 ms
64 bytes from 172.18.0.2: seq=2 ttl=64 time=0.064 ms
64 bytes from 172.18.0.2: seq=3 ttl=64 time=0.065 ms
64 bytes from 172.18.0.2: seq=4 ttl=64 time=0.078 ms
^C
--- 172.18.0.2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.059/0.066/0.078 ms
/ #
From the first container, busybox1, ping busybox2:
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:12:00:02
inet addr:172.18.0.2 Bcast:172.18.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:16 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1312 (1.2 KiB) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ # ping 172.18.0.3
PING 172.18.0.3 (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.202 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.061 ms
64 bytes from 172.18.0.3: seq=2 ttl=64 time=0.315 ms
64 bytes from 172.18.0.3: seq=3 ttl=64 time=0.068 ms
^C
--- 172.18.0.3 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.061/0.161/0.315 ms
/ #
Look at the host's virtual bridges; br-ee2f3aa67326 is my-net:
[root@localhost ~]# ifconfig
br-ee2f3aa67326: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
inet6 fe80::42:60ff:febf:82f2 prefixlen 64 scopeid 0x20<link>
ether 02:42:60:bf:82:f2 txqueuelen 0 (Ethernet)
RX packets 12 bytes 1008 (1008.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 28 bytes 2320 (2.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 fe80::42:fff:fede:8acb prefixlen 64 scopeid 0x20<link>
ether 02:42:0f:de:8a:cb txqueuelen 0 (Ethernet)
RX packets 173 bytes 13867 (13.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 113 bytes 9658 (9.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.110.136 netmask 255.255.255.0 broadcast 192.168.110.255
inet6 fe80::2e65:af6f:2d5d:5acc prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:33:99:ac txqueuelen 1000 (Ethernet)
RX packets 1048441 bytes 907093889 (865.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 613943 bytes 67627865 (64.4 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 214 bytes 20449 (19.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 214 bytes 20449 (19.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth1843878: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::d876:fcff:fea7:5228 prefixlen 64 scopeid 0x20<link>
ether da:76:fc:a7:52:28 txqueuelen 0 (Ethernet)
RX packets 12 bytes 1008 (1008.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 28 bytes 2320 (2.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth10e3061: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::54d5:32ff:fece:c43d prefixlen 64 scopeid 0x20<link>
ether 56:d5:32:ce:c4:3d txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8 bytes 656 (656.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth71126a7: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::5094:6fff:fe2e:5b20 prefixlen 64 scopeid 0x20<link>
ether 52:94:6f:2e:5b:20 txqueuelen 0 (Ethernet)
RX packets 12 bytes 1008 (1008.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 20 bytes 1664 (1.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@localhost ~]#
The addresses are 172.18.x rather than 172.17.x because an nginx container is on the default bridge network, while the newly created busybox containers are on my-net.
docker network ls
[root@localhost docker]# docker network ls
NETWORK ID NAME DRIVER SCOPE
26a939b44652 bridge bridge local
4c49d87c423a host host local
ee2f3aa67326 my-net bridge local
38981edf292a none null local
Inspect a container's network details; here we inspect the nginx container:
docker inspect gracious_roentgen
[root@localhost ~]# brctl show
bridge name bridge id STP enabled interfaces
br-ee2f3aa67326 8000.024260bf82f2 no veth1843878
veth71126a7
docker0 8000.02420fde8acb no veth10e3061
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
498013203fe7 busybox "sh" 18 minutes ago Up 18 minutes busybox2
c24a3737ca3c busybox "sh" 19 minutes ago Up 19 minutes busybox1
53450637c41a nginx:latest "nginx -g 'daemon of…" About an hour ago Up About an hour 0.0.0.0:80->80/tcp gracious_roentgen
[root@localhost ~]# docker inspect gracious_roentgen
[
{
...
"NetworkSettings": {
...
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "26a939b44652765b61581ef9de0859c5861788c49bcf4fec4ece68215cd95073",
"EndpointID": "816bfdb505ca0bb720440d608a4c36acdb1731accf7bc1c8a98d6d52f69096ac",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
}
}
}
}
]
[root@localhost ~]#
Delete the user-created network:
docker network rm ee2f3aa67326
[root@localhost docker]# docker network rm ee2f3aa67326
ee2f3aa67326
2. host
If a container is started in host mode, it does not get its own Network Namespace; instead it shares the host's. The container does not virtualize its own NIC or configure its own IP, but uses the host's IP and ports directly. Other aspects of the container, such as the filesystem and the process list, remain isolated from the host. Host mode is illustrated in the figure below:
Run a container in host mode; its network stack is identical to the host's:
docker run -it --rm --name busybox1 --net=host busybox sh
[root@localhost ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@localhost ~]# docker run -it --rm --name busybox1 --net=host busybox sh
/ # ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:0F:DE:8A:CB
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:fff:fede:8acb/64 Scope:Link
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:173 errors:0 dropped:0 overruns:0 frame:0
TX packets:113 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:13867 (13.5 KiB) TX bytes:9658 (9.4 KiB)
ens33 Link encap:Ethernet HWaddr 00:0C:29:33:99:AC
inet addr:192.168.110.136 Bcast:192.168.110.255 Mask:255.255.255.0
inet6 addr: fe80::2e65:af6f:2d5d:5acc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1060021 errors:0 dropped:0 overruns:0 frame:0
TX packets:630427 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:908344370 (866.2 MiB) TX bytes:69546957 (66.3 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:214 errors:0 dropped:0 overruns:0 frame:0
TX packets:214 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:20449 (19.9 KiB) TX bytes:20449 (19.9 KiB)
/ #
3. container
In this mode a newly created container shares a Network Namespace with an existing container, rather than with the host. The new container does not create its own NIC or configure its own IP; it shares the specified container's IP, port range, and so on. As before, everything other than the network, such as the filesystem and the process list, remains isolated between the two containers. Processes in the two containers can communicate with each other through the lo device. Container mode is illustrated in the figure below:
Run a container named busybox1:
docker run -it --rm --name busybox1 busybox sh
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@localhost ~]# docker run -it --rm --name busybox1 busybox sh
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:656 (656.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ #
Run busybox2 using busybox1's network:
docker run -it --rm --name busybox2 --net=container:busybox1 busybox sh
[root@localhost ~]# docker run -it --rm --name busybox2 --net=container:busybox1 busybox sh
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:656 (656.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ #
While the containers are still running, check the host's interfaces. Note that only one veth interface (veth0ebcc94) appears even though two containers are running, because they share a single network namespace:
[root@localhost ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 fe80::42:fff:fede:8acb prefixlen 64 scopeid 0x20<link>
ether 02:42:0f:de:8a:cb txqueuelen 0 (Ethernet)
RX packets 173 bytes 13867 (13.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 113 bytes 9658 (9.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.110.136 netmask 255.255.255.0 broadcast 192.168.110.255
inet6 fe80::2e65:af6f:2d5d:5acc prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:33:99:ac txqueuelen 1000 (Ethernet)
RX packets 1063057 bytes 908801394 (866.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 633428 bytes 69913689 (66.6 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 214 bytes 20449 (19.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 214 bytes 20449 (19.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth0ebcc94: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::b459:3aff:fe1d:f652 prefixlen 64 scopeid 0x20<link>
ether b6:59:3a:1d:f6:52 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8 bytes 656 (656.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
4. none
In none mode the container gets its own Network Namespace, but Docker performs no network configuration at all: the container has no NIC, IP address, or routes. Adding a NIC and configuring an IP is left entirely to you. None mode is illustrated in the figure below:
Run a container with the none network:
docker run -it --rm --name busybox --net=none busybox sh
[root@localhost ~]# docker run -it --rm --name busybox --net=none busybox sh
/ # ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ #