A Detailed Introduction to Docker's Four Network Modes
This article introduces Docker's four built-in networking modes.
Docker, currently the most popular lightweight container technology, has many features worth praising, such as its image management. It also has its rough edges, however, and networking is one of its weaker areas. It is therefore worth understanding Docker networking in some depth in order to meet more demanding network requirements.
The Four Network Modes
When creating a container with docker run, the --net option selects the container's network mode. Docker offers the following four modes:
· host mode, specified with --net=host.
· container mode, specified with --net=container:name_or_id.
· none mode, specified with --net=none.
· bridge mode, specified with --net=bridge; this is the default.
1. host mode
As is well known, Docker uses Linux namespaces for resource isolation: the PID namespace isolates processes, the mount namespace isolates filesystems, the network namespace isolates networking, and so on. A network namespace provides an independent network environment, with its own interfaces, routes, iptables rules, and the like, isolated from every other network namespace. A Docker container normally gets its own network namespace. When a container is started in host mode, however, it is not given an independent network namespace; instead it shares one with the host. The container does not get a virtual NIC or configure its own IP; it uses the host's IP and ports directly.
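A quick way to see this sharing concretely is to compare namespace identifiers; a minimal sketch, assuming any long-running image (the name host_test and the centos image are just examples):

docker run -itd --name host_test --net=host centos bash
# /proc/<pid>/ns/net is a symlink naming the namespace; identical inodes
# mean the two processes share one network namespace
sudo readlink /proc/1/ns/net
sudo readlink /proc/$(docker inspect -f '{{.State.Pid}}' host_test)/ns/net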
For example, suppose we start a container in host mode on the machine 10.10.101.105/24, running a web application that listens on TCP port 80. Running something like ifconfig inside the container shows the host's network information, and the outside world reaches the application directly at 10.10.101.105:80, with no NAT in between, just as if it were running on the host itself. Other aspects of the container, such as the filesystem and the process list, remain isolated from the host.
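A minimal sketch of this scenario; the nginx image stands in for the web application, and the IP is the example host's:

docker run -d --net=host nginx    # the server binds the host's port 80 directly
curl http://10.10.101.105:80/     # reachable with no -p port mapping and no NAT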
2. container mode
Once host mode is understood, this mode is easy. It makes a newly created container share a network namespace with an existing container, rather than with the host. The new container does not create its own NIC or configure its own IP; it shares the specified container's IP, port range, and so on. In every other respect, such as the filesystem and the process list, the two containers remain isolated. Processes in the two containers can communicate through the lo loopback device.
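A minimal sketch of that loopback communication; the name backend and the nginx/centos images are illustrative:

docker run -d --name backend nginx    # listens on port 80 in its own namespace
docker run -it --rm --net=container:backend centos bash
# inside the second container, the first container's service is local:
curl http://127.0.0.1:80/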
3. none mode
This mode differs from the previous two. In none mode the container does get its own network namespace, but Docker performs no network configuration for it whatsoever: the container has no NIC, no IP, and no routes. We have to add an interface and configure an IP for it ourselves.
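One common way to wire such a container up by hand is with a veth pair; a sketch, assuming a container started with --net=none (the name mynet0, the addresses, and the interface names are made up for illustration):

pid=$(docker inspect -f '{{.State.Pid}}' mynet0)
sudo mkdir -p /var/run/netns
sudo ln -s /proc/$pid/ns/net /var/run/netns/$pid    # make the namespace visible to ip netns

sudo ip link add veth-host type veth peer name veth-cont
sudo ip link set veth-cont netns $pid               # move one end into the container
sudo ip netns exec $pid ip addr add 172.18.0.2/24 dev veth-cont
sudo ip netns exec $pid ip link set veth-cont up
sudo ip link set veth-host up                       # host end: attach to a bridge, add routes, etc.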
4. bridge mode
bridge mode is Docker's default network setting. It assigns each container a network namespace, sets up an IP, and attaches all of the host's containers to a virtual bridge. This mode is covered in more detail below.
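The bridge itself can be examined on the host; a sketch (docker network inspect requires a newer Docker release, and brctl comes from the bridge-utils package, so both are assumptions about your setup):

docker network inspect bridge    # subnet, gateway, and attached containers
brctl show docker0               # the veth* interfaces enslaved to the bridge
ip link show master docker0      # iproute2 equivalent of the above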
host mode
Specified with --net=host on docker run.
The container uses the same network as the host; the NIC and IP seen inside the container are the host's.
[root@localhost ~]# docker run -it --rm --net=host centos_with_net bash
--rm: automatically remove the container when it exits.
[root@localhost /]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.42.1  netmask 255.255.0.0  broadcast 0.0.0.0
        inet6 fe80::8cfc:c7ff:fe49:f1ae  prefixlen 64  scopeid 0x20<link>
        ether 4e:90:a4:b6:91:91  txqueuelen 0  (Ethernet)
        RX packets 58  bytes 3820 (3.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6  bytes 468 (468.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.179  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::20c:29ff:fedb:b228  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:db:b2:28  txqueuelen 1000  (Ethernet)
        RX packets 10562  bytes 868003 (847.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2985  bytes 390673 (381.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 16  bytes 960 (960.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16  bytes 960 (960.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth5446780: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::c0f4:f5ff:fe71:f3bd  prefixlen 64  scopeid 0x20<link>
        ether c2:f4:f5:71:f3:bd  txqueuelen 0  (Ethernet)
        RX packets 7  bytes 558 (558.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 49  bytes 3894 (3.8 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth111b1ca: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::4c90:a4ff:feb6:9191  prefixlen 64  scopeid 0x20<link>
        ether 4e:90:a4:b6:91:91  txqueuelen 0  (Ethernet)
        RX packets 7  bytes 558 (558.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13  bytes 1026 (1.0 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth55dbbb2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::c84d:9ff:fecd:da27  prefixlen 64  scopeid 0x20<link>
        ether ca:4d:09:cd:da:27  txqueuelen 0  (Ethernet)
        RX packets 7  bytes 558 (558.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 42  bytes 3336 (3.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth5e2dff4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::9465:1bff:fed2:f75d  prefixlen 64  scopeid 0x20<link>
        ether 96:65:1b:d2:f7:5d  txqueuelen 0  (Ethernet)
        RX packets 7  bytes 558 (558.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20  bytes 1584 (1.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth628d605: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::5cc8:ebff:fedb:ea69  prefixlen 64  scopeid 0x20<link>
        ether 5e:c8:eb:db:ea:69  txqueuelen 0  (Ethernet)
        RX packets 7  bytes 558 (558.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6  bytes 468 (468.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth991629e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::b464:e5ff:fed5:1bd6  prefixlen 64  scopeid 0x20<link>
        ether b6:64:e5:d5:1b:d6  txqueuelen 0  (Ethernet)
        RX packets 7  bytes 558 (558.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 27  bytes 2142 (2.0 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vethb086b1c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::dcdf:66ff:fed8:f2df  prefixlen 64  scopeid 0x20<link>
        ether de:df:66:d8:f2:df  txqueuelen 0  (Ethernet)
        RX packets 8  bytes 636 (636.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 34  bytes 2700 (2.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@localhost /]# exit
exit
Compare this with the host's IP information:
[root@localhost ~]# ifconfig
docker0   Link encap:Ethernet  HWaddr 4e:90:a4:b6:91:91
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::8cfc:c7ff:fe49:f1ae/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:58 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3820 (3.7 KiB)  TX bytes:468 (468.0 B)

eth0      Link encap:Ethernet  HWaddr 00:0c:29:db:b2:28
          inet addr:192.168.1.179  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fedb:b228/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:10661 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3012 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:876797 (856.2 KiB)  TX bytes:398049 (388.7 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:960 (960.0 B)  TX bytes:960 (960.0 B)

veth5e2dff4 Link encap:Ethernet  HWaddr 96:65:1b:d2:f7:5d
          inet6 addr: fe80::9465:1bff:fed2:f75d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:558 (558.0 B)  TX bytes:1584 (1.5 KiB)

vethb086b1c Link encap:Ethernet  HWaddr de:df:66:d8:f2:df
          inet6 addr: fe80::dcdf:66ff:fed8:f2df/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:34 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:636 (636.0 B)  TX bytes:2700 (2.6 KiB)

veth55dbbb2 Link encap:Ethernet  HWaddr ca:4d:09:cd:da:27
          inet6 addr: fe80::c84d:9ff:fecd:da27/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:42 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:558 (558.0 B)  TX bytes:3336 (3.2 KiB)

veth111b1ca Link encap:Ethernet  HWaddr 4e:90:a4:b6:91:91
          inet6 addr: fe80::4c90:a4ff:feb6:9191/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:558 (558.0 B)  TX bytes:1026 (1.0 KiB)

veth628d605 Link encap:Ethernet  HWaddr 5e:c8:eb:db:ea:69
          inet6 addr: fe80::5cc8:ebff:fedb:ea69/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:558 (558.0 B)  TX bytes:468 (468.0 B)

veth991629e Link encap:Ethernet  HWaddr b6:64:e5:d5:1b:d6
          inet6 addr: fe80::b464:e5ff:fed5:1bd6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:558 (558.0 B)  TX bytes:2142 (2.0 KiB)

veth5446780 Link encap:Ethernet  HWaddr c2:f4:f5:71:f3:bd
          inet6 addr: fe80::c0f4:f5ff:fe71:f3bd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:49 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:558 (558.0 B)  TX bytes:3894 (3.8 KiB)
container mode
Specified with --net=container:container_id (or container_name). Containers that share a network namespace this way see the same IP.
[root@localhost ~]# docker ps
CONTAINER ID    IMAGE     COMMAND        CREATED             STATUS              PORTS    NAMES
7169e8be6d3e    centos    "/bin/bash"    About an hour ago   Up About an hour             serene_goldstine
4cd696928bbe    centos    "bash"         About an hour ago   Up About an hour             cent_testv2
4f5bf6f33f2c    centos    "bash"         About an hour ago   Up About an hour             gloomy_colden
0a80861145c9    centos    "bash"         About an hour ago   Up About an hour             mad_carson
fb45150dbc21    centos    "bash"         About an hour ago   Up About an hour             cent_testv
3222c7c5c456    centos    "bash"         2 hours ago         Up 2 hours                   sick_albattani
e136b27a8e17    centos    "bash"         2 hours ago         Up 2 hours                   tender_euclid
[root@localhost ~]# docker exec -it 7169 bash
[root@7169e8be6d3e /]# ifconfig
bash: ifconfig: command not found
[root@7169e8be6d3e /]# yum install -y net-tools
[root@7169e8be6d3e /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.8  netmask 255.255.0.0  broadcast 0.0.0.0
        inet6 fe80::42:acff:fe11:8  prefixlen 64  scopeid 0x20<link>
        ether 02:42:ac:11:00:08  txqueuelen 0  (Ethernet)
        RX packets 5938  bytes 15420209 (14.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4841  bytes 329652 (321.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@7169e8be6d3e /]# exit
exit
[root@localhost ~]# docker run -it --rm --net=container:7169 centos_with_net bash
[root@7169e8be6d3e /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.8  netmask 255.255.0.0  broadcast 0.0.0.0
        inet6 fe80::42:acff:fe11:8  prefixlen 64  scopeid 0x20<link>
        ether 02:42:ac:11:00:08  txqueuelen 0  (Ethernet)
        RX packets 5942  bytes 15420377 (14.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4855  bytes 330480 (322.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
none mode
Specified with --net=none. In this mode no networking is configured at all.
[root@localhost ~]# docker run -it --rm --net=none centos_with_net bash
[root@67d037935636 /]# ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
bridge mode (the default)
Specified with --net=bridge; if no mode is given, this is what you get. It assigns each container an independent network namespace, much like VMware's NAT mode. All containers on one host end up on the same subnet and can communicate with each other.
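A minimal sketch of that container-to-container connectivity, reusing the centos_with_net image from above (the names br1 and br2 are illustrative):

docker run -itd --name br1 centos_with_net bash    # default bridge mode, no --net needed
docker run -itd --name br2 centos_with_net bash
ip1=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' br1)    # e.g. 172.17.0.x
docker exec br2 ping -c 2 "$ip1"                   # reachable across the docker0 bridge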
Thanks for reading. I hope this helps, and thank you for your support of this site!