
Docker (7: Container Networking, Part 4) — flannel network (Method 1): single-node etcd via yum, flannel via yum


1. Overview

flannel is a container networking solution developed by CoreOS. flannel assigns each host a subnet, and containers draw their IPs from that subnet. These IPs are routable between hosts, so containers can communicate across hosts without NAT or port mapping.

Each subnet is carved out of a larger IP pool. flannel runs an agent called flanneld on every host, whose job is to allocate a subnet from that pool. To share state across hosts, flannel stores the network configuration, the allocated subnets, the host IPs, and so on in etcd (a distributed key-value store similar to consul).
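You can see this state sharing for yourself once the cluster below is running, by reading flannel's lease records straight out of etcd. A minimal sketch, assuming the default /atomic.io/network prefix used later in this article and the etcdctl v2 commands the yum packages ship with (the exact subnet keys depend on what flanneld was assigned):

[root@etcd ~]# etcdctl ls /atomic.io/network/subnets          # one key per host lease
/atomic.io/network/subnets/172.18.89.0-24
/atomic.io/network/subnets/172.18.92.0-24
[root@etcd ~]# etcdctl get /atomic.io/network/subnets/172.18.89.0-24
{"PublicIP":"192.168.1.122"}                                  # the host that owns this subnet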

How packets are forwarded between hosts is the job of a backend. flannel offers several backends; the most commonly used are vxlan and host-gw. For the others, see https://github.com/coreos/flannel

The flannel workflow is shown in the figure below. (The default inter-node transport is UDP forwarding; flannel uses port 8285 by default for UDP-encapsulated packets, while vxlan uses port 8472.)

[Figure: flannel working-principle flowchart]

A brief walkthrough of the figure (how flannel works):
-> A packet leaving the source container is forwarded by the host's docker0 virtual bridge to the flannel0 virtual NIC. flannel0 is a point-to-point (P2P) virtual device, and the flanneld service listens on its other end.
-> Through etcd, flannel maintains an inter-node routing table recording the subnet that belongs to each node.
-> The flanneld service on the source host UDP-encapsulates the original payload and, based on its routing table, delivers it to the flanneld service on the destination node. There it is unpacked, enters the destination node's flannel0 device, and is forwarded on to that host's docker0 bridge, which finally routes it to the target container exactly as it would for local container traffic.

That completes the packet's journey. Three questions deserve an answer:
1) What is the UDP encapsulation?
The payload of the UDP datagram is simply another packet, for example an ICMP echo (a ping). The original packet is UDP-encapsulated by the flannel service on the source node and restored to its original form by the flannel service on the destination node; the Docker daemons on both sides are entirely unaware this happens.
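You can watch the encapsulation happen on the wire. A quick sketch, assuming the host NIC is named ens33 as in the environment below (adjust the interface name to yours): while a container pings across hosts, the udp-mode traffic shows up on flannel's default port 8285:

[root@host1 ~]# tcpdump -i ens33 -nn udp port 8285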

2) Why does Docker on each node use a different IP range?
This looks mysterious, but the truth is simple: once flannel has allocated each node its usable IP range via etcd, it quietly rewrites Docker's startup parameters. On a node running flannel you can inspect the Docker daemon's arguments (ps aux|grep docker|grep "bip") and find something like --bip=182.48.25.1/24, which restricts the IP range containers on that node may receive. The range is assigned automatically by flannel, which uses the records kept in etcd to guarantee that ranges never overlap.
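The rewrite is done through an options file rather than magic. As a rough sketch of the mechanism on CentOS 7's yum packages (these paths are what the flannel rpm typically installs; verify on your own system): flanneld writes its lease to /run/flannel/subnet.env, an ExecStartPost hook runs /usr/libexec/flannel/mk-docker-opts.sh to turn that into --bip/--mtu flags, and a systemd drop-in feeds the result to dockerd:

[root@host1 ~]# cat /run/flannel/subnet.env     # written by flanneld
[root@host1 ~]# cat /run/flannel/docker         # DOCKER_NETWORK_OPTIONS=--bip=... --mtu=... (generated by mk-docker-opts.sh)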

3) Why is traffic on the sending node routed from docker0 to flannel0, and on the destination node from flannel0 to docker0?
Suppose a packet travels from a container with IP 172.17.18.2 to a container with IP 172.17.46.2. On the sending node the only matching route is the 172.17.0.0/16 entry, so once the packet leaves docker0 it is handed to flannel0. On the destination node the destination address belongs to a local container, so it necessarily matches the 172.17.46.0/24 entry that corresponds to docker0 and is delivered to the docker0 bridge.
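In route-table form (an illustrative sketch using the addresses from this example, not output captured from the test environment below; longest-prefix match decides which entry wins):

# on the sending node: the /16 via flannel0 is the only match for 172.17.46.2
172.17.0.0/16 dev flannel0
172.17.18.0/24 dev docker0
# on the destination node: the more specific /24 via docker0 wins for 172.17.46.2
172.17.0.0/16 dev flannel0
172.17.46.0/24 dev docker0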

Test environment: Method 1

1) Machines (CentOS 7). This walkthrough deploys a single etcd node (an etcd cluster would work as well).


192.168.1.121    etcd (optionally also flannel and docker)    hostname: etcd

192.168.1.122    flannel, docker                              hostname: host1

192.168.1.123    flannel, docker                              hostname: host2

2) On the etcd machine (192.168.1.121)

Set the hostname and populate the hosts file
[root@etcd ~]# hostnamectl --static set-hostname etcd
[root@etcd ~]# vim /etc/hosts
192.168.1.121    etcd
192.168.1.122    host1
192.168.1.123    host2
Disable the firewall; if you would rather keep it enabled, open ports 2379 and 4001 instead (see the firewall-cmd sketch right after these commands).
[root@etcd ~]# systemctl disable firewalld.service
[root@etcd ~]# systemctl stop firewalld.service
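If you keep firewalld running, "opening 2379 and 4001" would look roughly like this (a sketch of standard firewall-cmd usage, not taken from the original environment; 2380 only matters for multi-node etcd, and 8285/udp is needed on the flannel hosts for udp-mode traffic):

[root@etcd ~]# firewall-cmd --permanent --add-port=2379/tcp
[root@etcd ~]# firewall-cmd --permanent --add-port=4001/tcp
[root@etcd ~]# firewall-cmd --permanent --add-port=2380/tcp
[root@etcd ~]# firewall-cmd --reload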
Install etcd
flannel stores its configuration and leases in etcd, so deploy etcd first; here we install it with yum:
[root@etcd ~]# yum install etcd -y
The yum-installed etcd reads its default config file from /etc/etcd/etcd.conf. Edit it:
[root@etcd ~]# cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak
[root@etcd ~]# cat /etc/etcd/etcd.conf
#[member]
ETCD_NAME=master                                            # node name
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"                  # data directory
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"             # client listen addresses
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"           # client URLs advertised to clients
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""    
Start etcd and verify its status
[root@etcd ~]# systemctl start etcd
[root@etcd ~]# systemctl enable etcd
[root@etcd ~]# ps -ef|grep etcd
etcd     28145     1  1 14:38 ?        00:00:00 /usr/bin/etcd --name=master --data-dir=/var/lib/etcd/default.etcd --listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001
root     28185 24819  0 14:38 pts/1    00:00:00 grep --color=auto etcd
[root@etcd ~]# lsof -i:2379
COMMAND   PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
etcd    28145 etcd    6u  IPv6 1283822      0t0  TCP *:2379 (LISTEN)
etcd    28145 etcd   18u  IPv6 1284133      0t0  TCP localhost:53203->localhost:2379 (ESTABLISHED)
........
    
[root@etcd ~]# etcdctl set testdir/testkey0 0
0
[root@etcd ~]# etcdctl get testdir/testkey0
0
[root@etcd ~]# etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy
[root@etcd ~]# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy
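One more sanity check worth running here, using standard etcdctl v2 usage (the member ID and URLs will be those of your own deployment):

[root@etcd ~]# etcdctl member list        # expect a single member named master with isLeader=true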
  
Configure flannel's network information
Set the flannel key in etcd (this is done only on the machine running etcd).
flannel reads its configuration from etcd, which keeps multiple flannel instances consistent, so create the following key (the key '/atomic.io/network/config' must match the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld below; if they differ, flanneld fails to start):
[root@etcd ~]# etcdctl mk /atomic.io/network/config '{ "Network": "172.18.0.0/16" }'
{ "Network": "172.18.0.0/16" }
Read the network configuration back:
[root@etcd ~]# etcdctl get /atomic.io/network/config
{ "Network": "172.18.0.0/16" }
To delete the network configuration:
[root@etcd ~]# etcdctl rm /atomic.io/network/config
Tip: the network range set above can be whatever you like; container IPs are allocated from it automatically. Once IPs are assigned, containers can normally reach the outside world (bridge mode: all that is required is that the host itself has Internet access).
The command above only sets the overall network. You can additionally constrain the subnet range and pick the backend mode, for example:
#####################################################
etcdctl set /atomic.io/network/config '{"Network":"10.0.0.0/16", "SubnetMin": "10.0.1.0", "SubnetMax": "10.0.20.0", "Backend": {"Type": "vxlan"}}'

Network: the flannel address pool; the whole overlay network here is the 10.0.0.0/16 range.
SubnetLen: the prefix length of the per-host subnet assigned to each docker0.
SubnetMin: the lowest subnet that may be allocated.
SubnetMax: the highest subnet that may be allocated. In the example above every host receives a /24 subnet, allocatable from 10.0.1.0/24 through 10.0.20.0/24, which means this network can hold at most 20 hosts.
Backend: how packets are forwarded between hosts; the default is udp mode, and this example uses vxlan.
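For this article's udp-mode walkthrough, the equivalent explicit configuration would look roughly like this (a sketch; "Port" is the udp backend's optional listen port, and 8285 is already its default, so the plain '{ "Network": "172.18.0.0/16" }' used above behaves the same):

etcdctl set /atomic.io/network/config '{"Network":"172.18.0.0/16", "SubnetLen": 24, "Backend": {"Type": "udp", "Port": 8285}}'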


#####################################################
You can install and configure flannel and docker on the etcd node as well.
Install the flannel overlay network
[root@etcd ~]# yum install flannel -y
Configure flannel
[root@etcd ~]# cp /etc/sysconfig/flanneld /etc/sysconfig/flanneld.bak
[root@etcd ~]# vim /etc/sysconfig/flanneld
# Flanneld configuration options
   
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
   
# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"
   
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""  
Start flannel
[root@etcd ~]# systemctl enable flanneld.service
[root@etcd ~]# systemctl start flanneld.service
[root@etcd ~]# ps -ef|grep flannel
root      9305  9085  0 09:12 pts/2    00:00:00 grep --color=auto flannel
root     28876     1  0 May15 ?        00:00:07 /usr/bin/flanneld -etcd-endpoints=http://etcd:2379 -etcd-prefix=/atomic.io/network
[root@etcd ~]# yum install docker -y
After flannel is up, always remember to restart docker so the flannel-assigned range takes effect, i.e. the docker0 virtual NIC's IP moves into the flannel range configured above.
[root@etcd ~]# systemctl restart docker
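To confirm the restart worked, compare docker0 with the lease flanneld wrote (a minimal check; the subnet value differs per host):

[root@etcd ~]# grep FLANNEL_SUBNET /run/flannel/subnet.env
[root@etcd ~]# ip -4 addr show docker0 | grep inet      # should sit inside FLANNEL_SUBNET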

3) On host1 (192.168.1.122) and host2 (192.168.1.123)

Set the hostname and populate the hosts file (shown for host1; on host2 set the hostname to host2 accordingly)
[root@host1 ~]# hostnamectl --static set-hostname host1
[root@host1 ~]# vim /etc/hosts
192.168.1.121    etcd
192.168.1.122    host1
192.168.1.123    host2
Disable the firewall; if you keep it enabled, open ports 2379 and 4001 as shown earlier
[root@host1 ~]# systemctl disable firewalld.service
[root@host1 ~]# systemctl stop firewalld.service
Install the flannel overlay network
[root@host1 ~]# yum install flannel -y
Configure flannel
[root@host1 ~]# cp /etc/sysconfig/flanneld /etc/sysconfig/flanneld.bak
[root@host1 ~]# vim /etc/sysconfig/flanneld
# Flanneld configuration options
   
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
   
# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"
   
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""  
Start flannel
[root@host1 ~]# systemctl enable flanneld.service
[root@host1 ~]# systemctl start flanneld.service
[root@host1 ~]# ps -ef|grep flannel
root      3841  9649  0 09:11 pts/0    00:00:00 grep --color=auto flannel
root     28995     1  0 May15 ?        00:00:07 /usr/bin/flanneld -etcd-endpoints=http://etcd:2379 -etcd-prefix=/atomic.io/network
Install docker
[root@host1 ~]# yum install -y docker
Install docker only after flannel is running, and restart it so the flannel-assigned range takes effect, i.e. docker0's IP moves into the flannel range configured above. (On the etcd node, where docker was installed first, restarting it after flannel started likewise keeps docker0 from staying on the default 172.17.0.1.)
[root@host1 ~]# systemctl restart docker

Test cross-host container communication between the two nodes; as shown below, containers on different nodes can reach each other normally.

host1 network information

[root@host1 ~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.18.0.0/16
FLANNEL_SUBNET=172.18.89.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
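The FLANNEL_MTU of 1472 is not arbitrary: in udp mode each packet gains a 20-byte outer IP header plus an 8-byte UDP header, so 1500 - 28 = 1472 is what fits inside the physical 1500-byte MTU (vxlan mode would typically report 1450, since its overhead is 50 bytes).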
[root@host1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:99:bb:09 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.122/24 brd 192.168.1.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::6fc0:38a2:3482:c75f/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none 
    inet 172.18.89.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::7a1:bc34:a125:5d98/64 scope link flags 800 
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:4b:a8:9d:62 brd ff:ff:ff:ff:ff:ff
    inet 172.18.89.1/24 scope global docker0
       valid_lft forever preferred_lft forever
[root@host1 ~]# ip r
# the 172.17.0.1 docker network is gone because docker0 has picked up the flannel-assigned subnet
default via 192.168.1.1 dev ens33 proto static metric 100 
172.18.0.0/16 dev flannel0 proto kernel scope link src 172.18.89.0 
172.18.89.0/24 dev docker0 proto kernel scope link src 172.18.89.1 
192.168.1.0/24 dev ens33 proto kernel scope link src 192.168.1.122 metric 100 

host2 network information

[root@host2 ~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.18.0.0/16
FLANNEL_SUBNET=172.18.92.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
[root@host2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:99:bb:09 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.123/24 brd 192.168.1.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::6fc0:38a2:3482:c75f/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none 
    inet 172.18.92.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::7a1:bc34:a125:5d98/64 scope link flags 800 
       valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1472 qdisc noqueue state UP group default 
    link/ether 02:42:4b:a8:9d:62 brd ff:ff:ff:ff:ff:ff
    inet 172.18.92.1/24 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:4bff:fea8:9d62/64 scope link 
       valid_lft forever preferred_lft forever
6: vethXXXX@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1472 qdisc noqueue master docker0 state UP group default 
    link/ether 06:85:1e:64:64:a9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::485:1eff:fe64:64a9/64 scope link 
       valid_lft forever preferred_lft forever
[root@host2 ~]# ip r
# the 172.17.0.1 docker network is gone here as well, for the same reason
default via 192.168.1.1 dev ens33 proto static metric 100 
172.18.0.0/16 dev flannel0 proto kernel scope link src 172.18.92.0 
172.18.92.0/24 dev docker0 proto kernel scope link src 172.18.92.1 
192.168.1.0/24 dev ens33 proto kernel scope link src 192.168.1.123 metric 100 

[root@host2 ~]# ps aux|grep docker|grep "bip"
root      10723  0.1  3.5 666700 35100 ?        Ssl  05:28   0:17 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json --selinux-enabled --log-driver=journald --signature-verification=false --storage-driver overlay2 --bip=172.18.92.1/24 --ip-masq=true --mtu=1472
# the --bip=172.18.92.1/24 argument restricts the IP range containers on this node receive; the range was assigned automatically by flannel, which uses the records kept in etcd to ensure ranges never overlap.
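Docker's own view should agree with the daemon flag. A quick cross-check using the standard docker CLI (the subnet shown will be this host's):

[root@host2 ~]# docker network inspect bridge | grep -i subnet    # expect a subnet inside 172.18.0.0/16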

Create containers and verify cross-host container connectivity

The container host1.test on host1 gets IP 172.18.89.2

The container host2.test on host2 gets IP 172.18.92.2

[root@host1 ~]# docker run -it --name host1.test busybox
Unable to find image 'busybox:latest' locally
Trying to pull repository docker.io/library/busybox ... 
latest: Pulling from docker.io/library/busybox
0669b0daf1fb: Already exists 
Digest: sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135
Status: Downloaded newer image for docker.io/busybox:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1472 qdisc noqueue 
    link/ether 02:42:ac:12:59:02 brd ff:ff:ff:ff:ff:ff
    inet 172.18.89.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe12:5902/64 scope link 
       valid_lft forever preferred_lft forever
/ # ping 172.18.92.2
PING 172.18.92.2 (172.18.92.2): 56 data bytes
64 bytes from 172.18.92.2: seq=15 ttl=60 time=12.671 ms
64 bytes from 172.18.92.2: seq=16 ttl=60 time=1.132 ms
64 bytes from 172.18.92.2: seq=17 ttl=60 time=2.450 ms
64 bytes from 172.18.92.2: seq=18 ttl=60 time=2.030 ms
64 bytes from 172.18.92.2: seq=19 ttl=60 time=1.314 ms
64 bytes from 172.18.92.2: seq=20 ttl=60 time=3.762 ms
^C
--- 172.18.92.2 ping statistics ---
21 packets transmitted, 6 packets received, 71% packet loss
round-trip min/avg/max = 1.132/3.893/12.671 ms
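Note that the first reply only arrives at seq=15 and 71% of the packets were lost: the early packets were most likely dropped by iptables rules on the hosts, which is exactly the issue addressed at the end of this section.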

 

[root@host2 ~]# docker run -it --name host2.test busybox
Unable to find image 'busybox:latest' locally
Trying to pull repository docker.io/library/busybox ... 
latest: Pulling from docker.io/library/busybox
0669b0daf1fb: Pull complete 
Digest: sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135
Status: Downloaded newer image for docker.io/busybox:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1472 qdisc noqueue 
    link/ether 02:42:ac:12:5c:02 brd ff:ff:ff:ff:ff:ff
    inet 172.18.92.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe12:5c02/64 scope link 
       valid_lft forever preferred_lft forever
/ # ping 172.18.89.2
PING 172.18.89.2 (172.18.89.2): 56 data bytes
64 bytes from 172.18.89.2: seq=0 ttl=60 time=52.863 ms
64 bytes from 172.18.89.2: seq=1 ttl=60 time=1.743 ms
^C
--- 172.18.89.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 1.743/27.303/52.863 ms
If, after the steps above, the container IPs cannot ping each other, the cause is almost always the firewall!
But we already turned the firewall off earlier with "systemctl stop firewalld.service", so why is there still a firewall problem??
Because Linux still has iptables underneath (Docker 1.13+ additionally sets the FORWARD chain's default policy to DROP), so the fix is to run the following on every node:
[root@host1 ~]# iptables -P INPUT ACCEPT
[root@host1 ~]# iptables -P FORWARD ACCEPT
[root@host1 ~]# iptables -F
[root@host1 ~]# iptables -L -n
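These iptables changes do not survive a reboot. If you want them persisted, a common approach on CentOS 7 is the iptables-services package (a sketch of standard usage, not part of the original walkthrough):

[root@host1 ~]# yum install -y iptables-services
[root@host1 ~]# systemctl enable iptables
[root@host1 ~]# service iptables save          # writes the current rules to /etc/sysconfig/iptables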
 
After running the commands above, the containers should be able to ping each other.
With flannel, docker achieves full interconnectivity: host to container and container to container all communicate with each other.