[Cloud Native | Docker Series 3] Docker Network Modes Explained

Published: 2023-01-22

Preface

One of the reasons Docker containers and services are so powerful is that you can connect them to each other, or connect them to non-Docker workloads. This article covers Docker's four network modes: Bridge mode, Host mode, Container mode, and None mode.

List the current Docker networks:

[root@sanxingtongxue ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
e9f64abf8207   bridge    bridge    local
e1e8a39e210f   host      host      local
c8092bd3f577   none      null      local

Bridge Mode (User-Defined Network Demo)

Bridge is Docker's default network mode. In this mode, Docker allocates and configures an IP address for each container and attaches it to a virtual bridge called docker0; containers communicate with the host through the docker0 bridge together with rules in the iptables nat table. The diagram is as follows:
[Diagram: containers attached to the docker0 bridge, NATed to the host network through iptables]
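
Communication with the outside world relies on a MASQUERADE rule that Docker installs in the nat table. You can check it on the host (a sketch; 172.17.0.0/16 is the default docker0 subnet that also appears in the routing table later in this article, and may differ on your machine):

 sudo iptables -t nat -nL POSTROUTING
 # Expect a MASQUERADE rule for 172.17.0.0/16 (plus one per user-defined bridge
 # subnet) that rewrites the source address of outbound container traffic to the
 # host's address.
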
The bridged-network demo steps are as follows:
1. Create the alpine-net network. The --driver bridge flag is optional, since bridge is the default driver.

 docker network create --driver bridge alpine-net

2. Create two containers. Note the --network flag. You can only connect a container to one network during the docker run command.

 docker run -dit --name alpine1 --network alpine-net alpine ash
 docker run -dit --name alpine2 --network alpine-net alpine ash

3. Use ping to verify that the two containers can reach each other.

ping -c 2 alpine2

The demo output is as follows:

[root@sanxingtongxue ~]#  docker network create --driver bridge alpine-net
5cd703a37a83910502d6f4eff1d047c936434a0e77d27125865acfffba49a0e0
[root@sanxingtongxue ~]# docker network ls 
NETWORK ID     NAME         DRIVER    SCOPE
5cd703a37a83   alpine-net   bridge    local
e9f64abf8207   bridge       bridge    local
e1e8a39e210f   host         host      local
c8092bd3f577   none         null      local
[root@sanxingtongxue ~]#  docker network inspect alpine-net
[
    {
        "Name": "alpine-net",
        "Id": "5cd703a37a83910502d6f4eff1d047c936434a0e77d27125865acfffba49a0e0",
        "Created": "2022-08-10T10:58:28.892947982+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
[root@sanxingtongxue ~]# docker ps  -a 
CONTAINER ID   IMAGE      COMMAND   CREATED          STATUS    PORTS      NAMES
203dcb1d64b9   hello:v1   "ash"     21 seconds ago   Created   8000/tcp   alpine1
[root@sanxingtongxue ~]# docker rm -f $(docker ps -aq)  #remove all containers
203dcb1d64b9
[root@sanxingtongxue ~]#  docker run -dit --name alpine1 --network alpine-net hello:v1 sh
6eee1dd7a27ac9823a8740f330ec2abfc55898e1c39ac5777180ddb6c1a68bee
[root@sanxingtongxue ~]#  docker run -dit --name alpine2 --network alpine-net hello:v1 sh
de43dfe6ba727fede28837e7392b7cbb45615443f1ce8bf8041c43ddd5679ac8
[root@sanxingtongxue ~]# docker ps 
CONTAINER ID   IMAGE      COMMAND   CREATED         STATUS         PORTS      NAMES
de43dfe6ba72   hello:v1   "sh"      2 minutes ago   Up 2 minutes   8000/tcp   alpine2
6eee1dd7a27a   hello:v1   "sh"      3 minutes ago   Up 3 minutes   8000/tcp   alpine1
[root@sanxingtongxue ~]# docker container ls 
CONTAINER ID   IMAGE      COMMAND   CREATED         STATUS         PORTS      NAMES
de43dfe6ba72   hello:v1   "sh"      2 minutes ago   Up 2 minutes   8000/tcp   alpine2
6eee1dd7a27a   hello:v1   "sh"      3 minutes ago   Up 3 minutes   8000/tcp   alpine1
[root@sanxingtongxue ~]# docker exec -it alpine2 /bin/bash
root@de43dfe6ba72:/# ping -c 2 alpine2
bash: ping: command not found
root@de43dfe6ba72:/# apt-get update && apt-get install iputils-ping
root@de43dfe6ba72:/# ping -c 2 alpine1
PING alpine1 (172.18.0.2) 56(84) bytes of data.
64 bytes from alpine1.alpine-net (172.18.0.2): icmp_seq=1 ttl=64 time=0.081 ms
64 bytes from alpine1.alpine-net (172.18.0.2): icmp_seq=2 ttl=64 time=0.061 ms

--- alpine1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1015ms
rtt min/avg/max/mdev = 0.061/0.071/0.081/0.010 ms
[root@sanxingtongxue ~]# docker exec -it alpine1 /bin/bash
root@6eee1dd7a27a:/# ping -c 2 alpine2
PING alpine2 (172.18.0.3) 56(84) bytes of data.
64 bytes from alpine2.alpine-net (172.18.0.3): icmp_seq=1 ttl=64 time=0.059 ms
64 bytes from alpine2.alpine-net (172.18.0.3): icmp_seq=2 ttl=64 time=0.061 ms

--- alpine2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1024ms
rtt min/avg/max/mdev = 0.059/0.060/0.061/0.001 ms
[root@sanxingtongxue ~]# docker network inspect alpine-net
[
    {
        "Name": "alpine-net",
        "Id": "5cd703a37a83910502d6f4eff1d047c936434a0e77d27125865acfffba49a0e0",
        "Created": "2022-08-10T10:58:28.892947982+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "6eee1dd7a27ac9823a8740f330ec2abfc55898e1c39ac5777180ddb6c1a68bee": {
                "Name": "alpine1",
                "EndpointID": "06f1c1f638f6d12a064d5738e45671cf613c6c38f3c5ae90c8c3480d9655be89",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "de43dfe6ba727fede28837e7392b7cbb45615443f1ce8bf8041c43ddd5679ac8": {
                "Name": "alpine2",
                "EndpointID": "be79f2f3d29870455b3faa40d8bf3b483bef0a6f0b32aaa43473f670e94ea5da",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Official documentation: https://docs.docker.com/network/network-tutorial-standalone/
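
The official tutorial also attaches a running container to a second network with docker network connect; a minimal sketch against the containers above:

 docker network connect bridge alpine1      # give alpine1 an extra interface on the default bridge
 docker network inspect bridge              # alpine1 now appears under "Containers"
 docker network disconnect bridge alpine1   # detach it again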

Host Mode

In host mode the container does not get its own virtual network interface or its own IP configuration; instead it uses the host's IP address and ports. Mapped ports can therefore conflict with the host, but the rest of the container (file system, processes, and so on) remains isolated: the container shares only its network with the host. The diagram is as follows:
[Diagram: a host-mode container sharing the host's network stack]
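
Because a host-mode container uses the host's ports directly, the -p/--publish flag has no effect; Docker only prints a warning that published ports are discarded. A quick way to see this (a sketch; host_test is an illustrative container name):

 docker run --rm -d --network host -p 8080:80 --name host_test nginx
 # Docker warns that published ports are discarded in host network mode;
 # nginx is reachable on the host's port 80, not on 8080.
 docker rm -f host_test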

The demo steps are as follows:
1. Create and start the container as a detached process. The --rm option removes the container once it exits/stops. The -d flag starts the container detached (in the background).

 docker run --rm -d --network host --name my_nginx nginx

2. Access Nginx by browsing to http://localhost:80/.

3. Examine your network stack with the following commands:
Check all network interfaces and verify that no new interface was created.

 ip addr show

Verify which process is bound to port 80 using the netstat command. You need to use sudo because the process is owned by the Docker daemon user; otherwise you will not be able to see its name or PID.

 sudo netstat -tulpn | grep :80

4. Stop the container. Because it was started with the --rm option, it is removed automatically.

docker container stop my_nginx

The demo output is as follows:

[root@sanxingtongxue ~]#  docker run --rm -d --network host --name myhost_net hello:v1 
7c1a1e03b12e30504fb3bd2f0f814209ac9d65cac525a2475eb40f36e34b97ef
[root@sanxingtongxue ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.19.63.253   0.0.0.0         UG    100    0        0 eth0
10.88.0.0       0.0.0.0         255.255.0.0     U     0      0        0 cni-podman0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-5cd703a37a83
172.19.0.0      0.0.0.0         255.255.192.0   U     100    0        0 eth0
[root@sanxingtongxue ~]# docker ps 
CONTAINER ID   IMAGE      COMMAND                  CREATED              STATUS              PORTS      NAMES
7c1a1e03b12e   hello:v1   "/bin/sh -c 'flask r…"   About a minute ago   Up About a minute              myhost_net
de43dfe6ba72   hello:v1   "sh"                     33 minutes ago       Up 33 minutes       8000/tcp   alpine2
6eee1dd7a27a   hello:v1   "sh"                     35 minutes ago       Up 22 minutes       8000/tcp   alpine1
[root@sanxingtongxue ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:16:3e:02:b2:a8 brd ff:ff:ff:ff:ff:ff
    inet 172.19.55.145/18 brd 172.19.63.255 scope global dynamic noprefixroute eth0
       valid_lft 291075677sec preferred_lft 291075677sec
    inet6 fe80::216:3eff:fe02:b2a8/64 scope link 
       valid_lft forever preferred_lft forever
3: cni-podman0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 4a:22:90:68:12:aa brd ff:ff:ff:ff:ff:ff
    inet 10.88.0.1/16 brd 10.88.255.255 scope global cni-podman0
       valid_lft forever preferred_lft forever
    inet6 fe80::4822:90ff:fe68:12aa/64 scope link 
       valid_lft forever preferred_lft forever
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:f4:92:1b:07 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:f4ff:fe92:1b07/64 scope link 
       valid_lft forever preferred_lft forever
87: br-5cd703a37a83: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:32:ab:05:e0 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-5cd703a37a83
       valid_lft forever preferred_lft forever
    inet6 fe80::42:32ff:feab:5e0/64 scope link 
       valid_lft forever preferred_lft forever
93: vethf2c72a3@if92: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-5cd703a37a83 state UP group default 
    link/ether b2:ec:15:6d:d0:b5 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::b0ec:15ff:fe6d:d0b5/64 scope link 
       valid_lft forever preferred_lft forever
95: veth16a58ea@if94: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-5cd703a37a83 state UP group default 
    link/ether 7a:f3:e8:c2:ba:2b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::78f3:e8ff:fec2:ba2b/64 scope link 
       valid_lft forever preferred_lft forever
[root@sanxingtongxue ~]# sudo netstat -tulpn | grep :80
tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      552059/python3      
tcp6       0      0 :::801                  :::*                    LISTEN      513791/java         
tcp6       0      0 127.0.0.1:8005          :::*                    LISTEN      513791/java    

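Because the host-mode container shares the host's network stack, the Flask app inside hello:v1 (listening on port 8000, as the netstat output above suggests) is reachable on the host directly, with no -p port mapping. A quick check (a sketch):

 curl http://127.0.0.1:8000/
 # Answered by the Flask process started in myhost_net; no port publishing is
 # involved because the container uses the host's network namespace.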

Container Mode

A container created in this mode does not get its own network interface or IP configuration; instead it shares the IP address and port range of another, specified container. This is exactly how a pod works in Kubernetes: multiple containers share a single Network namespace. Apart from networking, the two containers are still isolated from each other in areas such as the file system and the process list, and their processes can communicate with each other over the lo loopback device. The diagram is as follows:

[Diagram: two containers sharing one network namespace]
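
Because the joined containers share one network namespace, a server in one container is reachable from the other via 127.0.0.1. A minimal sketch (nginx and curlimages/curl are illustrative images, not part of this article's demo):

 docker run -d --name web nginx
 docker run --rm --net=container:web curlimages/curl -s http://127.0.0.1:80/
 # The second container brings no network of its own; the request reaches nginx
 # over the shared loopback interface.
 docker rm -f web
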
The demo output is as follows:

[root@sanxingtongxue ~]# docker run -tid --net=container:alpine1 --name docker_con hello:v1 
0eb9f3fac30d6b979b5138816e633abaa80ee6301b1b12b8f1dde7fd583cf998
[root@sanxingtongxue ~]# docker exec -it docker_con /bin/bash
root@6eee1dd7a27a:/# apt-get update
root@6eee1dd7a27a:/# apt-get install net-tools
root@6eee1dd7a27a:/# ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.18.0.2  netmask 255.255.0.0  broadcast 172.18.255.255
        ether 02:42:ac:12:00:02  txqueuelen 0  (Ethernet)
        RX packets 2413  bytes 4444758 (4.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1924  bytes 176355 (176.3 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 56  bytes 5130 (5.1 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 56  bytes 5130 (5.1 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

root@6eee1dd7a27a:/# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.18.0.1      0.0.0.0         UG    0      0        0 eth0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0
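
To confirm from the host that docker_con joined alpine1's network namespace instead of getting its own, compare the two containers with docker inspect (a sketch; Go-template output may vary slightly between Docker versions):

 docker inspect -f '{{.HostConfig.NetworkMode}}' docker_con
 # container:6eee1dd7a27a...  (alpine1's full container ID)
 docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' alpine1
 # 172.18.0.2 -- the same address that ifconfig reported inside docker_con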

None Mode

With none mode, a Docker container has its own Network Namespace, but no network configuration is performed at all.
The diagram is as follows:
[Diagram: a none-mode container with only its own isolated Network Namespace]
The demo output is as follows:

[root@sanxingtongxue ~]# docker run -tid --net=none --name docker_none hello:v1 
b97b13864a0a8230a979b712e7676cd82a901105d54e390197f36413c9708f90
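
To verify that the none-mode container really has no network configuration, inspect it from the host (a sketch; the exact JSON fields depend on your Docker version):

 docker inspect -f '{{json .NetworkSettings.Networks}}' docker_none
 # Only the "none" network is listed, with empty IPAddress and Gateway fields;
 # inside the container only the loopback device lo exists.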

Extras

Bridge

A bridge is a virtual network device that works at layer 2 of the TCP/IP stack, similar in function to a physical switch. Like other virtual network devices, it can be assigned an IP and a MAC address. The main job of a bridge is to forward packets between the network interfaces attached to it. In the figure below, the bridge (device name br0) acts as the master device and connects four slave devices: tap1, tap2, veth1, and eth0.

[Diagram: bridge br0 as the master device with slaves tap1, tap2, veth1, and eth0]
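
On a Linux host you can build such a bridge by hand with iproute2 (a minimal sketch on a lab machine; br0 and veth1 are illustrative device names, not devices from the demos above):

 ip link add br0 type bridge        # create the bridge (master) device
 ip link set br0 up
 ip link set veth1 master br0       # attach an existing interface as a slave
 bridge link                        # list the interfaces attached to bridges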

Virtual Ethernet Interfaces (VETH)
VETH (virtual Ethernet) devices always come in pairs that are connected to each other. When one device receives data from the protocol stack, it sends that data out through the other device, so a VETH pair can be used for network communication between two namespaces. If either device in the pair becomes unavailable, the whole link is unusable.
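
A minimal sketch of using a veth pair to connect the host to a separate network namespace (ns1, veth0/veth1, and the 10.0.0.0/24 addresses are illustrative, not taken from the demos above):

 ip netns add ns1                              # create a new network namespace
 ip link add veth0 type veth peer name veth1   # create the connected pair
 ip link set veth1 netns ns1                   # move one end into ns1
 ip addr add 10.0.0.1/24 dev veth0 && ip link set veth0 up
 ip netns exec ns1 ip addr add 10.0.0.2/24 dev veth1
 ip netns exec ns1 ip link set veth1 up
 ping -c 2 10.0.0.2                            # traffic crosses the veth pair into ns1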

"Fallen blossoms are not heartless things; turned into spring mud, they still nurture the flowers."
