Deploying a Highly Available 5-Node k8s Cluster (v1.25.0)

1、Cluster node environment initialization
1. Basic environment preparation

(1) Configure node hostnames and name resolution

On each node, set the hostname and configure the hosts file (/etc/hosts) so that hostname resolution works correctly and all cluster nodes can reach one another directly by hostname.

  • Set the hostname on each node
$ hostnamectl set-hostname master1
$ hostnamectl set-hostname master2
$ hostnamectl set-hostname master3
$ hostnamectl set-hostname node1
$ hostnamectl set-hostname node2
  • Configure hostname resolution: add the IP-to-hostname mappings of all cluster nodes to /etc/hosts on every node
$ cat /etc/hosts
192.168.184.60 master1
192.168.184.61 master2
192.168.184.62 master3
192.168.184.63 node1
192.168.184.64 node2

(2) Set up passwordless SSH login

  • Configure passwordless login on each node
$ ssh-keygen
$ ssh-copy-id master1
$ ssh-copy-id master2
$ ssh-copy-id master3
$ ssh-copy-id node1
$ ssh-copy-id node2

2. System services and kernel parameter tuning

(1) Disable the swap partition

  • kubeadm requires swap to be disabled when deploying a k8s cluster; otherwise the preflight checks fail and kubelet will not start, so swap must be turned off on every node
$ swapoff -a
$ vim /etc/fstab # comment out the line below
#/dev/mapper/centos-swap swap                    swap    defaults  

(2) Disable the firewall and SELinux

  • Disable the firewall (firewalld/ufw) and SELinux on all nodes to avoid conflicts with Kubernetes network rules
$ systemctl stop firewalld && systemctl disable firewalld
$ vim /etc/selinux/config
SELINUX=disabled
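
Changing /etc/selinux/config only takes effect after a reboot; to also turn SELinux off for the current session, you can additionally run:
$ setenforce 0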

(3) Kernel parameter optimization

1) Enable bridged traffic forwarding and netfilter rules

  • Load the br_netfilter module and have it loaded automatically on every login (the br_netfilter kernel module lets bridged traffic be filtered by iptables/ip6tables rules, which is essential for inter-container communication and NAT).
$ modprobe br_netfilter
$ echo "modprobe br_netfilter" >> /etc/profile
  • Create the /etc/sysctl.d/k8s.conf file and add the following network settings
$ cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

$ sysctl -p /etc/sysctl.d/k8s.conf
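
As an alternative to appending modprobe to /etc/profile, a drop-in file under /etc/modules-load.d/ loads the module automatically at boot (a minimal sketch; either approach works):
$ cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF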

2) Enable IPVS

  • Load the kernel modules required by IPVS so that kube-proxy can use IPVS for high-performance load balancing
$ vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
    # only load the module if it exists on this kernel
    /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
    if [ $? -eq 0 ]; then
        /sbin/modprobe ${kernel_module}
    fi
done

$ chmod 755 /etc/sysconfig/modules/ipvs.modules

$ bash /etc/sysconfig/modules/ipvs.modules
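
You can confirm the modules were loaded with:
$ lsmod | grep -e ip_vs -e nf_conntrack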
3. Configure package mirrors
  • Add the k8s and docker repositories on all nodes for the later installs; the Aliyun mirrors are used here.
$ yum -y install yum-utils
$ yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
4. Set up time synchronization
  • Install a time-sync tool on all nodes and add a cron job that syncs the time once an hour, keeping all nodes consistent and avoiding certificate validation failures
$ yum -y install ntpdate
$ ntpdate cn.pool.ntp.org
$ crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
$ systemctl restart crond.service
5. Install iptables and common base packages
  • Install iptables and common base packages for later troubleshooting
# Install iptables (skip this step if the server already ships with iptables)
$ yum -y install iptables-services
$ systemctl stop iptables
$ systemctl disable iptables


# Install base packages (optional)
$ yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet
2、Install Containerd and Docker
  • Install Containerd as the container runtime.
$ yum -y install containerd.io-1.6.6

$ containerd config default > /etc/containerd/config.toml

$ vim /etc/containerd/config.toml
SystemdCgroup = true  # change SystemdCgroup = false to true
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7" # change "k8s.gcr.io/pause:3.6" to "registry.aliyuncs.com/google_containers/pause:3.7"

$ systemctl enable containerd --now


$ containerd -v
containerd containerd.io 1.6.33 
  • Configure Containerd registry mirrors.
$ vim /etc/containerd/config.toml
config_path = "/etc/containerd/certs.d"  # under [plugins."io.containerd.grpc.v1.cri".registry]


$ mkdir /etc/containerd/certs.d/docker.io/ -p

$ cat /etc/containerd/certs.d/docker.io/hosts.toml 
server = "https://registry-1.docker.io"

[host."https://docker.m.daocloud.io"]
capabilities = ["pull","resolve"]

[host."https://ixwmqj0x.mirror.aliyuncs.com"]
capabilities = ["pull","resolve"]


$ systemctl restart containerd
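
To check that the mirror configuration is picked up, you can try pulling a small image through the hosts directory (the ctr shipped with containerd 1.6 should support --hosts-dir, but verify this flag on your version):
$ ctr images pull --hosts-dir /etc/containerd/certs.d docker.io/library/busybox:latest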
  • Install Docker for image builds. This is optional and only needed if you plan to build images; here Docker was installed on a single node.
$ yum -y install docker-ce
$ systemctl enable docker --now

$ docker -v
Docker version 26.1.4

$ containerd -v
containerd containerd.io 1.6.33 

3、Install the packages required by k8s

Install the following packages on every node:
kubeadm: the tool used to initialize the k8s cluster.
kubelet: installed on every cluster node, responsible for starting Pods.
kubectl: used to deploy and manage applications, inspect resources, and create, delete, and update components.

$ yum install -y kubelet-1.25.0 kubeadm-1.25.0 kubectl-1.25.0
$ systemctl enable kubelet.service 
$ systemctl status kubelet.service 
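
Optionally confirm that the installed versions match what you expect:
$ kubeadm version -o short
$ kubelet --version
$ kubectl version --client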
4、High availability with keepalived + nginx
  • Install keepalived and nginx on master1 and master2 to provide the high-availability + load-balancing layer.
$ yum install nginx keepalived nginx-mod-stream -y
  • Update the nginx configuration file.
$ cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
$ vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the three master apiserver instances
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
            server 192.168.184.60:6443 weight=5 max_fails=3 fail_timeout=30s;  
            server 192.168.184.61:6443 weight=5 max_fails=3 fail_timeout=30s;
            server 192.168.184.62:6443 weight=5 max_fails=3 fail_timeout=30s;  

    }
    
    server {
       listen 16443; # nginx runs on the master nodes themselves, so this listen port cannot be 6443 or it would conflict with the apiserver
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}
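
Validate the configuration before starting nginx (this also confirms that the stream module from nginx-mod-stream is loaded):
$ nginx -t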
  • Update the keepalived configuration files.

$ cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

# MASTER node (master1) configuration
$ vim /etc/keepalived/keepalived.conf
global_defs { 
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER
} 

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state MASTER 
    interface eth0  # change to the actual NIC name
    virtual_router_id 51 # VRRP router ID; must be unique per VRRP instance
    priority 100    # priority; set 90 on the backup server
    advert_int 1    # VRRP advertisement (heartbeat) interval, default 1 second
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    # Virtual IP
    virtual_ipaddress { 
        192.168.184.65/24
    } 
    track_script {
        check_nginx
    } 
}


# BACKUP node (master2) configuration
$ vim /etc/keepalived/keepalived.conf
global_defs { 
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER
} 

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state BACKUP
    interface eth0  # change to the actual NIC name
    virtual_router_id 51 # VRRP router ID; must be unique per VRRP instance
    priority 90    # priority; the backup server is set to 90
    advert_int 1    # VRRP advertisement (heartbeat) interval, default 1 second
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    # Virtual IP
    virtual_ipaddress { 
        192.168.184.65/24
    } 
    track_script {
        check_nginx
    } 
}
  • Add the nginx health-check script.
$ vim /etc/keepalived/check_nginx.sh
#!/bin/bash
# 1. Check whether nginx is alive
counter=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$" )
if [ $counter -eq 0 ]; then
    # 2. If not, try to start nginx
    service nginx start
    sleep 2
    # 3. Wait 2 seconds, then check the nginx status again
    counter=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$" )
    # 4. If nginx is still not alive, stop keepalived so the VIP can fail over
    if [ $counter -eq 0 ]; then
        service  keepalived stop
    fi
fi

$ chmod +x /etc/keepalived/check_nginx.sh
  • Start the services
$ systemctl start nginx
$ systemctl start keepalived


# On the MASTER node, check whether the VIP has been assigned
$ ip a | grep 184.
    inet 192.168.184.60/24 scope global eth0
    inet 192.168.184.65/24 scope global secondary eth0

  • Test that the VIP fails over correctly

Stop nginx on master1 (keepalived is stopped as well, so that the health-check script does not simply restart nginx); the VIP will fail over to master2.

[root@master1 ~]# systemctl stop nginx.service
[root@master1 ~]# systemctl stop keepalived.service

[root@master2 ~]# ip a | grep 184.
    inet 192.168.184.61/24 scope global eth0
    inet 192.168.184.65/24 scope global secondary eth0
5、Install the k8s control plane

(1) Set the container runtime endpoint

Run the following command on every node to point crictl at the containerd runtime:

$ crictl config runtime-endpoint /run/containerd/containerd.sock
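
A quick sanity check that crictl can now talk to containerd:
$ crictl info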

(2) Initialize the k8s cluster

  • Generate the k8s cluster configuration file on master1:
[root@master1 ~]# kubeadm config print init-defaults > kubeadm.yaml
  • Edit the configuration file: the image repository, pod subnet, and control-plane endpoint need to be changed (the modified fields are annotated below).
[root@master1 ~]# cat kubeadm.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
#localAPIEndpoint:
#  advertiseAddress: 1.2.3.4
#  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
#  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
# Change the image repository to the Aliyun mirror
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.25.0
# Add the control-plane endpoint (the keepalived VIP and the nginx listen port)
controlPlaneEndpoint: 192.168.184.65:16443
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
# Add the pod subnet
  podSubnet: 10.28.0.0/16
scheduler: {}

---
# Append the following to set the kube-proxy mode to ipvs and the kubelet cgroup driver to systemd
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
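
Optionally pre-pull the control-plane images with the same config file, which surfaces registry problems before the actual init:
[root@master1 ~]# kubeadm config images pull --config kubeadm.yaml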
  • Initialize the cluster with this configuration file
[root@master1 ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.184.65:16443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:37a681ca5201c9c65cdbeafb57cf7cd7e0e40d51d475cbdcfd38572cc089abbc \
	--control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.184.65:16443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:37a681ca5201c9c65cdbeafb57cf7cd7e0e40d51d475cbdcfd38572cc089abbc 
  • Set up the kubectl client configuration file, which stores cluster access credentials, contexts, and user authentication information.
$ mkdir -p $HOME/.kube
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config

Once kubectl is configured, kubectl commands can be used:
[root@master1 ~]# kubectl get no
NAME      STATUS     ROLES           AGE     VERSION
master1   NotReady   control-plane   7m56s   v1.25.0
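
The node shows NotReady because no network plugin has been installed yet; the coredns Pods will likewise stay Pending until then, which you can confirm with:
[root@master1 ~]# kubectl get pods -n kube-system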

6、Add k8s control-plane nodes (scale out)

Join the existing nodes (master2 and master3) to the k8s cluster. Any additional new control-plane nodes added later (scaling out) must go through the same preparation steps as the master nodes above and be added to the upstream block in nginx.conf.

  • Create the certificate directory and the local directory used by kubectl
[root@master2 ~]# mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
[root@master3 ~]# mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
  • Copy the certificates

Copy the key certificate files to the other two master nodes (master2 and master3). These certificates are the core of secure communication and authentication in the k8s cluster and are typically needed when building a multi-master (HA) cluster or recovering a node.

[root@master1 ~]# scp /etc/kubernetes/pki/ca.crt master2:/etc/kubernetes/pki/                                                                                                
[root@master1 ~]# scp /etc/kubernetes/pki/ca.key master2:/etc/kubernetes/pki/                                                                                          
[root@master1 ~]# scp /etc/kubernetes/pki/sa.key master2:/etc/kubernetes/pki/  
[root@master1 ~]# scp /etc/kubernetes/pki/sa.pub master2:/etc/kubernetes/pki/ 
[root@master1 ~]# scp /etc/kubernetes/pki/front-proxy-ca.crt master2:/etc/kubernetes/pki/
[root@master1 ~]# scp /etc/kubernetes/pki/front-proxy-ca.key master2:/etc/kubernetes/pki/   
[root@master1 ~]# scp /etc/kubernetes/pki/etcd/ca.key master2:/etc/kubernetes/pki/etcd/
[root@master1 ~]# scp /etc/kubernetes/pki/etcd/ca.crt master2:/etc/kubernetes/pki/etcd/


[root@master1 ~]# scp /etc/kubernetes/pki/ca.crt master3:/etc/kubernetes/pki/                                                      
[root@master1 ~]# scp /etc/kubernetes/pki/ca.key master3:/etc/kubernetes/pki/                                                                                          
[root@master1 ~]# scp /etc/kubernetes/pki/sa.key master3:/etc/kubernetes/pki/  
[root@master1 ~]# scp /etc/kubernetes/pki/sa.pub master3:/etc/kubernetes/pki/ 
[root@master1 ~]# scp /etc/kubernetes/pki/front-proxy-ca.crt master3:/etc/kubernetes/pki/
[root@master1 ~]# scp /etc/kubernetes/pki/front-proxy-ca.key master3:/etc/kubernetes/pki/   
[root@master1 ~]# scp /etc/kubernetes/pki/etcd/ca.key master3:/etc/kubernetes/pki/etcd/
[root@master1 ~]# scp /etc/kubernetes/pki/etcd/ca.crt master3:/etc/kubernetes/pki/etcd/
  • On the primary master, generate the join command and its token.
[root@master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.184.65:16443 --token 5h97sa.x0ld1vqezr8ftee3 --discovery-token-ca-cert-hash sha256:37a681ca5201c9c65cdbeafb57cf7cd7e0e40d51d475cbdcfd38572cc089abbc 
  • Join master2 and master3 to the cluster

Run the join command generated on the primary master; when joining as a control-plane node, the --control-plane flag must be added.

$ kubeadm join 192.168.184.65:16443 --token 5h97sa.x0ld1vqezr8ftee3 --discovery-token-ca-cert-hash sha256:37a681ca5201c9c65cdbeafb57cf7cd7e0e40d51d475cbdcfd38572cc089abbc --control-plane --ignore-preflight-errors=SystemVerification

$ mkdir -p $HOME/.kube
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config


$ kubectl get no
NAME      STATUS     ROLES           AGE   VERSION
master1   NotReady   control-plane   68m   v1.25.0
master2   NotReady   control-plane   35m   v1.25.0
master3   NotReady   control-plane   87s   v1.25.0

7、Add k8s worker nodes
  • Add the worker nodes

Simply run the command generated on master1 to join the nodes to the k8s cluster.

[root@node1 ~]# kubeadm join 192.168.184.65:16443 --token vr17n8.x67eeh9gtavzh4rw --discovery-token-ca-cert-hash sha256:37a681ca5201c9c65cdbeafb57cf7cd7e0e40d51d475cbdcfd38572cc089abbc

[root@node2 ~]# kubeadm join 192.168.184.65:16443 --token vr17n8.x67eeh9gtavzh4rw --discovery-token-ca-cert-hash sha256:37a681ca5201c9c65cdbeafb57cf7cd7e0e40d51d475cbdcfd38572cc089abbc
[root@master1 ~]# kubectl get no
NAME      STATUS     ROLES           AGE     VERSION
master1   NotReady   control-plane   78m     v1.25.0
master2   NotReady   control-plane   45m     v1.25.0
master3   NotReady   control-plane   11m     v1.25.0
node1     NotReady   <none>          4m33s   v1.25.0
node2     NotReady   <none>          4m3s    v1.25.0
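
The ROLES column shows <none> for the worker nodes; if you want them to display a worker role, you can optionally label them:
[root@master1 ~]# kubectl label node node1 node-role.kubernetes.io/worker=worker
[root@master1 ~]# kubectl label node node2 node-role.kubernetes.io/worker=worker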

8、Install the network plugin

1. Download the calico deployment manifest calico.yaml:

$ wget https://docs.projectcalico.org/manifests/calico.yaml
$ cat calico.yaml | grep 'image:'
            image: docker.io/calico/cni:v3.25.0
            image: docker.io/calico/cni:v3.25.0
            image: docker.io/calico/node:v3.25.0
            image: docker.io/calico/node:v3.25.0
            image: docker.io/calico/kube-controllers:v3.25.0

2. Change the image registry (optional)

calico.yaml uses images from docker.io by default; due to network restrictions, servers in mainland China may fail to pull them. You can remove the docker.io/ prefix from the image URLs in calico.yaml:

$ sed -i 's#docker.io/##g' calico.yaml
$ cat calico.yaml | grep 'image:'
              image: calico/cni:v3.25.0
              image: calico/cni:v3.25.0
              image: calico/node:v3.25.0
              image: calico/node:v3.25.0
              image: calico/kube-controllers:v3.25.0
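
Note that the podSubnet configured earlier (10.28.0.0/16) differs from Calico's default IP pool (192.168.0.0/16). If CALICO_IPV4POOL_CIDR is left commented out in calico.yaml, Calico may fall back to its default pool, so it is worth checking it and, if needed, uncommenting it and setting it to the pod subnet (the value shown is what this cluster would use):
$ grep -n -A1 'CALICO_IPV4POOL_CIDR' calico.yaml
# uncomment and change to:
#             - name: CALICO_IPV4POOL_CIDR
#               value: "10.28.0.0/16"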

3. Deploy the calico network plugin

Once the calico network plugin is deployed successfully, the k8s nodes will become Ready and the coredns Pods will move to Running.

$ kubectl apply -f calico.yaml
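
Verify that the calico Pods come up and the nodes become Ready:
$ kubectl get pods -n kube-system | grep calico
$ kubectl get nodes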

9、etcd high-availability cluster

The cluster's etcd members run on the three k8s control-plane nodes. The etcd manifest on each node currently looks like this:

[root@master1 ~]# cat /etc/kubernetes/manifests/etcd.yaml | grep cluster
    - --initial-cluster=master1=https://192.168.184.60:2380

[root@master2 ~]# cat /etc/kubernetes/manifests/etcd.yaml | grep cluster
    - --initial-cluster=master2=https://192.168.184.61:2380,master1=https://192.168.184.60:2380
    - --initial-cluster-state=existing

[root@master3 ~]# cat /etc/kubernetes/manifests/etcd.yaml | grep cluster
    - --initial-cluster=master2=https://192.168.184.61:2380,master3=https://192.168.184.62:2380,master1=https://192.168.184.60:2380
    - --initial-cluster-state=existing

Align the etcd manifests on all three nodes so that the --initial-cluster parameter lists the URLs of all three etcd members, as follows:

$ cat /etc/kubernetes/manifests/etcd.yaml | grep cluster
    - --initial-cluster=master2=https://192.168.184.61:2380,master3=https://192.168.184.62:2380,master1=https://192.168.184.60:2380

$ systemctl restart kubelet.service

Run a container to check whether the etcd cluster is healthy:

[root@master3 ~]# docker run --rm -it --net host   -v /etc/kubernetes:/etc/kubernetes   registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0   etcdctl -w table   --cert /etc/kubernetes/pki/etcd/peer.crt   --key /etc/kubernetes/pki/etcd/peer.key   --cacert /etc/kubernetes/pki/etcd/ca.crt   --endpoints=https://192.168.184.60:2379,https://192.168.184.61:2379,https://192.168.184.62:2379   endpoint status --cluster
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.184.61:2379 | 3d9b7ab6c9f68602 |   3.5.4 |  5.3 MB |     false |      false |         6 |      66260 |              66260 |        |
| https://192.168.184.62:2379 | 549737404b41a142 |   3.5.4 |  5.4 MB |      true |      false |         6 |      66260 |              66260 |        |
| https://192.168.184.60:2379 | dcd291359976977b |   3.5.4 |  5.4 MB |     false |      false |         6 |      66260 |              66260 |        |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
10、Test that k8s networking works

Run a busybox container to test the network; after entering the container, run the following test commands:

# Test DNS resolution
nslookup kubernetes.default.svc.cluster.local

# Test in-cluster service access
wget -O- <service-name>.<namespace>.svc.cluster.local

# Test external network access
ping www.baidu.com

[root@master3 ~]# kubectl run busybox --image=docker.io/library/busybox:1.28 \
> --image-pull-policy=IfNotPresent \
> --restart=Never \
> --rm -it -- sh
/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
/ # 
/ # ping www.baidu.com
PING www.baidu.com (183.240.99.169): 56 data bytes
64 bytes from 183.240.99.169: seq=0 ttl=51 time=7.833 ms
64 bytes from 183.240.99.169: seq=1 ttl=51 time=7.675 ms

/ # ping kubernetes.default.svc.cluster.local
PING kubernetes.default.svc.cluster.local (10.96.0.1): 56 data bytes
64 bytes from 10.96.0.1: seq=0 ttl=64 time=0.112 ms


Command breakdown:

--image-pull-policy=IfNotPresent: prefer an image that already exists locally
--restart=Never: create a one-off Pod
--rm -it: delete the Pod automatically on exit and keep an interactive terminal