2-2 Upgrading a k8s Cluster with kubeasz

Published: 2023-02-03

Preface

In the previous chapter we deployed a k8s cluster with kubeasz. Once that cluster is in production and you need to upgrade Kubernetes or the container runtime, follow the steps below.

Upgrading k8s and containerd


Upgrading k8s on the master nodes

1) Deploy node: first download the four release tarballs (server, client, node, and the full bundle) from github/kubernetes; extracting them all produces a kubernetes directory:

kubernetes.tar.gz
kubernetes-client-linux-amd64.tar.gz  
kubernetes-node-linux-amd64.tar.gz  
kubernetes-server-linux-amd64.tar.gz  

ls kubernetes/
addons  client  cluster  docs  hack  kubernetes-src.tar.gz  LICENSES  node  README.md  server  version
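The download step can be sketched as follows. The version string here is an assumption taken from the target version later in this post; substitute the release you are actually upgrading to:

```shell
#!/bin/sh
# K8S_VER is a hypothetical target version; adjust as needed.
# dl.k8s.io is the official Kubernetes binary download host.
K8S_VER="v1.24.3"
BASE="https://dl.k8s.io/${K8S_VER}"
for pkg in kubernetes kubernetes-client-linux-amd64 \
           kubernetes-node-linux-amd64 kubernetes-server-linux-amd64; do
  echo "${BASE}/${pkg}.tar.gz"    # fetch each, e.g.: wget "${BASE}/${pkg}.tar.gz"
done
# Then extract everything into the kubernetes/ directory:
#   for f in kubernetes*.tar.gz; do tar -xzf "$f"; done
```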

2) All worker nodes: on every worker, comment out the entry for the master about to be upgraded, so kube-lb stops sending apiserver requests to it:

vim /etc/kube-lb/conf/kube-lb.conf

stream {
    upstream backend {
        # server 192.168.100.161:6443    max_fails=2 fail_timeout=3s;
        server 192.168.100.162:6443    max_fails=2 fail_timeout=3s;
    }
}

systemctl restart kube-lb.service
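Before restarting, the edited file can be syntax-checked. A minimal sketch, assuming kubeasz's default layout in which kube-lb is an nginx build installed under /etc/kube-lb (both paths below are assumptions; adjust if your install differs):

```shell
#!/bin/sh
# Assumed kubeasz kube-lb layout; verify these paths on your own hosts.
LB_BIN=/etc/kube-lb/sbin/kube-lb
LB_CONF=/etc/kube-lb/conf/kube-lb.conf
if [ -x "$LB_BIN" ]; then
  # nginx-style "-t" tests the config without touching the running service
  "$LB_BIN" -t -c "$LB_CONF" -p /etc/kube-lb && systemctl restart kube-lb.service
else
  echo "kube-lb not found at $LB_BIN; skipping check"
fi
```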

3) Master node: on the master being upgraded, stop the five services:

systemctl stop kube-apiserver.service kube-controller-manager.service kube-proxy.service kube-scheduler.service kubelet.service

4) Deploy node: copy the newly downloaded binaries to the master node:

cd /root/kubernetes/server/bin
scp kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubelet kubectl root@192.168.100.161:/usr/local/bin

5) Master node: adjust the k8s configuration if needed (or keep it as-is), then start the five services:

systemctl start kube-apiserver.service kube-controller-manager.service kube-proxy.service kube-scheduler.service kubelet.service
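After starting them, it is worth confirming that all five services actually came back up. A small helper sketch (run on the master; it prints one status line per service, and falls back to "inactive" if systemctl is unavailable, so it is safe to run anywhere):

```shell
#!/bin/sh
# Prints active/inactive for each control-plane service on this host.
check_services() {
  for s in kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet; do
    if systemctl is-active --quiet "$s" 2>/dev/null; then
      echo "$s: active"
    else
      echo "$s: inactive"
    fi
  done
}
check_services
```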

6) All worker nodes: restore apiserver requests to the upgraded master:

vim /etc/kube-lb/conf/kube-lb.conf
stream {
    upstream backend {
        server 192.168.100.161:6443    max_fails=2 fail_timeout=3s;
        server 192.168.100.162:6443    max_fails=2 fail_timeout=3s;
    }
}

systemctl restart kube-lb.service

7) Deploy node: check the upgrade status; master1 now shows v1.24.3:

kubectl get node
NAME              STATUS                     ROLES    AGE   VERSION
192.168.100.161   Ready,SchedulingDisabled   master   16h   v1.24.3
192.168.100.162   Ready,SchedulingDisabled   master   16h   v1.24.2

Upgrading k8s on the worker nodes

1) Deploy node: drain the pods on the worker node, evicting them to other nodes:

kubectl drain 192.168.100.171 --ignore-daemonsets --force

2) Node: stop the services:

systemctl stop kube-proxy.service kubelet.service

3) Deploy node: copy the binaries to the node:

cd /root/kubernetes/server/bin
scp kube-proxy kubelet kubectl root@192.168.100.171:/usr/local/bin

4) Node: start the services again:

systemctl start kube-proxy.service kubelet.service

5) Deploy node: uncordon the node so it is schedulable again, then check the result:

kubectl uncordon 192.168.100.171

kubectl get node
192.168.100.171   Ready                      node     22h     v1.24.3

Upgrading containerd on both nodes

1) Deploy node: the nodes currently run containerd v1.6.4:

kubectl get node -o wide
NAME              STATUS                     ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
192.168.100.171   Ready                      node     23h   v1.24.2   192.168.100.154   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   containerd://1.6.4

2) Deploy node: containerd, runc, and crictl all need upgrading; download the binaries from GitHub first:

wget https://github.com/containerd/containerd/releases/download/v1.6.6/containerd-1.6.6-linux-amd64.tar.gz
wget https://github.com/opencontainers/runc/releases/download/v1.1.3/runc.amd64
VERSION="v1.24.1"
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz

# After extracting and renaming, you end up with the following files:
ls bin/
containerd  containerd-shim  containerd-shim-runc-v1  containerd-shim-runc-v2  containerd-stress  crictl  ctr  runc

3) Deploy node: drain the pods on the node, evicting them to other nodes:

kubectl drain 192.168.100.171 --ignore-daemonsets --force

4) Node: stop the services and kill any remaining processes; because the running containerd binary stays busy and may block the copy, the simplest route is to disable autostart and reboot:

systemctl disable containerd kube-proxy kubelet  
reboot

5) Deploy node: copy the binaries downloaded in step 2:

scp containerd  containerd-shim  containerd-shim-runc-v1  containerd-shim-runc-v2  containerd-stress  crictl  ctr  runc root@192.168.100.171:/usr/local/bin

6) Node: re-enable autostart and start the services in one step:

systemctl enable --now containerd kube-proxy kubelet  
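With the services back up, a quick sanity check of the runtime versions on the node helps confirm the copy took effect. A small helper sketch (prints one line per tool, falling back to "not found" so it is safe to run anywhere):

```shell
#!/bin/sh
# Prints the installed version of each runtime tool, or "not found".
check_runtime_versions() {
  for b in containerd runc crictl; do
    if command -v "$b" >/dev/null 2>&1; then
      printf '%s: %s\n' "$b" "$("$b" --version 2>/dev/null | head -n1)"
    else
      printf '%s: not found\n' "$b"
    fi
  done
}
check_runtime_versions
```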

7) Deploy node: uncordon the node and check the upgrade status:

kubectl uncordon 192.168.100.171

kubectl get node -o wide
... CONTAINER-RUNTIME
... containerd://1.6.6


