【k8s】(7) Kubernetes 1.29.4 Offline Deployment - Deploying the Network Plugin

Published: 2024-04-26

(1) Kubernetes 1.29.4 Offline Deployment - Preparing the Installation Files
(2) Kubernetes 1.29.4 Offline Deployment - Preparing the Image Files
(3) Kubernetes 1.29.4 Offline Deployment - Environment Initialization
(4) Kubernetes 1.29.4 Offline Deployment - Installing the Components
(5) Kubernetes 1.29.4 Offline Deployment - Initializing the First Control Plane
(6) Kubernetes 1.29.4 Offline Deployment - Joining the Worker Nodes
(7) Kubernetes 1.29.4 Offline Deployment - The Network Plugin
(8) Kubernetes 1.29.4 Offline Deployment - Testing and Verification

Deploying the Network Plugin

We use Calico as the network plugin. For recent Calico releases, the recommended deployment takes two steps: apply two manifest files,
tigera-operator.yaml and custom-resources.yaml.

Download tigera-operator.yaml and custom-resources.yaml

https://github.com/projectcalico/calico/blob/v3.27.3/manifests/tigera-operator.yaml
https://github.com/projectcalico/calico/blob/v3.27.3/manifests/custom-resources.yaml
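These GitHub pages render the manifests in a browser; for a scripted download you can fetch the raw files instead. A minimal sketch, run on an internet-connected machine and then copied over to the offline hosts:

curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/tigera-operator.yaml
curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/custom-resources.yaml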

Modify tigera-operator.yaml (offline version)

Note: change the image address below to wherever you actually stored the image.

[root@web02 v1.29.4]# cat tigera-operator.yaml | grep image:
                    image:
          image: quay.io/tigera/operator:v1.32.7
[root@web02 v1.29.4]# 
[root@web02 v1.29.4]# sudo sed -i "s#quay.io\/tigera#qinghub.net:5000\/qingcloudtech#g" tigera-operator.yaml
[root@web02 v1.29.4]# cat tigera-operator.yaml | grep image:
                    image:
          image: qinghub.net:5000/qingcloudtech/operator:v1.32.7
[root@web02 v1.29.4]#
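For this rewrite to work offline, the operator image must already exist in the private registry. A minimal mirroring sketch (assumes Docker on an internet-connected machine; qinghub.net:5000/qingcloudtech is the registry from the transcript above, substitute your own):

# pull from the public registry, retag for the private registry, then push
docker pull quay.io/tigera/operator:v1.32.7
docker tag quay.io/tigera/operator:v1.32.7 qinghub.net:5000/qingcloudtech/operator:v1.32.7
docker push qinghub.net:5000/qingcloudtech/operator:v1.32.7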

Run kubectl create -f tigera-operator.yaml
[root@itserver-master2 kube]# kubectl create  -f tigera-operator.yaml 
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
[root@itserver-master2 kube]# 

Check the result
[root@itserver-master2 kube]# kubectl get pods -n tigera-operator
NAME                               READY   STATUS    RESTARTS   AGE
tigera-operator-6779dc6889-zd4zt   1/1     Running   0          55s
[root@itserver-master2 kube]# 
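If you prefer to block until the operator is up rather than re-running kubectl get pods, kubectl wait can watch the Deployment for you (a sketch; the 120s timeout is an arbitrary choice):

kubectl -n tigera-operator wait --for=condition=Available deployment/tigera-operator --timeout=120s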

Modify custom-resources.yaml

Note: the main change is cidr: 172.16.0.0/16, which must be identical to the pod network CIDR given when the first control plane was initialized.

spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 172.16.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

Run kubectl create -f custom-resources.yaml
[root@itserver-master2 kube]# kubectl create  -f custom-resources.yaml 
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
[root@itserver-master2 kube]# kubectl get ns
NAME              STATUS   AGE
calico-system     Active   48s
default           Active   3h5m
kube-node-lease   Active   3h5m
kube-public       Active   3h5m
kube-system       Active   3h5m
tigera-operator   Active   6m35s
[root@itserver-master2 kube]# kubectl get pods -n calico-system
NAME                                       READY   STATUS                  RESTARTS   AGE
calico-kube-controllers-68bf945ffc-mf7t2   0/1     ContainerCreating       0          75s
calico-node-27fgm                          0/1     Init:ImagePullBackOff   0          75s
calico-typha-5886b45b65-pmsm7              0/1     ErrImagePull            0          75s
csi-node-driver-9b29j                      0/2     ContainerCreating       0          75s
[root@itserver-master2 kube]# 
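When pods sit in ErrImagePull or Init:ImagePullBackOff, kubectl describe shows exactly which image and registry the pull is failing against. A sketch using one of the pod names from the transcript above:

kubectl -n calico-system describe pod calico-typha-5886b45b65-pmsm7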

The pull errors above indicate that the Calico component images were not yet reachable from the offline registry; once they become available, the pods recover on their own. After the Calico network is installed, check the pods across all namespaces:
[root@itserver-master2 certs.d]# kubectl get pods --all-namespaces
NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-864697c659-2sdhd          1/1     Running   0          4m18s
calico-apiserver   calico-apiserver-864697c659-c2vp9          1/1     Running   0          4m18s
calico-system      calico-kube-controllers-68bf945ffc-dvrlf   1/1     Running   0          63m
calico-system      calico-node-27fgm                          1/1     Running   0          18h
calico-system      calico-node-zwpls                          1/1     Running   0          17h
calico-system      calico-typha-5886b45b65-pmsm7              1/1     Running   0          18h
calico-system      csi-node-driver-9b29j                      2/2     Running   0          18h
calico-system      csi-node-driver-mrtq5                      2/2     Running   0          17h
kube-system        coredns-67bd986d4c-67fvl                   1/1     Running   0          16m
kube-system        coredns-67bd986d4c-x7vk7                   1/1     Running   0          56m
kube-system        etcd-itserver-master2                      1/1     Running   1          21h
kube-system        kube-apiserver-itserver-master2            1/1     Running   1          21h
kube-system        kube-controller-manager-itserver-master2   1/1     Running   1          21h
kube-system        kube-proxy-9rv85                           1/1     Running   0          21h
kube-system        kube-proxy-l9rht                           1/1     Running   1          17h
kube-system        kube-scheduler-itserver-master2            1/1     Running   1          21h
tigera-operator    tigera-operator-6779dc6889-zd4zt           1/1     Running   0          18h
[root@itserver-master2 certs.d]# 

When the pods in all of the namespaces above show Running, the network plugin has been deployed successfully.
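As a final check, the nodes themselves should now report Ready, since a working CNI is what moves them out of the NotReady state:

kubectl get nodes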


You can install and deploy this directly with the 【QingHub Studio】 suite, or follow the manual steps in this document. The project is fully open source, and the complete scripts are available from:
Open source repository: https://gitee.com/qingplus/qingcloud-platform