k8s Learning Notes


k8s cluster version: 1.23.6, with one master and two nodes.
Deployed on CentOS 7.9.

1 Core Concepts

Official docs: https://kubernetes.io/zh-cn/docs/reference/kubectl/

1. k8s Components

k8s control plane components (master):
kube-apiserver: the API service; exposes the Kubernetes API in a RESTful style.

kube-controller-manager: the controller manager; runs the controllers for each resource type and manages the corresponding k8s resources.

cloud-controller-manager: the cloud controller manager; integrates with controller APIs provided by third-party cloud platforms.

kube-scheduler: the scheduler; assigns Pods to the most suitable node (server) based on scheduling algorithms.

etcd: think of it as the k8s database. A distributed key-value store that uses the Raft algorithm for self-managed cluster high availability (persistent storage).

k8s worker node components (node):

kubelet: manages the Pod lifecycle, storage, and networking on its node.

kube-proxy: the network proxy; handles Service discovery and load balancing (layer 4).

container-runtime: the container runtime environment: docker / containerd / CRI-O.


Add-on components:

kube-dns: provides DNS for the entire cluster.

Ingress controller: exposes the cluster to external traffic (access into the k8s cluster from outside).

prometheus: monitoring.

dashboard: the web console.

federation / ES

Kubernetes cluster
└── Namespace (e.g. default, kube-system)
├── Deployment
│ └── ReplicaSet (intermediate layer)
│ └── Pod (the resource that actually runs)
└── Pod (a bare Pod defined directly in YAML)

2. Stateful vs Stateless

Stateless: has no dependency on the local environment, e.g. stores no data on local disk. Example: an nginx reverse proxy (it keeps no data of its own).

Stateful: depends on the local environment, e.g. needs to store data on local disk (which, in a cluster, has to be replicated).

3. Resources and Objects

Resource: like a class. Object: a concrete instance created from that class.

Object spec and status:

spec: the specification; describes the desired state of the object.

status: the actual state of the object. This field is maintained by k8s, which keeps driving the actual state toward the desired state.

Resource categories: metadata-level, cluster-level, namespace-level.

Cluster-level resources: exist at cluster scope and can be shared by all resources in the cluster.

Namespace-level resources: scoped to a namespace and normally only usable within that namespace.

Metadata-level resources: describe metadata for other resources and can be used by every resource. Example: HPA (automatic scale up/down).


4. Pods in Brief

Pod: a Pod can contain multiple containers (think of it as a container group whose containers are tightly coupled). When two containers depend strongly on each other, they can share the network and share files between their filesystems.
When a Pod is created, a pause container is started underneath it; the containers created in the Pod interact with and share namespaces through that pause container. In most cases, one Pod runs one container.

5. Controllers

1. For Stateless Services

These controllers manage Pods.
ReplicationController: dynamically adjusts the number of Pod replicas (scale up/down).

ReplicaSet: dynamically manages Pods and uses a selector to choose which Pods it applies to (each Pod carries a label). (scale up/down)

Deployment: what you mostly use in practice. A higher-level wrapper around ReplicaSet that provides richer deployment features: creating ReplicaSets/Pods, rolling upgrades/rollbacks, pause and resume.

2. For Stateful Services

StatefulSet: updates happen in order and its data needs persistent storage. It is made up of a headless Service plus volumeClaimTemplates. The headless Service handles DNS for the stateful Pods.

volumeClaimTemplates: templates used to create a persistent volume claim for each Pod.

Each Pod in a StatefulSet gets a DNS name of the form statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local

  • serviceName is the name of the headless Service
  • 0..N-1 is the Pod's ordinal, from 0 to N-1
  • statefulSetName is the name of the StatefulSet
  • namespace is the namespace the service lives in; the headless Service and the StatefulSet must be in the same namespace
  • .cluster.local is the cluster domain

3. Daemons

Daemon workloads: DaemonSet

A DaemonSet guarantees that one replica of a container runs on every Node. It is commonly used to deploy cluster-wide logging, monitoring, or other system-management agents. Typical uses include:

  • log collection, e.g. fluentd, logstash
  • system monitoring, e.g. Prometheus Node Exporter, collectd, New Relic agent, Ganglia gmond
  • system programs, e.g. kube-proxy, kube-dns, glusterd, ceph

4. Jobs / CronJobs

Job: a one-off task; when it finishes, its Pod is terminated and no new container is started.

CronJob: runs periodically on a schedule; the k8s counterpart of crontab. A minimal sketch of both follows.
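As a rough illustration (not taken from the original notes; the names, image, and schedule are made up), a minimal Job and CronJob look roughly like this:

apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job # hypothetical name
spec:
  backoffLimit: 2 # retry at most 2 times on failure
  template:
    spec:
      restartPolicy: Never # Job Pods are not restarted after they complete
      containers:
      - name: hello
        image: busybox
        command: ["sh", "-c", "echo hello from job"]
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello-cron # hypothetical name
spec:
  schedule: "*/5 * * * *" # standard cron syntax: every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox
            command: ["sh", "-c", "date; echo hello from cronjob"]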

6. Service Discovery


1. How Service and Ingress Interact


7. Storage Volumes

Volume / CSI

8. Special Storage Types

ConfigMap, Secret, DownwardAPI. A hedged ConfigMap sketch follows.
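These are only listed by name in the notes; as a hedged illustration (the name and keys are made up), a ConfigMap holds plain key/value pairs or whole files, and a Secret works the same way with base64-encoded values:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config # hypothetical name
data:
  APP_MODE: "test" # simple key/value entry
  nginx.conf: | # an entire file can be stored under one key
    worker_processes 1;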

9. Role / RoleBinding. A hedged RBAC sketch follows.
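Also only named in the notes; a minimal sketch of namespace-scoped RBAC (the role name and the choice of the default ServiceAccount are assumptions):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader # hypothetical name
  namespace: default
rules:
- apiGroups: [""] # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: default # assumption: bind to the default service account
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io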

2 Basic Commands

kubectl get pods/nodes
Common abbreviations: services → svc, deployment → deploy, namespace → ns, nodes → no, pods → po

Example: an nginx managed by a Deployment
 kubectl create deployment nginx --image=nginx
 kubectl expose deployment nginx --port=80 --type=NodePort  # expose port 80 as a NodePort
This creates an nginx Pod via the Deployment.
It can then be seen with kubectl get deploy.
[root@k8s-master k8s]# kubectl get services  # a Service named nginx was also created by the expose command
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        124m
nginx        NodePort    10.108.214.62   <none>        80:32717/TCP   75m
[root@k8s-master k8s]# kubectl scale deploy --replicas=3 nginx  # scale nginx to three replicas
deployment.apps/nginx scaled
[root@k8s-master k8s]# kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            3           26m
[root@k8s-master k8s]# kubectl get po -o wide  # show detailed Pod info

To remove the Pods, delete the nginx Deployment directly, and also delete the nginx Service:
kubectl delete deploy nginx
kubectl delete svc nginx

3 Config Files: Resource Manifests (basic Pod usage)

apiVersion: v1
kind: Pod
metadata: # resource metadata (for Pod, Deployment, Service, ...); metadata.name sets the resource's name
  name: multi-container-pod
spec:
  containers:
    - name: container-1  # first container; each item in the containers list is one container inside the Pod
      image: nginx
      ports:
        - containerPort: 80
    - name: container-2  # second container
      image: redis
      ports:
        - containerPort: 6379
    - name: container-3  # third container
      image: busybox
      command: ["sh", "-c", "while true; do echo Hello; sleep 5; done"]

Example:

Take nginx-daemon-pod as an example:

[root@k8s-master pods]# pwd
/opt/k8s/pods
[root@k8s-master pods]# cat nginx-demo.yaml 
apiVersion: v1 # API document version
kind: Pod # resource type; could also be Deployment, StatefulSet, etc.
metadata: # Pod metadata
  name: nginx-daemon-pod # Pod name
  labels: # Pod labels
    type: app # custom label: key "type", value "app"
    versiontest: 1.0.0 # custom Pod version label
  namespace: 'default' # namespace
spec: # desired state the Pod should reach
  containers: # the containers in this Pod
  - name: nginx # container name
    image: nginx:1.7.9 # container image
    imagePullPolicy: IfNotPresent # image pull policy: pull from the registry only if not present locally
    command: # command to run on startup
    - nginx
    - -g
    - 'daemon off;' # nginx -g 'daemon off;'
    workingDir: /usr/share/nginx/html # working directory after the container starts
    ports: 
    - name: http # port name
      containerPort: 80 # port exposed inside the container
      protocol: TCP # protocol used on this port
    env: # environment variables
    - name: JVM_OPTS # variable name
      value: '-Xms128m -Xmx128m' # variable value
    resources: 
      requests: # minimum resources required
        cpu: 100m # at least 0.1 CPU core
        memory: 128Mi # at least 128Mi of memory
      limits: # maximum allowed
        cpu: 200m # at most 0.2 CPU core
        memory: 256Mi # at most 256Mi of memory
  restartPolicy: OnFailure # restart policy: restart only on failure
    
[root@k8s-master pods]# kubectl describe po nginx-daemon-pod  # show Pod details; the Events section of the output follows
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  63s   default-scheduler  Successfully assigned default/nginx-daemon-pod to k8s-node2
  Normal  Pulling    61s   kubelet            Pulling image "nginx:1.7.9"
  Normal  Pulled     30s   kubelet            Successfully pulled image "nginx:1.7.9" in 30.880086733s
  Normal  Created    30s   kubelet            Created container nginx
  Normal  Started    30s   kubelet            Started container nginx
[root@k8s-master pods]# kubectl get pod
NAME               READY   STATUS    RESTARTS   AGE
nginx-daemon-pod   1/1     Running   0          86s
[root@k8s-master pods]# kubectl get pod -o wide
NAME               READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
nginx-daemon-pod   1/1     Running   0          6m24s   10.244.169.130   k8s-node2   <none>           <none>


4. Probes

Why probes: once a container is running, probes check whether it is healthy; if a probe fails, the restart policy decides whether the container gets restarted.

StartupProbe: determines whether the container has finished starting. When a startupProbe is configured, the other probes are disabled until it succeeds; only then do the other probes take over.

LivenessProbe: checks whether the application inside the container is still running. If the probe fails, kubelet restarts the container according to the configured restart policy; if no liveness probe is configured, the container is assumed healthy and no restart is triggered. In short: after startup, restart on failure.

ReadinessProbe: checks whether the application inside the container is healthy enough to serve requests. Only when it returns success is the container considered fully started and allowed to receive external traffic. In short: after startup, admit traffic only on success.

Probe mechanisms:

Exec action: run a command inside the container (e.g. cat / ps); an exit code of 0 means the container is healthy.

TCP socket action: check whether a port accepts connections.

HTTP GET action: judge by the returned status code; a code from 200 up to (but not including) 400 counts as healthy.

initialDelaySeconds: 60 # wait this long before probing starts
timeoutSeconds: 2 # a probe taking longer than this counts as a failure
periodSeconds: 5 # interval between probes
successThreshold: 1 # one success counts as success
failureThreshold: 2 # two failures count as failure

Example: StartupProbe

[root@k8s-master pods]# cat nginx-po.yaml 
apiVersion: v1 # API document version
kind: Pod # resource type; could also be Deployment, StatefulSet, etc.
metadata: # Pod metadata
  name: nginx-po # Pod name
  labels: # Pod labels
    type: app # custom label: key "type", value "app"
    versiontest: 1.0.0 # custom Pod version label
  namespace: 'default' # namespace
spec: # desired state the Pod should reach
  containers: # the containers in this Pod
  - name: nginx # container name
    image: nginx:1.7.9 # container image
    imagePullPolicy: IfNotPresent # image pull policy: pull from the registry only if not present locally
    startupProbe: # startup probe
      httpGet: # probe mechanism: HTTP GET request
        path: /path/api # HTTP request path
        port: 80 # request port
      failureThreshold: 3 # how many failures count as a real failure
      periodSeconds: 10 # interval between probes
      successThreshold: 1 # how many successes count as success
      timeoutSeconds: 5 # request timeout
    command: # command to run on startup
    - nginx
    - -g
    - 'daemon off;' # nginx -g 'daemon off;'
    workingDir: /usr/share/nginx/html # working directory after the container starts
    ports: 
    - name: http # port name
      containerPort: 80 # port exposed inside the container
      protocol: TCP # protocol used on this port
    env: # environment variables
    - name: JVM_OPTS # variable name
      value: '-Xms128m -Xmx128m' # variable value
    resources: 
      requests: # minimum resources required
        cpu: 100m # at least 0.1 CPU core
        memory: 128Mi # at least 128Mi of memory
      limits: # maximum allowed
        cpu: 200m # at most 0.2 CPU core
        memory: 256Mi # at most 256Mi of memory
  restartPolicy: OnFailure # restart policy: restart only on failure


Startup fails (the probe path returns 404):
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  2m17s                default-scheduler  Successfully assigned default/nginx-daemon-pod to k8s-node2
  Normal   Pulled     47s (x4 over 2m17s)  kubelet            Container image "nginx:1.7.9" already present on machine
  Normal   Created    47s (x4 over 2m16s)  kubelet            Created container nginx
  Normal   Started    47s (x4 over 2m16s)  kubelet            Started container nginx
  Normal   Killing    47s (x3 over 107s)   kubelet            Container nginx failed startup probe, will be restarted
  Warning  Unhealthy  37s (x10 over 2m7s)  kubelet            Startup probe failed: HTTP probe failed with statuscode: 404


If path: /path/api is changed to path: /index.html, the probe succeeds:
    Startup:   http-get http://:80/index.html delay=0s timeout=5s period=10s #success=1 #failure=3
    Environment:
      JVM_OPTS:  -Xms128m -Xmx128m
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cvgpk (ro)



If the probe mechanism is switched to a TCP socket check:
    imagePullPolicy: IfNotPresent # image pull policy: pull from the registry only if not present locally
    startupProbe: # startup probe
    #  httpGet: # probe mechanism: HTTP GET request
    #    path: /index.html # HTTP request path
      tcpSocket: 
        port: 80 # port to probe
      failureThreshold: 3 # how many failures count as a real failure
Then, after a successful start, kubectl describe pod nginx-po shows:
    Startup:   tcp-socket :80 delay=0s timeout=5s period=10s #success=1 #failure=3
    Environment:
      JVM_OPTS:  -Xms128m -Xmx128m
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2fl7m (ro)


If the probe is changed to an exec command:
    imagePullPolicy: IfNotPresent # image pull policy: pull from the registry only if not present locally
    startupProbe: # startup probe
    #  httpGet: # probe mechanism: HTTP GET request
    #    path: /index.html # HTTP request path
    #  tcpSocket: 
    #    port: 80 # port to probe
      exec:
        command:
        - sh
        - -c
        - "echo 'success' > /inited;"
      failureThreshold: 3 # how many failures count as a real failure
Result after startup:
    Requests:
      cpu:     100m
      memory:  128Mi
    Startup:   exec [sh -c echo 'success' > /inited;] delay=0s timeout=5s period=10s #success=1 #failure=3

[root@k8s-master pods]# kubectl exec -it  nginx-po -- cat /inited
success
[root@k8s-master pods]# 

Example: LivenessProbe

The livenessProbe watches the container after startup and has it restarted when it becomes unavailable.

[root@k8s-master pods]# cat nginx-liveness-po.yaml 
apiVersion: v1 # API document version
kind: Pod # resource type; could also be Deployment, StatefulSet, etc.
metadata: # Pod metadata
  name: nginx-po # Pod name
  labels: # Pod labels
    type: app # custom label: key "type", value "app"
    versiontest: 1.0.0 # custom Pod version label
  namespace: 'default' # namespace
spec: # desired state the Pod should reach
  containers: # the containers in this Pod
  - name: nginx # container name
    image: nginx:1.7.9 # container image
    imagePullPolicy: IfNotPresent # image pull policy: pull from the registry only if not present locally
    startupProbe: # startup probe
    #  httpGet: # probe mechanism: HTTP GET request
    #    path: /index.html # HTTP request path
    #  tcpSocket: 
    #    port: 80 # port to probe
      exec:
        command:
        - sh 
        - -c 
        - "sleep 3;echo 'success' > /inited;" 
      failureThreshold: 3 # how many failures count as a real failure
      periodSeconds: 10 # interval between probes
      successThreshold: 1 # how many successes count as success
      timeoutSeconds: 5 # request timeout
    livenessProbe: # liveness probe
      httpGet: # probe mechanism: HTTP GET request
        path: /started.html # HTTP request path
    #  tcpSocket: 
        port: 80 # request port
      failureThreshold: 3 # how many failures count as a real failure
      periodSeconds: 10 # interval between probes
      successThreshold: 1 # how many successes count as success
      timeoutSeconds: 5 # request timeout

    command: # command to run on startup
    - nginx
    - -g
    - 'daemon off;' # nginx -g 'daemon off;'
    workingDir: /usr/share/nginx/html # working directory after the container starts
    ports: 
    - name: http # port name
      containerPort: 80 # port exposed inside the container
      protocol: TCP # protocol used on this port
    env: # environment variables
    - name: JVM_OPTS # variable name
      value: '-Xms128m -Xmx128m' # variable value
    resources: 
      requests: # minimum resources required
        cpu: 100m # at least 0.1 CPU core
        memory: 128Mi # at least 128Mi of memory
      limits: # maximum allowed
        cpu: 200m # at most 0.2 CPU core
        memory: 256Mi # at most 256Mi of memory
  restartPolicy: OnFailure # restart policy: restart only on failure


In this livenessProbe configuration the check runs every 10s with a 5s timeout per request; three failures mark the container as unhealthy, and kubelet then restarts it according to the restart policy.
While /started.html has not been put into the container, the probe fails:
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m27s                default-scheduler  Successfully assigned default/nginx-po to k8s-node2
  Normal   Pulled     87s (x4 over 3m26s)  kubelet            Container image "nginx:1.7.9" already present on machine
  Normal   Created    87s (x4 over 3m26s)  kubelet            Created container nginx
  Normal   Started    87s (x4 over 3m26s)  kubelet            Started container nginx
  Normal   Killing    87s (x3 over 2m47s)  kubelet            Container nginx failed liveness probe, will be restarted
  Warning  Unhealthy  67s (x10 over 3m7s)  kubelet            Liveness probe failed: HTTP probe failed with statuscode: 404
[root@k8s-master pods]# kubectl cp started.html nginx-po:/usr/share/nginx/html/  # copy started.html into the container
After the file is in place:
[root@k8s-master pods]# kubectl get pod
NAME       READY   STATUS    RESTARTS      AGE
nginx-po   1/1     Running   3 (50s ago)   2m50s

Example: ReadinessProbe

The container can only receive external traffic once it is ready.

    startupProbe: # startup probe
    #  httpGet: # probe mechanism: HTTP GET request
    #    path: /index.html # HTTP request path
    #  tcpSocket: 
    #    port: 80 # port to probe
      exec:
        command:
        - sh 
        - -c 
        - "sleep 3;echo 'success' > /inited;" 
      failureThreshold: 3 # how many failures count as a real failure
      periodSeconds: 10 # interval between probes
      successThreshold: 1 # how many successes count as success
      timeoutSeconds: 5 # request timeout
    readinessProbe: # readiness probe
      httpGet: # probe mechanism: HTTP GET request
        path: /started.html # HTTP request path
    #  tcpSocket: 
        port: 80 # request port
      failureThreshold: 3 # how many failures count as a real failure
      periodSeconds: 10 # interval between probes
      successThreshold: 1 # how many successes count as success
      timeoutSeconds: 5 # request timeout
For simplicity, external access was not configured for this example.

When started.html is missing, the probe reports errors:
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  2m34s                 default-scheduler  Successfully assigned default/nginx-po to k8s-node1
  Normal   Pulled     2m33s                 kubelet            Container image "nginx:1.7.9" already present on machine
  Normal   Created    2m33s                 kubelet            Created container nginx
  Normal   Started    2m33s                 kubelet            Started container nginx
  Warning  Unhealthy  14s (x15 over 2m21s)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 404

[root@k8s-master pods]# kubectl get po -o wide
NAME       READY   STATUS    RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
nginx-po   0/1     Running   0          115s   10.244.36.72   k8s-node1   <none>           <none>
[root@k8s-master pods]# kubectl cp started.html nginx-po:/usr/share/nginx/html/
[root@k8s-master pods]# kubectl get po -o wide
NAME       READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
nginx-po   1/1     Running   0          2m58s   10.244.36.72   k8s-node1   <none>           <none>

5. Pod Lifecycle


preStop: used to clean up internal data before the container stops.

postStart: runs during the startup phase of the lifecycle; it is not guaranteed to run before the container's command.

Pod termination flow: the Pod's IP is removed from the Endpoints, the Pod enters the Terminating state, and the preStop hook runs.

Example:

apiVersion: v1 # API document version
kind: Pod # resource type; could also be Deployment, StatefulSet, etc.
metadata: # Pod metadata
  name: nginx-po # Pod name
  labels: # Pod labels
    type: app # custom label: key "type", value "app"
    versiontest: 1.0.0 # custom Pod version label
  namespace: 'default' # namespace
spec: # desired state the Pod should reach
  terminationGracePeriodSeconds: 40 # how long the Pod gets for cleanup when it is deleted
  containers: # the containers in this Pod
  - name: nginx # container name
    image: nginx:1.7.9 # container image
    imagePullPolicy: IfNotPresent # image pull policy: pull from the registry only if not present locally
    lifecycle: # lifecycle configuration
      postStart: # runs during startup; not guaranteed to run before the container's command
        exec: 
          command: 
          - sh
          - -c
          - "echo '<h1>pre stop</h1>' > /usr/share/nginx/html/prestop.html"
      preStop:
        exec: 
          command:
          - sh
          - -c 
          - "sleep 50; echo 'sleep finished...' >> /usr/share/nginx/html/prestop.html"
    command:
    - nginx
    - -g
    - 'daemon off;' # nginx -g 'daemon off;'
    workingDir: /usr/share/nginx/html # working directory after the container starts
    ports: 
    - name: http # port name
      containerPort: 80 # port exposed inside the container
      protocol: TCP # protocol used on this port
    env: # environment variables
    - name: JVM_OPTS # variable name
      value: '-Xms128m -Xmx128m' # variable value
    resources: 
      requests: # minimum resources required
        cpu: 100m # at least 0.1 CPU core
        memory: 128Mi # at least 128Mi of memory
      limits: # maximum allowed
        cpu: 200m # at most 0.2 CPU core
        memory: 256Mi # at most 256Mi of memory
  restartPolicy: OnFailure # restart policy: restart only on failure
    

Here preStop is the command executed after the container receives the delete instruction.
Watch the timing: terminationGracePeriodSeconds defaults to 30s, so preStop only gets 30s to run its commands. If it needs more than that, whatever has not yet run in preStop is simply not executed.


[root@k8s-master pods]# kubectl get pod -w
NAME       READY   STATUS    RESTARTS   AGE
nginx-po   1/1     Running   0          9s
nginx-po   1/1     Terminating   0          25s
nginx-po   1/1     Terminating   0          65s
nginx-po   0/1     Terminating   0          65s
nginx-po   0/1     Terminating   0          65s
nginx-po   0/1     Terminating   0          65s

But the actual deletion takes 40s (the grace period), even though preStop sleeps for 50s:
[root@k8s-master pods]# time kubectl delete po nginx-po
pod "nginx-po" deleted

real	0m40.633s
user	0m0.029s
sys	0m0.025s

6. Labels and Selectors

Labels

Labels are configured under metadata.labels of each resource.

They can also be managed with kubectl:

kubectl label po <resource-name> app=hello              # add a label

kubectl label po <resource-name> app=hello2 --overwrite # modify a label

# selector: look up pods by a single label value
kubectl get po -A -l app=hello

# show the labels of all pods
kubectl get po --show-labels

[root@k8s-master pods]# kubectl get po --show-labels  # current labels
NAME       READY   STATUS    RESTARTS   AGE    LABELS
nginx-po   1/1     Running   0          5m7s   author=jiaxing,type=app,versiontest=1.0.0
[root@k8s-master pods]# kubectl label po nginx-po author=jiaxing123 --overwrite
pod/nginx-po labeled
[root@k8s-master pods]# kubectl label po show
error: at least one label update is required
[root@k8s-master pods]# ^C
[root@k8s-master pods]# kubectl get po --show-labels
NAME       READY   STATUS    RESTARTS   AGE     LABELS
nginx-po   1/1     Running   0          6m23s   author=jiaxing123,type=app,versiontest=1.0.0
[root@k8s-master pods]# 

Selectors:

[root@k8s-master pods]# kubectl get po --show-labels 
NAME       READY   STATUS    RESTARTS   AGE     LABELS
nginx-po   1/1     Running   0          6m23s   author=jiaxing123,type=app,versiontest=1.0.0
[root@k8s-master pods]# kubectl get po -l app=java
No resources found in default namespace.
[root@k8s-master pods]# kubectl get po -l type=app  # match on a single value
NAME       READY   STATUS    RESTARTS   AGE
nginx-po   1/1     Running   0          8m27s
[root@k8s-master pods]# kubectl get po -l type=app1
No resources found in default namespace.
[root@k8s-master pods]# 


Multi-condition queries:
[root@k8s-master pods]# kubectl get po  -A -l  type=app --show-labels
NAMESPACE   NAME       READY   STATUS    RESTARTS   AGE   LABELS
default     nginx-po   1/1     Running   0          13m   author=jiaxing123,type=app,versiontest=1.0.0
[root@k8s-master pods]# kubectl get po -l 'versiontest in (1.0.0, 1.0.1, 1.0.3)'
NAME       READY   STATUS    RESTARTS   AGE
nginx-po   1/1     Running   0          16m
[root@k8s-master pods]# 
[root@k8s-master pods]# 
[root@k8s-master pods]# kubectl get po -l versiontest!=1.2.0,type=app   # the two conditions are ANDed together
NAME       READY   STATUS    RESTARTS   AGE
nginx-po   1/1     Running   0          17m
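The same set-based matching can also appear in manifests; a hedged sketch of a selector using matchExpressions (reusing the label keys above), roughly equivalent to the in/!= CLI queries:

  selector:
    matchExpressions: # set-based selector
    - key: versiontest
      operator: In # matches any of the listed values
      values: ["1.0.0", "1.0.1", "1.0.3"]
    - key: type
      operator: NotIn
      values: ["app1"]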

7. Deployment

1. Hierarchy

deploy -> replicaset -> pod

A Deployment creates Pods through a ReplicaSet and is used to raise or lower the ReplicaSet's replica count.

[root@k8s-master deployments]# kubectl create deploy nginx-deploy --image=nginx:1.7.9
deployment.apps/nginx-deploy created
[root@k8s-master deployments]# kubectl get deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   1/1     1            1           16s
[root@k8s-master deployments]# kubectl get replicaset
NAME                      DESIRED   CURRENT   READY   AGE
nginx-deploy-78d8bf4fd7   1         1         1       103s
[root@k8s-master deployments]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-78d8bf4fd7-h5f4z   1/1     Running   0          2m5s

spec (specification): defines the resource object's desired state, i.e. tells Kubernetes what you want this resource to eventually look like.

template (Pod template)

[root@k8s-master deployments]# cat nginx-deploy.yaml 
apiVersion: apps/v1 # Deployment API version
kind: Deployment # resource type: Deployment
metadata:  # metadata
  labels:  # labels
    app: nginx-deploy # key/value label on the Deployment itself; it does not have to be nginx-deploy
  name: nginx-deploy # name of the Deployment
  namespace: default # namespace it lives in
spec:
  replicas: 1 # desired replica count
  revisionHistoryLimit: 10 # how many old revisions to keep after rolling updates
  selector: # selector used to find matching labels; tells the Deployment which Pods it manages
    matchLabels: # match by label
      app: nginx-deploy # label to match
  strategy: # update strategy
    rollingUpdate: # rolling update settings
      maxSurge: 25%  # during a rolling update, at most this many Pods above the desired count (e.g. 10 replicas -> up to 10 + 10*0.25 extra)
      maxUnavailable: 25% # at most this fraction of Pods may be unavailable during the update (e.g. 10 replicas -> at least 10 - 10*0.25 = 7.5 must stay available)
    type: RollingUpdate # update type: rolling update
  template: # Pod template
    metadata: # Pod metadata
      labels: # Pod labels; these label the Pod template, so the Pods it creates are managed by the Deployment
        app: nginx-deploy
    spec: # desired Pod state
      containers: # the Pod's containers
      - image: nginx:1.7.9 # image
        imagePullPolicy: IfNotPresent
        name: nginx # container name
      restartPolicy: Always # restart policy
      terminationGracePeriodSeconds: 30 # maximum grace period for deletion




Deployment
├── metadata
└── spec (Deployment spec)
    ├── replicas
    ├── selector
    ├── strategy
    └── template (Pod template)
        ├── metadata
        └── spec (Pod spec)
            ├── containers
            ├── restartPolicy
            └── ...
The layers are linked through labels:
- metadata.labels (app: nginx-deploy): labels the Deployment object itself, mainly for management (e.g. filtering with kubectl get deploy -l app=nginx-deploy).
- spec.selector.matchLabels (app: nginx-deploy): the selector the Deployment uses to find the Pods it manages; it must match the labels in the Pod template, otherwise the Deployment cannot manage its Pods.
- spec.template.metadata.labels (app: nginx-deploy): labels the Pod template; the Pods created from it carry this label, get matched by the selector, and are therefore controlled by this Deployment.

2. Rolling Updates

Edit the nginx-deploy manifest.

Change the replica count from 1 to 3:
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:

[root@k8s-master deployments]# kubectl get deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   3/3     1            3           61m
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   3/3     2            3           61m
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   3/3     3            3           62m
[root@k8s-master deployments]# kubectl rollout status deploy  nginx-deploy
deployment "nginx-deploy" successfully rolled out

Trigger a rolling update:
[root@k8s-master deployments]# kubectl set image deployment/nginx-deploy nginx=nginx:1.7.9
deployment.apps/nginx-deploy image updated # output
[root@k8s-master deployments]# kubectl describe deploy nginx-deploy
Events:
  Type    Reason             Age                    From                   Message
  ----    ------             ----                   ----                   -------
  Normal  ScalingReplicaSet  18m                    deployment-controller  Scaled up replica set nginx-deploy-78d8bf4fd7 to 3
  Normal  ScalingReplicaSet  6m17s                  deployment-controller  Scaled up replica set nginx-deploy-754898b577 to 1
  Normal  ScalingReplicaSet  5m47s                  deployment-controller  Scaled down replica set nginx-deploy-78d8bf4fd7 to 2
  Normal  ScalingReplicaSet  5m47s                  deployment-controller  Scaled up replica set nginx-deploy-754898b577 to 2
  Normal  ScalingReplicaSet  5m21s                  deployment-controller  Scaled up replica set nginx-deploy-754898b577 to 3
  Normal  ScalingReplicaSet  5m21s                  deployment-controller  Scaled down replica set nginx-deploy-78d8bf4fd7 to 1
  Normal  ScalingReplicaSet  5m20s                  deployment-controller  Scaled down replica set nginx-deploy-78d8bf4fd7 to 0

What happened: the Deployment was updated (e.g. the image changed), so 1 Pod of the new version was created (managed by the new ReplicaSet 754898b577).
The old ReplicaSet was scaled down to 2 Pods (1 old Pod deleted).
The new ReplicaSet was scaled up to 2 Pods (1 new Pod added).
The new ReplicaSet was scaled up to 3 Pods (another new Pod added).
The old ReplicaSet was scaled down to 1 Pod (another old Pod deleted), and then to 0.
Old Pods: 0
New Pods: 3
Total Pods: 3 (back at replicas: 3).

[root@k8s-master deployments]# kubectl get rs -l app=nginx-deploy   # check the ReplicaSets after the update
NAME                      DESIRED   CURRENT   READY   AGE
nginx-deploy-754898b577   0         0         0       8m10s
nginx-deploy-78d8bf4fd7   3         3         3       69m
[root@k8s-master deployments]# kubectl get po --show-labels  
NAME                            READY   STATUS    RESTARTS   AGE    LABELS
nginx-deploy-78d8bf4fd7-6wrsn   1/1     Running   0          8m1s   app=nginx-deploy,pod-template-hash=78d8bf4fd7
nginx-deploy-78d8bf4fd7-7dlng   1/1     Running   0          8m2s   app=nginx-deploy,pod-template-hash=78d8bf4fd7
nginx-deploy-78d8bf4fd7-ms4kb   1/1     Running   0          8m4s   app=nginx-deploy,pod-template-hash=78d8bf4fd7

3. Rollback

Roll the nginx version back.

[root@k8s-master deployments]# kubectl get rs --show-labels
NAME                      DESIRED   CURRENT   READY   AGE     LABELS
nginx-deploy-754898b577   0         0         0       134m    app=nginx-deploy,pod-template-hash=754898b577
nginx-deploy-78d8bf4fd7   3         3         3       3h15m   app=nginx-deploy,pod-template-hash=78d8bf4fd7

#simulate a failure: set the nginx image to a version that does not exist
[root@k8s-master deployments]# kubectl set image deployment/nginx-deploy nginx=nginx:1.91 # this does not change the local yaml file
deployment.apps/nginx-deploy image updated
[root@k8s-master deployments]# kubectl rollout status deployments nginx-deploy
Waiting for deployment "nginx-deploy" rollout to finish: 1 out of 3 new replicas have been updated...
[root@k8s-master deployments]# kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
nginx-deploy-754898b577   0         0         0       139m
nginx-deploy-78d8bf4fd7   3         3         3       3h20m
nginx-deploy-f7f5656c7    1         1         0       62s  
[root@k8s-master deployments]# kubectl get pod
NAME                            READY   STATUS             RESTARTS   AGE
nginx-deploy-78d8bf4fd7-6wrsn   1/1     Running            0          136m
nginx-deploy-78d8bf4fd7-7dlng   1/1     Running            0          136m
nginx-deploy-78d8bf4fd7-ms4kb   1/1     Running            0          136m
nginx-deploy-f7f5656c7-ptx6s    0/1     ImagePullBackOff   0          83s
[root@k8s-master deployments]# kubectl describe pod nginx-deploy-f7f5656c7-ptx6s 
  Normal   SandboxChanged  2m10s                kubelet            Pod sandbox changed, it will be killed and re-created.
  Warning  Failed          81s (x3 over 2m11s)  kubelet            Failed to pull image "nginx:1.91": rpc error: code = Unknown desc = Error response from daemon: manifest for nginx:1.91 not found: manifest unknown: manifest unknown
  Warning  Failed          81s (x3 over 2m11s)  kubelet            Error: ErrImagePull
  Normal   BackOff         46s (x7 over 2m10s)  kubelet            Back-off pulling image "nginx:1.91"
  Warning  Failed          46s (x7 over 2m10s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling         33s (x4 over 2m18s)  kubelet            Pulling image "nginx:1.91"
[root@k8s-master deployments]# kubectl rollout history deployment/nginx-deploy
deployment.apps/nginx-deploy  # view revision history
REVISION  CHANGE-CAUSE
2         <none>
3         <none>
4         <none>
[root@k8s-master deployments]# kubectl rollout history deployment/nginx-deploy --revision=2  # the image recorded for this revision
deployment.apps/nginx-deploy with revision #2
Pod Template:
  Labels:	app=nginx-deploy
	pod-template-hash=754898b577
  Containers:
   nginx:
    Image:	nginx:1.9.1
    Port:	<none>
    Host Port:	<none>
    Environment:	<none>
    Mounts:	<none>
  Volumes:	<none>

[root@k8s-master deployments]# kubectl rollout history deployment/nginx-deploy --revision=3
deployment.apps/nginx-deploy with revision #3
Pod Template:
  Labels:	app=nginx-deploy
	pod-template-hash=78d8bf4fd7
  Containers:
   nginx:
    Image:	nginx:1.7.9
    Port:	<none>
    Host Port:	<none>
    Environment:	<none>
    Mounts:	<none>
  Volumes:	<none>

[root@k8s-master deployments]# kubectl rollout history deployment/nginx-deploy --revision=4
deployment.apps/nginx-deploy with revision #4
Pod Template:
  Labels:	app=nginx-deploy
	pod-template-hash=f7f5656c7
  Containers:
   nginx:
    Image:	nginx:1.91
    Port:	<none>
    Host Port:	<none>
    Environment:	<none>
    Mounts:	<none>
  Volumes:	<none>
[root@k8s-master deployments]# kubectl rollout undo deployment/nginx-deploy --to-revision=2  # roll back to a historical revision
deployment.apps/nginx-deploy rolled back
[root@k8s-master deployments]# kubectl rollout status deployment/nginx-deploy
deployment "nginx-deploy" successfully rolled out  #查看是否更新成功
[root@k8s-master deployments]# kubectl rollout history deployment/nginx-deploy --revision=5  # revision 5 was created by the rollback
deployment.apps/nginx-deploy with revision #5
Pod Template:
  Labels:	app=nginx-deploy
	pod-template-hash=754898b577
  Containers:
   nginx:
    Image:	nginx:1.9.1
    Port:	<none>
    Host Port:	<none>
    Environment:	<none>
    Mounts:	<none>
  Volumes:	<none>
[root@k8s-master deployments]# kubectl get rs # check the rs and pod state
NAME                      DESIRED   CURRENT   READY   AGE
nginx-deploy-754898b577   3         3         3       147m
nginx-deploy-78d8bf4fd7   0         0         0       3h28m
nginx-deploy-f7f5656c7    0         0         0       9m1s
[root@k8s-master deployments]# kubectl get po
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-754898b577-7kdr5   1/1     Running   0          113s
nginx-deploy-754898b577-8bw2k   1/1     Running   0          112s
nginx-deploy-754898b577-fb8m2   1/1     Running   0          114s
[root@k8s-master deployments]# kubectl describe deploy nginx-deploy # events generated by the rollback
Events:
  Type    Reason             Age                   From                   Message
  ----    ------             ----                  ----                   -------
  Normal  ScalingReplicaSet  10m                   deployment-controller  Scaled up replica set nginx-deploy-f7f5656c7 to 1
  Normal  ScalingReplicaSet  3m20s                 deployment-controller  Scaled down replica set nginx-deploy-f7f5656c7 to 0
  Normal  ScalingReplicaSet  3m20s (x2 over 149m)  deployment-controller  Scaled up replica set nginx-deploy-754898b577 to 1
  Normal  ScalingReplicaSet  3m19s (x2 over 148m)  deployment-controller  Scaled up replica set nginx-deploy-754898b577 to 2
  Normal  ScalingReplicaSet  3m19s (x2 over 148m)  deployment-controller  Scaled down replica set nginx-deploy-78d8bf4fd7 to 2
  Normal  ScalingReplicaSet  3m18s (x2 over 148m)  deployment-controller  Scaled down replica set nginx-deploy-78d8bf4fd7 to 1
  Normal  ScalingReplicaSet  3m18s (x2 over 148m)  deployment-controller  Scaled up replica set nginx-deploy-754898b577 to 3
  Normal  ScalingReplicaSet  3m16s (x2 over 148m)  deployment-controller  Scaled down replica set nginx-deploy-78d8bf4fd7 to 0

4. Scaling with scale, and Pausing Updates

1. Scale Pods directly with kubectl scale

[root@k8s-master deployments]# kubectl scale --help
[root@k8s-master deployments]# kubectl scale --replicas=6 deploy nginx-deploy
deployment.apps/nginx-deploy scaled  # scaled up to 6
[root@k8s-master deployments]# kubectl get deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   6/6     6            6           3h52m
[root@k8s-master deployments]# kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
nginx-deploy-754898b577   6         6         6       171m
[root@k8s-master deployments]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-754898b577-5xgtr   1/1     Running   0          24s
nginx-deploy-754898b577-7kdr5   1/1     Running   0          25m
nginx-deploy-754898b577-8bw2k   1/1     Running   0          25m
nginx-deploy-754898b577-dwq5g   1/1     Running   0          24s
nginx-deploy-754898b577-fb8m2   1/1     Running   0          25m
nginx-deploy-754898b577-h6d8p   1/1     Running   0          24s
# scale back down to 3
[root@k8s-master deployments]# kubectl scale --replicas=3 deploy nginx-deploy
deployment.apps/nginx-deploy scaled
[root@k8s-master deployments]# kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
nginx-deploy-754898b577   3         3         3       172m
nginx-deploy-78d8bf4fd7   0         0         0       3h53m
nginx-deploy-f7f5656c7    0         0         0       33m
[root@k8s-master deployments]# kubectl get deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   3/3     3            3           3h53m
[root@k8s-master deployments]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-754898b577-7kdr5   1/1     Running   0          26m
nginx-deploy-754898b577-8bw2k   1/1     Running   0          26m
nginx-deploy-754898b577-fb8m2   1/1     Running   0          26m
[root@k8s-master deployments]# kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
nginx-deploy-754898b577   3         3         3       176m

2. Pause and resume updates (pause/resume)

Preparation: add memory and cpu requests via kubectl edit deploy nginx-deploy.

[root@k8s-master deployments]#  kubectl edit deploy nginx-deploy  # saving the edit triggers an automatic rollout; the image version was changed here as well
spec:
      containers:
      - image: nginx:1.7.9
        imagePullPolicy: IfNotPresent
        name: nginx
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
[root@k8s-master deployments]# kubectl rollout history deploy nginx-deploy --revision=6  # the record for revision 6
deployment.apps/nginx-deploy with revision #6
Pod Template:
  Labels:	app=nginx-deploy
	pod-template-hash=6d79d48b69
  Containers:
   nginx:
    Image:	nginx:1.9.1
    Port:	<none>
    Host Port:	<none>
    Requests:
      cpu:	100m
      memory:	128Mi
    Environment:	<none>
    Mounts:	<none>
  Volumes:	<none>


[root@k8s-master deployments]# kubectl rollout pause deploy nginx-deploy  # pause, so edits no longer trigger an immediate rollout
deployment.apps/nginx-deploy paused
[root@k8s-master deployments]# kubectl edit deploy nginx-deploy
deployment.apps/nginx-deploy edited
#add resource limits
    spec:
      containers:
      - image: nginx:1.7.9
        imagePullPolicy: IfNotPresent
        name: nginx
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 128Mi
Also change the image version; this time, saving and quitting the editor does not immediately update the Deployment.

Resume the rollout with resume:
[root@k8s-master deployments]# kubectl rollout resume deploy nginx-deploy
deployment.apps/nginx-deploy resumed
[root@k8s-master deployments]# kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
nginx-deploy-6d79d48b69   0         0         0       5m5s
nginx-deploy-754898b577   0         0         0       3h5m
nginx-deploy-78d8bf4fd7   0         0         0       4h6m
nginx-deploy-97c86cc69    3         3         3       14s
nginx-deploy-f7f5656c7    0         0         0       47m
[root@k8s-master deployments]# kubectl rollout history deploy nginx-deploy
deployment.apps/nginx-deploy 
REVISION  CHANGE-CAUSE
3         <none>
4         <none>
5         <none>
6         <none>
7         <none>
# view the details:
[root@k8s-master deployments]# kubectl rollout history deploy nginx-deploy --revision=7
deployment.apps/nginx-deploy with revision #7
Pod Template:
  Labels:	app=nginx-deploy
	pod-template-hash=97c86cc69
  Containers:
   nginx:
    Image:	nginx:1.7.9
    Port:	<none>
    Host Port:	<none>
    Limits:
      cpu:	500m
      memory:	512Mi
    Requests:
      cpu:	100m
      memory:	128Mi
    Environment:	<none>
    Mounts:	<none>
  Volumes:	<none>
  
# 1. pause rolling updates for the Deployment
kubectl rollout pause deployment my-app
# 2. make several changes (no rollout is triggered yet)
kubectl set image deployment my-app my-container=myimage:v2
kubectl edit deployment my-app  # e.g. change an environment variable
# 3. manually resume the rolling update
kubectl rollout resume deployment my-app

5. Deletion

Option 1: kubectl delete deployment nginx-deploy
Option 2: scale the Deployment down (the Deployment object is kept)
Option 3: kubectl delete -f <yaml-file>
Deleting Deployment-managed Pods directly with kubectl delete pod is not recommended; the controller just recreates them.

When the Deployment controller sees that .spec.template has been modified, it triggers a rolling update (new ReplicaSet → Pods replaced).
pause temporarily stops the controller from reacting to such changes.
resume lets it process template changes again.

8. Stateful Applications: StatefulSet

Headless Service / volumeClaimTemplates

1. Example

The Service part:
[root@k8s-master statefulset]# cat web.yaml 
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx # labels the Service object itself
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx # this Service selects all Pods labeled app=nginx and forwards traffic to them
---
apiVersion: apps/v1
kind: StatefulSet # StatefulSet resource
metadata:
  name: web # name of the StatefulSet object
spec:
  serviceName: "nginx" # which (headless) Service manages DNS for the Pods
  replicas: 2
  selector: # tells the StatefulSet which Pods it manages; must match the Pod template labels
    matchLabels:  
      app: nginx
  template:
    metadata:
      labels:
        app: nginx # every Pod created gets this label, so the StatefulSet and Service can match it
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports: # ports exposed inside the container
        - containerPort: 80
          name: web # name of this port entry

[root@k8s-master statefulset]# kubectl create -f web.yaml 
service/nginx created
statefulset.apps/web created
[root@k8s-master statefulset]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          6s
web-1   1/1     Running   0          5s
[root@k8s-master statefulset]# kubectl get sts
NAME   READY   AGE
web    2/2     10s
[root@k8s-master statefulset]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   47h
nginx        ClusterIP   None         <none>        80/TCP    17s

[root@k8s-master statefulset]# kubectl run -it --image busybox:1.28.4  dns-test  /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup web-0.nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      web-0.nginx
Address 1: 10.244.36.92 web-0.nginx.default.svc.cluster.local
/ # exit
Session ended, resume using 'kubectl attach dns-test -c dns-test -i -t' command when the pod is running

Explanation of web-0.nginx.default.svc.cluster.local:
  Each Pod in a StatefulSet gets a DNS name of the form statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local
  serviceName is the name of the headless Service
  0..N-1 is the Pod's ordinal, from 0 to N-1
  statefulSetName is the name of the StatefulSet
  namespace is the namespace the Service lives in; the headless Service and the StatefulSet must be in the same namespace
  .cluster.local is the cluster domain
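The web.yaml above leaves out volumeClaimTemplates; as a hedged sketch of per-Pod persistent storage (the volume name, StorageClass, and size are assumptions, not from the notes), the StatefulSet spec would gain roughly:

  volumeClaimTemplates: # sits under the StatefulSet spec, next to template
  - metadata:
      name: www # volume name referenced by the container's volumeMounts
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "standard" # assumption: any StorageClass that exists in the cluster
      resources:
        requests:
          storage: 1Gi # each Pod gets its own 1Gi PVC (www-web-0, www-web-1, ...)

The nginx container would then mount it via volumeMounts (name: www, mountPath: /usr/share/nginx/html).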

2. Scaling Up and Down

Scale up: kubectl scale sts web --replicas=<n>

[root@k8s-master statefulset]# kubectl scale sts web --replicas=5
statefulset.apps/web scaled
[root@k8s-master statefulset]# kubectl get sts
NAME   READY   AGE
web    5/5     24m
[root@k8s-master statefulset]# kubectl scale sts web --replicas=2
statefulset.apps/web scaled
[root@k8s-master statefulset]# kubectl describe sts web

Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  25m   statefulset-controller  create Pod web-0 in StatefulSet web successful
  Normal  SuccessfulCreate  25m   statefulset-controller  create Pod web-1 in StatefulSet web successful
  Normal  SuccessfulCreate  73s   statefulset-controller  create Pod web-2 in StatefulSet web successful
  Normal  SuccessfulCreate  72s   statefulset-controller  create Pod web-3 in StatefulSet web successful
  Normal  SuccessfulCreate  71s   statefulset-controller  create Pod web-4 in StatefulSet web successful
  Normal  SuccessfulDelete  3s    statefulset-controller  delete Pod web-4 in StatefulSet web successful
  Normal  SuccessfulDelete  2s    statefulset-controller  delete Pod web-3 in StatefulSet web successful
  Normal  SuccessfulDelete  1s    statefulset-controller  delete Pod web-2 in StatefulSet web successful
  
Scaling up creates Pods in order: 0, then 1, then 2.
Scaling down removes them in reverse order: 2, then 1, then 0.

3. Image Updates

(Here the image is updated indirectly with kubectl patch rather than with kubectl set image.)

kubectl patch sts web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"nginx:1.9.1"}]'
op means operation; "/spec/template/spec/containers/0/image" is the JSON path to the image field being replaced.

[root@k8s-master statefulset]# kubectl patch sts web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"nginx:1.9.1"}]'
statefulset.apps/web patched
[root@k8s-master statefulset]# kubectl rollout history sts web
statefulset.apps/web 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
[root@k8s-master statefulset]# kubectl rollout history sts web --revision=2
statefulset.apps/web with revision #2
Pod Template:
  Labels:	app=nginx
  Containers:
   nginx:
    Image:	nginx:1.9.1
    Port:	80/TCP
    Host Port:	0/TCP
    Environment:	<none>
    Mounts:	<none>
  Volumes:	<none>
[root@k8s-master statefulset]# kubectl rollout status sts web
partitioned roll out complete: 2 new pods have been updated...
[root@k8s-master statefulset]# kubectl describe sts web
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.9.1
    Port:         80/TCP
  Normal  SuccessfulDelete  89s                statefulset-controller  delete Pod web-1 in StatefulSet web successful
  Normal  SuccessfulCreate  88s (x2 over 35m)  statefulset-controller  create Pod web-1 in StatefulSet web successful
  Normal  SuccessfulDelete  87s                statefulset-controller  delete Pod web-0 in StatefulSet web successful
  Normal  SuccessfulCreate  86s (x2 over 35m)  statefulset-controller  create Pod web-0 in StatefulSet web successful  # ordered delete-and-recreate
  

4. Canary Releases (RollingUpdate partition)

Also called canary releases. Goal: keep the impact of going live as small as possible by updating just one or two Pods first and rolling out to the rest once they look fine, using the partition parameter.

How partition works:
With 5 Pods the ordinals are 0 1 2 3 4.
Setting partition to 1 means every Pod whose ordinal is >= 1 gets updated, so only Pod 0 stays on the old version.
By changing the partition value you control how far the update has progressed; once partition is set to 0, the rollout is complete.
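Besides the kubectl edit shown next, the partition value can also be changed with a one-line patch; a hedged sketch:

kubectl patch sts web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":1}}}}'   # lower the value to 0 to finish the rollout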

Implementation:
[root@k8s-master statefulset]# kubectl edit sts web
    spec:
      containers:
      - image: nginx:1.7.9
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
  updateStrategy:
    rollingUpdate:
      partition: 1

Check that it matches expectations:
[root@k8s-master statefulset]# kubectl describe pod web-0
    Container ID:   docker://3452d24e1995a01774dd8fe5a2dcf3bb0afc9e9120d01261ca5c40ffbb99cb55
    Image:          nginx:1.9.1
[root@k8s-master statefulset]# kubectl describe pod web-1
    Container ID:   docker://837ff5c212c9406774b751758b727f50b60c93e8e6c3b1d058741ec33d246a3b
    Image:          nginx:1.7.9
[root@k8s-master statefulset]# kubectl describe pod web-2
    Container ID:   docker://c17388e413e67bc75f92ee228d4bb6e811537a5d56067caf3f7a05b7f48f0c05
    Image:          nginx:1.7.9
As expected. Now set partition to 0:
[root@k8s-master statefulset]# kubectl describe pod web-0 # result: the nginx image on web-0 has now been updated too
    Container ID:   docker://b490868d28998eda55cd26534c3bd76c00fd4e2ecbb441bd46ac663b5960e6f0
    Image:          nginx:1.7.9

5. OnDelete

The OnDelete update strategy:

Even after the template is modified (via kubectl edit sts), Pods are not updated; only when a Pod is deleted does a replacement start automatically with the new template (no extra command needed).

kubectl edit sts web
      containers:
      - image: nginx:1.9.1  # change the image; previously, saving the edit could have triggered an immediate update
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
  updateStrategy:
    type: OnDelete  # change the update strategy

[root@k8s-master statefulset]# kubectl delete pod web-4
pod "web-4" deleted
[root@k8s-master statefulset]# kubectl get pod
NAME       READY   STATUS    RESTARTS      AGE
dns-test   1/1     Running   1 (69m ago)   74m
web-0      1/1     Running   0             7m2s
web-1      1/1     Running   0             22m
web-2      1/1     Running   0             22m
web-3      1/1     Running   0             24m
web-4      1/1     Running   0             6s   # it was recreated automatically

[root@k8s-master statefulset]# kubectl describe pod web-4 # only web-4 was updated
Name:         web-4
    Container ID:   docker://ca93baf160b113ddf73b3fa1f7485d8080f153f5466904931584876f058a95fc
    Image:          nginx:1.9.1

[root@k8s-master statefulset]# kubectl describe pod web-3  # web-3 was not updated
Name:         web-3
Namespace:    default
    Container ID:   docker://ae46e9a107e39aab5e80f8593af2fa51b293410aed8246dac9af4e19944ae860
    Image:          nginx:1.7.9

6. Deployment vs StatefulSet

A StatefulSet is not backed by a ReplicaSet; ReplicaSets belong to Deployments.

Why:

In Kubernetes, StatefulSet and Deployment work differently, which explains why a StatefulSet with replicas: 2 creates no ReplicaSet (RS).
1. A StatefulSet manages Pods directly and does not rely on a ReplicaSet.
How a Deployment works:
A Deployment creates and manages ReplicaSets, which in turn control Pod versions and scaling. Every update (e.g. an image change) produces a new ReplicaSet, and old ReplicaSets are kept for rollback.
That is why kubectl get rs shows ReplicaSets.
How a StatefulSet works:
A StatefulSet manages its Pods directly (without a ReplicaSet) in order to guarantee Pod identity and stability (fixed names, persistent storage, ordered scaling).
That is why kubectl get rs shows nothing for it.
2. Why doesn't a StatefulSet need a ReplicaSet?
Characteristics of stateful applications:
A StatefulSet runs stateful services (such as databases), which need:
stable network identity (Pod name, DNS record),
persistent storage (each Pod bound to its own PV/PVC),
ordered deployment and scaling (e.g. web-0 starts before web-1).
These requirements mean the StatefulSet must control Pods directly rather than going through the ReplicaSet abstraction.
Differences in version control:
A StatefulSet also supports rolling updates, but it replaces Pods in order rather than swapping ReplicaSets the way a Deployment does.
Its update history can be viewed with kubectl rollout history statefulset/web.



Feature comparison: Deployment vs StatefulSet
Purpose: Deployment is for stateless apps (e.g. frontend services); StatefulSet is for stateful apps (e.g. databases).
Pod management: a Deployment manages Pods indirectly through a ReplicaSet; a StatefulSet manages Pods directly (replicas can still be configured).
Pod names: Deployment Pods get random hashes (e.g. nginx-5dfc6f4d7-...); StatefulSet Pods get fixed ordinals (web-0, web-1).
Storage: Deployment Pods usually share storage; each StatefulSet Pod gets its own persistent storage.
DNS: Deployment Pods are load-balanced behind a Service; each StatefulSet Pod has its own DNS record.

Although you can still scale with kubectl scale statefulsets web --replicas=3, the StatefulSet manages its Pods directly rather than through a ReplicaSet, so kubectl get rs shows nothing for it. A Deployment is a higher-level abstraction that manages its Pod replica count through a ReplicaSet.

7. Deletion

Cascading vs non-cascading deletion

# delete the StatefulSet and the headless Service
# cascading delete: deleting the statefulset also deletes its pods
kubectl delete statefulset web
# non-cascading delete: deleting the statefulset leaves the pods behind; with the sts gone nobody manages them, and a pod deleted afterwards will not be recreated
kubectl delete sts web --cascade=false
# delete the service
kubectl delete service nginx

Cascading delete: both the sts and its pods are removed
[root@k8s-master statefulset]# kubectl delete sts web
statefulset.apps "web" deleted
[root@k8s-master statefulset]# kubectl get sts
No resources found in default namespace.
[root@k8s-master statefulset]# kubectl get pod
NAME       READY   STATUS    RESTARTS      AGE
dns-test   1/1     Running   1 (96m ago)   101m  # this pod was not created by the sts
[root@k8s-master statefulset]# kubectl delete svc nginx
service "nginx" deleted
[root@k8s-master statefulset]# 


Non-cascading delete: only the sts is removed; the pods stay and can then be deleted one by one, e.g. kubectl delete pod web-0
[root@k8s-master statefulset]# kubectl get pod
NAME       READY   STATUS    RESTARTS      AGE
dns-test   1/1     Running   1 (98m ago)   102m
web-0      1/1     Running   0             10s
web-1      1/1     Running   0             8s
[root@k8s-master statefulset]# kubectl delete sts web --cascade=false
warning: --cascade=false is deprecated (boolean value) and can be replaced with --cascade=orphan.
statefulset.apps "web" deleted
[root@k8s-master statefulset]# kubectl get sts
No resources found in default namespace.
[root@k8s-master statefulset]# kubectl get po
NAME       READY   STATUS    RESTARTS      AGE
dns-test   1/1     Running   1 (99m ago)   104m
web-0      1/1     Running   0             2m
web-1      1/1     Running   0             118s
[root@k8s-master statefulset]# kubectl delete pod web-0
pod "web-0" deleted
[root@k8s-master statefulset]# kubectl delete pod web-1
pod "web-1" deleted
[root@k8s-master statefulset]# kubectl delete svc nginx
service "nginx" deleted

9. DaemonSet

1. Theory

Deploys one daemon Pod on every matching node, e.g. for log collection or monitoring.

Every node can carry a label such as type. If fluentd (used for log collection) should only collect logs on matching nodes, the DaemonSet matches on that label: wherever type=microservices matches, it deploys a Pod. So when a new node joins and its type label matches, a fluentd Pod is scheduled onto the new node automatically.

2. Example

A DaemonSet ignores a Node's unschedulable status. There are several ways to restrict its Pods to particular nodes:

  • nodeSelector: schedule only onto Nodes whose labels match
  • nodeAffinity: a richer Node selector, e.g. with support for set operations (a sketch follows after this example)
  • podAffinity: schedule onto Nodes that already run Pods matching the given conditions
nodeSelector example:
[root@k8s-master daemonset]# cat fluentd-ds.yaml 
apiVersion: apps/v1
kind: DaemonSet # create a DaemonSet resource
metadata:
  name: fluentd # resource name
spec:
  selector:  
    matchLabels:
      app: logging
  template:
    metadata:
      labels:
        app: logging
        id: fluentd
      name: fluentd
    spec:
      containers:
      - name: fluentd-es
        image: agilestacks/fluentd-elasticsearch:v1.3.0
        env: # environment variables
         - name: FLUENTD_ARGS # variable key
           value: -qq # variable value
        volumeMounts: # mount the volumes so data is not lost
         - name: containers # volume name
           mountPath: /var/lib/docker/containers # where to mount the volume inside the container
         - name: varlog
           mountPath: /varlog
      volumes: # volume definitions
         - hostPath: # hostPath volume type, i.e. a directory shared with the node
             path: /var/lib/docker/containers # shared directory on the node
           name: containers # volume name
         - hostPath:
             path: /var/log
           name: varlog
           
spec.selector.matchLabels must exactly match spec.template.metadata.labels.

[root@k8s-master daemonset]# kubectl get pod -o wide # with no node restriction, the Pods are spread across both nodes
NAME            READY   STATUS    RESTARTS       AGE    IP               NODE        NOMINATED NODE   READINESS GATES
dns-test        1/1     Running   1 (143m ago)   148m   10.244.169.148   k8s-node2   <none>           <none>
fluentd-kv6zt   1/1     Running   0              51s    10.244.169.157   k8s-node2   <none>           <none>
fluentd-zmf4b   1/1     Running   0              51s    10.244.36.106    k8s-node1   <none>           <none>
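The manifest above contains no node restriction yet; to get the behavior shown next, a nodeSelector was presumably added to the Pod template (for example via kubectl edit ds fluentd), roughly like this:

    spec:
      nodeSelector: # schedule only onto nodes labeled type=microservices
        type: microservices
      containers:
      - name: fluentd-es
        image: agilestacks/fluentd-elasticsearch:v1.3.0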

[root@k8s-master daemonset]# kubectl label no k8s-node1 type=microservices
node/k8s-node1 labeled # add the label to node1
[root@k8s-master daemonset]# kubectl get nodes --show-labels
NAME         STATUS   ROLES                  AGE    VERSION   LABELS
k8s-master   Ready    control-plane,master   2d1h   v1.23.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-node1    Ready    <none>                 2d1h   v1.23.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux,type=microservices
k8s-node2    Ready    <none>                 2d1h   v1.23.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux

[root@k8s-master daemonset]# kubectl get ds 
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR        AGE
fluentd   1         1         1       1            1           type=microservices   9m40s
[root@k8s-master daemonset]# kubectl get po -o wide  # now only node1, which carries type=microservices, runs fluentd
NAME            READY   STATUS    RESTARTS       AGE    IP               NODE        NOMINATED NODE   READINESS GATES
dns-test        1/1     Running   1 (152m ago)   157m   10.244.169.148   k8s-node2   <none>           <none>
fluentd-4x69b   1/1     Running   0              42s    10.244.36.107    k8s-node1   <none>           <none>

[root@k8s-master daemonset]# kubectl label no k8s-node2 type=microservices # add the label to node2 as well
node/k8s-node2 labeled
[root@k8s-master daemonset]# kubectl get po -l app=logging -o wide # now both nodes run a fluentd Pod
NAME            READY   STATUS    RESTARTS   AGE    IP               NODE        NOMINATED NODE   READINESS GATES
fluentd-4x69b   1/1     Running   0          117s   10.244.36.107    k8s-node1   <none>           <none>
fluentd-9wn9x   1/1     Running   0          4s     10.244.169.158   k8s-node2   <none>           <none>
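The notes only demonstrate nodeSelector; as a hedged sketch of the richer nodeAffinity option mentioned above (the label key and values are illustrative), the Pod template could instead declare:

    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: type
                operator: In # set-based match: any of the listed values
                values:
                - microservices
                - logging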

10. HPA: Automatic Scaling

1. Theory

The HPA watches Pod CPU/memory utilization or custom metrics and automatically scales the number of Pods up or down.

Pod autoscaling: Pods can be scaled automatically based on CPU utilization or custom metrics (works on Deployments and StatefulSets; DaemonSets are not supported).

Prerequisite for CPU/memory-based scaling: the target object must define resources.requests.cpu or resources.requests.memory. The HPA can then be configured to scale out or in when cpu/memory usage reaches the configured percentage of those requests.

  • The controller manager queries metrics every 30s (configurable via --horizontal-pod-autoscaler-sync-period)
  • Three metric types are supported:
    • predefined metrics (e.g. Pod CPU), computed as a utilization ratio
    • custom Pod metrics, computed as raw values
    • custom object metrics
  • Two metric query paths are supported: Heapster and a custom REST API
  • Multiple metrics can be combined
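Besides the kubectl autoscale command used in the experiment below, the same autoscaler can be declared in YAML; a hedged sketch targeting the nginx-deploy Deployment from earlier (autoscaling/v2 is available on 1.23):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deploy
spec:
  scaleTargetRef: # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deploy
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 20 # scale out when average CPU exceeds 20% of the requests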

2. Experiment

Create the autoscaler with: kubectl autoscale deploy <deploy_name> --cpu-percent=20 (target CPU percentage) --min=2 (minimum replicas) --max=5 (maximum replicas)

# configure when to scale out and in
[root@k8s-master deployments]#  kubectl autoscale deploy nginx-deploy --cpu-percent=20 --min=2 --max=5
horizontalpodautoscaler.autoscaling/nginx-deploy autoscaled

[root@k8s-master deployments]# ls
nginx-deploy.yaml  nginx-svc.yaml
[root@k8s-master deployments]# cat nginx-deploy.yaml 
apiVersion: apps/v1 # Deployment API version
kind: Deployment # resource type: Deployment
metadata:  # metadata
  labels:  # labels
    app: nginx-deploy # key/value label
  name: nginx-deploy # name of the Deployment
  namespace: default # namespace it lives in
spec:
  replicas: 1 # desired replica count
  revisionHistoryLimit: 10 # how many old revisions to keep after rolling updates
  selector: # selector used to find the matching rs
    matchLabels: # match by label
      app: nginx-deploy # label to match
  strategy: # update strategy
    rollingUpdate: # rolling update settings
      maxSurge: 25%  # during a rolling update, at most this many Pods above the desired count (e.g. 10 replicas -> up to 10 + 10*0.25 extra)
      maxUnavailable: 25% # at most this fraction of Pods may be unavailable during the update (e.g. 10 replicas -> at least 10 - 10*0.25 = 7.5 must stay available)
    type: RollingUpdate # update type: rolling update
  template: # Pod template
    metadata: # Pod metadata
      labels: # Pod labels
        app: nginx-deploy
    spec: # desired Pod state
      containers: # the Pod's containers
      - image: nginx:1.7.9 # image
        imagePullPolicy: IfNotPresent
        name: nginx # container name
        resources:
          limits:  # resource limits
            cpu: 20m
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi 
      restartPolicy: Always # restart policy
      terminationGracePeriodSeconds: 30 # maximum grace period for deletion

[root@k8s-master deployments]# cat nginx-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx
spec:
  selector:
    app: nginx-deploy
  ports:
  - port: 80
    targetPort: 80
    name: web
  type: NodePort


[root@k8s-master deployments]# kubectl create -f nginx-svc.yaml 
service/nginx-svc created
[root@k8s-master deployments]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        12d
nginx-svc    NodePort    10.106.136.231   <none>        80:30386/TCP   5s

[root@k8s-master deployments]# curl  10.106.136.231
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>


# download the metrics-server manifest
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metrics-server-components.yaml
# point the image at a mirror registry
sed -i 's/k8s.gcr.io\/metrics-server/registry.cn-hangzhou.aliyuncs.com\/google_containers/g' metrics-server-components.yaml
# relax the tls configuration: add the --kubelet-insecure-tls flag to the container args so certificates are not verified

kubectl apply -f metrics-server-components.yaml  # install
[root@k8s-master componets]# docker pull registry.k8s.io/metrics-server/metrics-server:v0.6.2
# check that it is installed
[root@k8s-master componets]# kubectl get pod --all-namespaces |grep metrics
kube-system   metrics-server-6c9c8ddb-x6gwx              1/1     Running   0               9m6s

[root@k8s-master componets]# kubectl top pods
NAME                           CPU(cores)   MEMORY(bytes)             
nginx-deploy-56696fbb5-2zbdm   0m           1Mi             
nginx-deploy-56696fbb5-w2w22   0m           1Mi    

Start an infinite request loop on the other two hosts:
[root@k8s-node1 ~]# while true; do wget -q -O- http://10.106.136.231 > /dev/null ; done
[root@k8s-master deployments]# kubectl get hpa  # CPU usage rises
NAME           REFERENCE                 TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx-deploy   Deployment/nginx-deploy   90%/20%   2         5         2          38m

[root@k8s-master deployments]# kubectl get po
NAME                          READY   STATUS    RESTARTS      AGE
nginx-deploy-c4986b7f-49hcm   1/1     Running   0             2m48s
nginx-deploy-c4986b7f-7cv5n   1/1     Running   0             29s
nginx-deploy-c4986b7f-dvbtt   1/1     Running   0             29s
nginx-deploy-c4986b7f-dwn7m   1/1     Running   0             14s
nginx-deploy-c4986b7f-rlznh   1/1     Running   0             2m50s


[root@k8s-master deployments]# kubectl top po
NAME                          CPU(cores)   MEMORY(bytes)            
nginx-deploy-c4986b7f-49hcm   33m          1Mi             
nginx-deploy-c4986b7f-7cv5n   33m          1Mi             
nginx-deploy-c4986b7f-dvbtt   33m          1Mi             
nginx-deploy-c4986b7f-dwn7m   33m          1Mi             
nginx-deploy-c4986b7f-rlznh   32m          1Mi     

Stop the loop:
[root@k8s-master deployments]# kubectl get hpa
NAME           REFERENCE                 TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx-deploy   Deployment/nginx-deploy   56%/20%   2         5         5          42m
[root@k8s-master deployments]# kubectl get hpa  # usage has dropped again
NAME           REFERENCE                 TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx-deploy   Deployment/nginx-deploy   0%/20%    2         5         5          42m

# after a few minutes, the Deployment's Pod count drops back down
[root@k8s-master deployments]# kubectl get deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   5/5     5            5           54m
[root@k8s-master deployments]# kubectl get deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   2/2     2            2           56m


# 查看背后的输出
[root@k8s-master deployments]# kubectl describe hpa nginx-deploy
  Type     Reason                        Age                   From                       Message
  ----     ------                        ----                  ----                       -------
  Normal   SuccessfulRescale             43m                   horizontal-pod-autoscaler  New size: 2; reason: Current number of replicas below Spec.MinReplicas
  Warning  FailedComputeMetricsReplicas  40m (x12 over 42m)    horizontal-pod-autoscaler  invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
  Warning  FailedGetResourceMetric       38m (x20 over 42m)    horizontal-pod-autoscaler  failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
  Warning  FailedGetResourceMetric       28m (x21 over 33m)    horizontal-pod-autoscaler  failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
  Warning  FailedGetResourceMetric       25m (x3 over 26m)     horizontal-pod-autoscaler  failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
  Warning  FailedComputeMetricsReplicas  25m (x3 over 26m)     horizontal-pod-autoscaler  invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
  Warning  FailedGetResourceMetric       7m3s (x2 over 7m18s)  horizontal-pod-autoscaler  failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
  Warning  FailedComputeMetricsReplicas  7m3s (x2 over 7m18s)  horizontal-pod-autoscaler  invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
  Warning  FailedGetResourceMetric       6m48s                 horizontal-pod-autoscaler  failed to get cpu utilization: did not receive metrics for any ready pods
  Warning  FailedComputeMetricsReplicas  6m48s                 horizontal-pod-autoscaler  invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: did not receive metrics for any ready pods
  Normal   SuccessfulRescale             5m3s                  horizontal-pod-autoscaler  New size: 4; reason: cpu resource utilization (percentage of request) above target
  Normal   SuccessfulRescale             4m48s                 horizontal-pod-autoscaler  New size: 5; reason: cpu resource utilization (percentage of request) above target

kubectl describe deploy nginx-deploy
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  79m   deployment-controller  Scaled up replica set nginx-deploy-78d8bf4fd7 to 1
  Normal  ScalingReplicaSet  73m   deployment-controller  Scaled up replica set nginx-deploy-56696fbb5 to 1
  Normal  ScalingReplicaSet  73m   deployment-controller  Scaled down replica set nginx-deploy-78d8bf4fd7 to 0
  Normal  ScalingReplicaSet  70m   deployment-controller  Scaled up replica set nginx-deploy-56696fbb5 to 2
  Normal  ScalingReplicaSet  34m   deployment-controller  Scaled up replica set nginx-deploy-c4986b7f to 1
  Normal  ScalingReplicaSet  34m   deployment-controller  Scaled down replica set nginx-deploy-56696fbb5 to 1
  Normal  ScalingReplicaSet  34m   deployment-controller  Scaled up replica set nginx-deploy-c4986b7f to 2
  Normal  ScalingReplicaSet  34m   deployment-controller  Scaled down replica set nginx-deploy-56696fbb5 to 0
  Normal  ScalingReplicaSet  32m   deployment-controller  Scaled up replica set nginx-deploy-c4986b7f to 4
  Normal  ScalingReplicaSet  32m   deployment-controller  Scaled up replica set nginx-deploy-c4986b7f to 5
  Normal  ScalingReplicaSet  23m   deployment-controller  Scaled down replica set nginx-deploy-c4986b7f to 2
[root@k8s-master deployments]# kubectl describe po nginx-po 


11.service

1.逻辑图

service主要是涉及pod与pod之间联系,ingress是涉及用户-》pod

逻辑图:
在这里插入图片描述

2.配置文件讲解

[root@k8s-master services]# cat nginx-svc.yaml 
apiVersion: v1
kind: Service #资源类型
metadata:
  name: nginx-svc #svc名字
  labels:
    app: nginx #service自己本身的标签
spec:
  selector: #匹配到的哪些pod会被service 代理
    app: nginx-deploy  #所有匹配到这些标签的pod都可以通过该service进行访问
  ports: # 访问 Service 的 80 端口会被转发到 Pod 的 80 端口
  - port: 80 # service本身端口,在使用内网ip访问时使用.当其他 Pod 或服务通过 Service 名称(如 nginx-svc)访问时,需要使用这个端口
    targetPort: 80 #目标pod的端口
    name: web #为端口起个名字
  type: NodePort #有多种类型,随机启动一个端口(30000-32767),进行映射到ports中的端口,该端口是直接绑定在node上的,且集群中的每一个node都会绑定这个端口。也可以用于将服务暴露给外部访问,但是这种方式实际生产环境不推荐,效率较低,而且service是四层负载。
  
不指定 selector 属性,那么需要自己创建 endpoint
客户端 → NodeIP:30080 (nodePort)
       → Service:80 (port)
       → Pod:8080 (targetPort pod的端口)
字段名与所在位置说明:
- targetPort(Pod):容器内部实际监听的端口,即 Service 最终要转发请求的 Pod 中运行服务的端口。
- port(Service):Service 对外暴露的端口,集群内部访问 Service(例如通过 ClusterIP 或服务名)时使用这个端口。
- nodePort(Node,可选):宿主机上的端口号,用于从集群外部访问该服务。系统会在 30000~32767 范围内随机分配一个,也可以手动指定。

  1. ClusterIP 访问(内部访问)
     ServiceIP:80 → targetPort:8080(Pod)
  2. NodePort 访问(外部访问)
     NodeIP:30080 → ServiceIP:80 → targetPort:8080(Pod)

3.相关命令

# 创建 service
kubectl create -f nginx-svc.yaml

# 查看 service 信息,通过 service 的 cluster ip 进行访问
kubectl get svc 

# 查看 pod 信息,通过 pod 的 ip 进行访问
kubectl get po -o wide

# 创建其他 pod 通过 service name 进行访问(推荐)
kubectl exec -it busybox -- sh
curl http://nginx-svc

# 默认在当前 namespace 中访问,如果需要跨 namespace 访问 pod,则在 service name 后面加上 .<namespace> 即可
curl http://nginx-svc.default




[root@k8s-master services]# kubectl get svc  #查看当前的svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        12d
nginx-svc    NodePort    10.106.136.231   <none>        80:30386/TCP   16h
[root@k8s-master services]# kubectl describe svc nginx-svc #查看svc的详细内容
Name:                     nginx-svc
Namespace:                default
Labels:                   app=nginx
Annotations:              <none>
Selector:                 app=nginx-deploy
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.136.231
IPs:                      10.106.136.231
Port:                     web  80/TCP
TargetPort:               80/TCP
NodePort:                 web  30386/TCP
Endpoints:                10.244.169.178:80,10.244.36.119:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
[root@k8s-master services]# 


[root@k8s-master ~]# kubectl get endpoints #可以查看到绑定了哪些pod
NAME         ENDPOINTS                           AGE
kubernetes   192.168.157.131:6443                12d
nginx-svc    10.244.36.119:80,10.244.36.120:80   15h
[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        12d
nginx-svc    NodePort    10.106.136.231   <none>        80:30386/TCP   15h
[root@k8s-master ~]# kubectl get pod
NAME                          READY   STATUS    RESTARTS      AGE
nginx-deploy-c4986b7f-49hcm   1/1     Running   1 (15h ago)   15h
nginx-deploy-c4986b7f-7cv5n   1/1     Running   1 (15h ago)   15h


可以怎样访问pod:
1,直接通过宿主机的ip+映射的宿主机端口(生产环境不推荐)
2,通过svc的ip:svc的端口访问。

通过内部busybox访问svc:

[root@k8s-master services]# kubectl exec -it dns-test -- sh
/ # ^C
/ # wget http://nginx-svc
Connecting to nginx-svc (10.106.136.231:80)
index.html           100% |*******************************************************************|   612   0:00:00 ETA
/ # cat index.html 
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
/ # 
下面为访问结构图

在这里插入图片描述

service是命名空间级别的资源,可以通过nginx-svc.default的方式跨命名空间访问。

4.service实现外部访问-ip地址

访问集群外部服务,也就是让k8s集群内部的pod访问外部服务,通过自建endpoint的方式实现。

在这里插入图片描述

[root@k8s-master services]# cat nginx-svc-exernal.yaml 
apiVersion: v1
kind: Service #资源类型
metadata:
  name: nginx-svc-external #svc名字
  labels:
    app: nginx #service自己本身的标签
spec:
  ports:
  - port: 80 # service本身端口,在使用内网ip访问时使用
    targetPort: 80 #目标pod的端口
    name: web #为端口起个名字
  type: ClusterIP 

[root@k8s-master services]# cat nginx-ep-external.yaml 
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    app: nginx # 与 service 一致
  name: nginx-svc-external # 与 service 一致
  namespace: default # 与 service 一致
subsets:
- addresses:
  - ip: 120.78.159.117  # 目标 ip 地址 叩丁狼官网。通过ip地址访问。
  ports: # 与 service 一致
  - name: web
    port: 80
    protocol: TCP

[root@k8s-master services]# kubectl create -f nginx-svc-exernal.yaml  创建svc-external
service/nginx-svc-external created #因为没有指定selector,需要自建endpoint
[root@k8s-master services]# kubectl get svc
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes           ClusterIP   10.96.0.1        <none>        443/TCP        12d
nginx-svc            NodePort    10.106.136.231   <none>        80:30386/TCP   17h
nginx-svc-external   ClusterIP   10.103.156.162   <none>        80/TCP         8s

[root@k8s-master services]# kubectl create -f nginx-ep-external.yaml  创建ep
endpoints/nginx-svc-external created
[root@k8s-master services]# ls
nginx-ep-external.yaml  nginx-svc-exernal.yaml  nginx-svc.yaml
[root@k8s-master services]# kubectl get ep
NAME                 ENDPOINTS                            AGE
kubernetes           192.168.157.131:6443                 12d
nginx-svc            10.244.169.178:80,10.244.36.119:80   17h
nginx-svc-external   120.78.159.117:80                    6s

[root@k8s-master services]# kubectl describe ep nginx-svc-external 详情
Name:         nginx-svc-external
Namespace:    default
Labels:       app=nginx
Annotations:  <none>
Subsets:
  Addresses:          120.78.159.117
  NotReadyAddresses:  <none>
  Ports:
    Name  Port  Protocol
    ----  ----  --------
    web   80    TCP

Events:  <none>

[root@k8s-master services]# kubectl exec -it dns-test -- sh  
/ # wget http://nginx-svc-external #此时就可以通过服务名来访问了
Connecting to nginx-svc-external (10.103.156.162:80)
Connecting to www.wolfcode.cn (120.78.159.117:80)
index.html           100% |*******************************************************************| 72518   0:00:00 ETA
/ # ls


5.service实现外部访问-域名方式

[root@k8s-master services]# cat nginx-svc-externalname.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    app: wolfcode-external-domain
  name: wolfcode-external-domain
spec:
  type: ExternalName
  externalName: www.wolfcode.cn

[root@k8s-master services]# vim nginx-svc-externalname.yaml
[root@k8s-master services]# kubectl create -f nginx-svc-externalname.yaml 
service/wolfcode-external-domain created
[root@k8s-master services]# kubectl get svc
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)        AGE
kubernetes                 ClusterIP      10.96.0.1        <none>            443/TCP        12d
nginx-svc                  NodePort       10.106.136.231   <none>            80:30386/TCP   18h
nginx-svc-external         ClusterIP      10.103.156.162   <none>            80/TCP         29m
wolfcode-external-domain   ExternalName   <none>           www.wolfcode.cn   <none>         5s

[root@k8s-master services]# kubectl exec -it dns-test -- sh #
/ # wget wolfcode-external-domain #通过service访问
Connecting to wolfcode-external-domain (120.78.159.117:80)
Connecting to www.wolfcode.cn (120.78.159.117:80)
index.html           100% |*******************************************************************| 72518   0:00:00 ETA
/ # cat index.html 

ClusterIP:只能在集群内部访问,不能出外网

ExternalName:通过域名

NodePort:会在所有安装了 kube-proxy 的节点都绑定一个端口,此端口可以代理至对应的 Pod,集群外部可以使用任意节点 ip + NodePort 的端口号访问到集群中对应 Pod 中的服务。可以通过nodePort的方式固定绑定node节点上的port。(效率低,一般不用这种方式)

当类型设置为 NodePort 后,可以在 ports 配置中增加 nodePort 配置指定端口,需要在下方的端口范围内,如果不指定会随机指定端口.
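
例如,下面是一个手动指定 nodePort 的片段(30080 为假设的端口,必须在 30000-32767 范围内):

spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080 # 手动指定的节点端口(假设值),不写则随机分配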

LoadBalancer:使用云服务商提供的负载均衡器对外暴露服务

6.clusterip

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name:  service-clusterip
spec:
  replicas: 3 
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx



apiVersion: v1
kind: Service
metadata:
  name: service-clusterip
  namespace: default
spec:
  ports:
  - name: http
    port: 1234  #这里没有指定 Service 的发布类型,默认就是 ClusterIP。同时这里将容器中的 80 端口暴露成了 1234 端口
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx

只能集群内部使用。ClusterIP 是 Service 的默认类型,它为服务分配一个 集群内部的虚拟 IP(即 ClusterIP),仅允许集群内其他组件(Pod、Service 等)访问。
外部用户无法直接访问 ClusterIP。
类似于一个内部负载均衡器,隐藏后端 Pod 的细节。
集群内客户端(Pod/Service)→ ClusterIP:Port → Service 负载均衡 → Pod:TargetPort
https://blog.csdn.net/be_racle/article/details/141113315 

在这里插入图片描述

12.ingress-外部服务发现

可以理解成替代了传统的nginx

在这里插入图片描述

安装:

helm repo add ingress-nginx "https://helm-charts.itboon.top/ingress-nginx"
helm search repo ingress-nginx/ingress-nginx --versions|grep 4.4.2
helm pull ingress-nginx/ingress-nginx --version 4.4.2

安装ingress-nginx时,可以不修改values.yaml中的image镜像源,可以先通过docker pull拉取到本地。


[root@k8s-master ingress]# kubectl get svc # 提前安装好svc
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)        AGE
kubernetes                 ClusterIP      10.96.0.1        <none>            443/TCP        13d
nginx-svc                  NodePort       10.106.136.231   <none>            80:30386/TCP   23h

nginx-svc(NodePort)作用:
这个 Service 将流量路由到带有标签 app: nginx-deploy 的 Pod
通过 NodePort 类型暴露服务,外部可以通过节点的 30386 端口访问

[root@k8s-master ingress]# cat wolfcode-ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress # 资源类型为 Ingress
metadata:
  name: wolfcode-nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"  #表示使用nginx-ingress
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules: # ingress 规则配置,可以配置多个
  - host: k8s.wolfcode.cn # 域名配置,可以使用通配符 *
    http:
      paths: # 相当于 nginx 的 location 配置,可以配置多个
      - pathType: Prefix #描述路径类型,按照路径类型进行匹配。ImplementationSpecific:需要指定 ingressclass,具体规则以 ingressclass 中的规则为准;Exact:精确匹配,url 需要与 path 完全匹配,且区分大小写;Prefix:以 / 作为分隔符进行前缀匹配
        backend:
          service:
            name: nginx-svc #代理到哪个 service
            port:
              number: 80 # service的端口
        path: /api # 等价于 nginx 中的 location 的路径前缀匹配

[root@k8s-master ingress]#  kubectl create -f wolfcode-ingress.yaml   #安装
ingress.networking.k8s.io/wolfcode-nginx-ingress created
[root@k8s-master ingress]# kubectl get ingress
NAME                     CLASS    HOSTS             ADDRESS          PORTS   AGE
wolfcode-nginx-ingress   <none>   k8s.wolfcode.cn   10.105.245.166   80      4m26s

[root@k8s-master ingress]# kubectl get po -n ingress-nginx -o wide
NAME                             READY   STATUS    RESTARTS   AGE    IP                NODE        NOMINATED NODE   READINESS GATES
ingress-nginx-controller-4xh2s   1/1     Running   0          110m   192.168.157.132   k8s-node1   <none>           <none>

浏览器中访问:http://k8s.wolfcode.cn/api/index.html
通过在windows机器中绑定hosts,从浏览器中访问k8s.wolfcode.cn,也就是访问到了ingress(在这里就是节点ip+NodePort)-》nginx-svc-》转发到相应的pod(此时无论nginx的pod落在哪个节点上都可以访问,因为请求最终由nginx-svc进行负载均衡转发)

Ingress 的真正作用:通过域名和路径规则路由流量,而不是单纯暴露端口,也就是对接service(nodeport)


13.存储configmap/secret

1.configmap

configmap直接以key-value的方式存放配置;secret用于存放敏感信息,内容会做base64编码。

1.例子:主要以命令的方式

[root@k8s-master config]# pwd
/opt/k8s/config
[root@k8s-master config]# kubectl create cm -h
Examples:
  # Create a new config map named my-config based on folder bar
  kubectl create configmap my-config --from-file=path/to/bar
  # Create a new config map named my-config with specified keys instead of file basenames on disk
  kubectl create configmap my-config --from-file=key1=/path/to/bar/file1.txt
--from-file=key2=/path/to/bar/file2.txt
  # Create a new config map named my-config with key1=config1 and key2=config2
  kubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2  
  # Create a new config map named my-config from the key=value pairs in the file
  kubectl create configmap my-config --from-file=path/to/bar
  # Create a new config map named my-config from an env file
  kubectl create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env

[root@k8s-master config]# cd /opt/k8s/config/test/
[root@k8s-master test]# ls
db.properties  redis.properties
[root@k8s-master test]# cat db.properties 
username=root
password=admin
[root@k8s-master test]# cat redis.properties 
host:127.0.0.1
port:6379
[root@k8s-master test]# 

# 方式1 绑定文件夹
[root@k8s-master config]# kubectl create configmap test-dir-config --from-file=./test/
configmap/test-dir-config created
[root@k8s-master config]# kubectl get cm
NAME               DATA   AGE
kube-root-ca.crt   1      13d
test-dir-config    2      11s
[root@k8s-master config]# kubectl describe cm test-dir-config
Name:         test-dir-config
Namespace:    default
Labels:       <none>
Annotations:  <none>
Data
====
db.properties:
----
username=root
password=admin
redis.properties:
----
host:127.0.0.1
port:6379
BinaryData
====
Events:  <none>

# 方式2  单个文件
/opt/k8s/config/
[root@k8s-master config]# cat application.yaml 
spring:
  application:
    name: test-app 
server:
  port: 8080
[root@k8s-master config]# kubectl create cm spring-boot-test-yaml --from-file=/opt/k8s/config/application.yaml 
configmap/spring-boot-test-yaml created
[root@k8s-master config]# kubectl get cm
NAME                    DATA   AGE
kube-root-ca.crt        1      13d
spring-boot-test-yaml   1      6s
test-dir-config         2      4m38s
[root@k8s-master config]# kubectl describe cm spring-boot-test-yaml
Name:         spring-boot-test-yaml
Namespace:    default
Labels:       <none>
Annotations:  <none>
Data
====
application.yaml:
----
spring:
  application:
    name: test-app 
server:
  port: 8080
BinaryData
====

# 方式3 通过重命名的方式
[root@k8s-master config]# kubectl create cm spring-boot-test-alias-yaml --from-file=app.yaml=/opt/k8s/config/application.yaml 
configmap/spring-boot-test-alias-yaml created
[root@k8s-master config]# kubectl describe configmap spring-boot-test-alias-yaml
Name:         spring-boot-test-alias-yaml  #名字已经修改
Namespace:    default
Labels:       <none>
Annotations:  <none>
Data
====
app.yaml:
----
spring:
  application:
    name: test-app 
server:
  port: 8080
BinaryData
====

#方式4 通过命令的方式
[root@k8s-master config]# kubectl create configmap test-key-value-config --from-literal=username=root --from-literal=password=admin
configmap/test-key-value-config created
[root@k8s-master config]# kubectl describe cm  test-key-value-config
Name:         test-key-value-config
Namespace:    default
Labels:       <none>
Annotations:  <none>
Data
====
password:
----
admin
username:
----
root
BinaryData
====
Events:  <none>


2.应用

1.加载命令行的变量
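
下面的 Pod 引用了名为 test-env-config 的 configmap(创建过程文中未贴出),根据后面 env 的输出推测,它的内容大致如下(示意):

apiVersion: v1
kind: ConfigMap
metadata:
  name: test-env-config
data:
  JAVA_OPTS_TEST: "-Xms512m -Xmx512m"
  APP_NAME: "springboot-env-test"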
[root@k8s-master config]# cat env-test-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: test-env-po
spec:
  containers:
    - name: env-test
      image: alpine
      command: ["/bin/sh","-c","env;sleep 3600"]
      imagePullPolicy: IfNotPresent
      env:
      - name: JAVA_VM_OPTS # 本地自定义的名字
        valueFrom:
          configMapKeyRef:
            name: test-env-config # configmap的名字 表示需要从这个name中获取
            key: JAVA_OPTS_TEST # 表示从name 的configmap中获取名字为key的value,将其赋值给本地环境变量 JAVA_VM_OPTS
      - name: APP
        valueFrom:
          configMapKeyRef:
            name: test-env-config
            key: APP_NAME
  restartPolicy: Never

kubectl create -f env-test-pod.yaml
kubectl get po
kubectl logs -f po test-env-po 
#得到结果
KUBERNETES_SERVICE_PORT=443
NGINX_SVC_SERVICE_HOST=10.106.136.231
KUBERNETES_PORT=tcp://10.96.0.1:443
HOSTNAME=test-env-po
NGINX_SVC_EXTERNAL_SERVICE_HOST=10.103.156.162
SHLVL=1
HOME=/root
JAVA_VM_OPTS=-Xms512m -Xmx512m  # 获取到configmap中的key-value
NGINX_SVC_SERVICE_PORT=80
NGINX_SVC_PORT=tcp://10.106.136.231:80
NGINX_SVC_EXTERNAL_PORT=tcp://10.103.156.162:80
NGINX_SVC_EXTERNAL_SERVICE_PORT=80
NGINX_SVC_SERVICE_PORT_WEB=80
NGINX_SVC_PORT_80_TCP_ADDR=10.106.136.231
APP=springboot-env-test
NGINX_SVC_PORT_80_TCP_PORT=80
NGINX_SVC_EXTERNAL_SERVICE_PORT_WEB=80
NGINX_SVC_EXTERNAL_PORT_80_TCP_ADDR=10.103.156.162
NGINX_SVC_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NGINX_SVC_EXTERNAL_PORT_80_TCP_PORT=80
KUBERNETES_PORT_443_TCP_PORT=443
NGINX_SVC_EXTERNAL_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PROTO=tcp
NGINX_SVC_PORT_80_TCP=tcp://10.106.136.231:80
NGINX_SVC_EXTERNAL_PORT_80_TCP=tcp://10.103.156.162:80
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/



2.将文件加载放到容器中
[root@k8s-master config]# cat file-test-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: test-configfile-po
spec:
  containers:
    - name: env-test
      image: alpine
      command: ["/bin/sh","-c","sleep 3600"]
      imagePullPolicy: IfNotPresent
      env:
      - name: JAVA_VM_OPTS # 本地自定义的名字
        valueFrom:
          configMapKeyRef:
            name: test-env-config # configmap的名字 表示需要从这个name中获取
            key: JAVA_OPTS_TEST # 表示从name 的configmap中获取名字为key的value,将其赋值给本地环境变量 JAVA_VM_OPTS
      - name: APP
        valueFrom:
          configMapKeyRef:
            name: test-env-config
            key: APP_NAME
      volumeMounts: # 加载数据卷
      - name: db-config # 表示加载volume 属性中的哪个数据卷
        mountPath: "/usr/local/mysql/conf/" #想要将数据卷中的文件加载到哪个目录下
        readOnly: true # 是否只读
  volumes: # 数据卷挂载 configmap和secret是其中一种
    - name: db-config #指定数据卷的名字,随意设置
      configMap: # 数据卷类型为configMap
        name: test-dir-config # configMap的名字,必须跟想要加载的configmap相同
        items: #对configmap中的key进行映射,如果不指定,默认会将configmap中的所有key全部转换为一个个同名的文件
        - key: "db.properties" # configmap中的key   
          path: "db.properties" # 将该key的值转换为文件
  restartPolicy: Never


test-dir-config 中的内容:
[root@k8s-master config]# kubectl describe cm test-dir-config
Name:         test-dir-config
Namespace:    default
Labels:       <none>
Annotations:  <none>
Data
====
db.properties:  # 加载的文件
----
username=root  #加载到文件内容
password=admin
redis.properties:
----
host:127.0.0.1
port:6379
BinaryData
====
Events:  <none>
[root@k8s-master config]# 


kubectl create -f file-test-pod.yaml
[root@k8s-master config]# kubectl exec -it test-configfile-po -- sh  #进入到容器中
/ # cd /usr/local/mysql/conf/
/usr/local/mysql/conf # ls # 可以看到已经加载了。因为指定了item,那么就只会加载相应的item,其他的并不会加载
db.properties
/usr/local/mysql/conf # cat db.properties 
username=root
password=admin


2.secret

对配置加密

简单使用:
[root@k8s-master k8s]# kubectl create  secret docker-registry harbor-secret --docker-username=admin --docker-password=wolfcode --docker-email=298298298@qq.com
[root@k8s-master k8s]# kubectl describe secret harbor-secret
Name:         harbor-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>
Type:  kubernetes.io/dockerconfigjson
Data
====
.dockerconfigjson:  141 bytes
# 查看加密内容:
 kubectl edit secret harbor-secret
 将字符串可以直接负载粘贴出来,使用echo '字符串' |base64 --decode的方式可以将字符串解析出来。
 [root@k8s-master k8s]# echo 'eyJhdXRocyI6eyJodHRwczovL2luZGV4LmRvY2tlci5pby92MS8iOnsidXNlcm5hbWUiOiJhZG1pbiIsInBhc3N3b3JkIjoid29sZmNvZGUiLCJlbWFpbCI6IjI5ODI5ODI5OEBxcS5jb20iLCJhdXRoIjoiWVdSdGFXNDZkMjlzWm1OdlpHVT0ifX19' |base64 --decode
{"auths":{"https://index.docker.io/v1/":{"username":"admin","password":"wolfcode","email":"298298298@qq.com","auth":"YWRtaW46d29sZmNvZGU="}}}


2 对仓库进行加密(需要harbor-secret才能拉取镜像)
  kubectl create  secret docker-registry harbor-secret --docker-username=admin --docker-password=wolfcode --docker-email=298298298@qq.com  --docker-server=192.168.xxxx:8858(这个是docker仓库的地址)

docker images  # 查看本地镜像,准备将镜像推送到harbor中
docker tag nginx:1.9.1 192.168.xxx:8858/opensource/nginx:1.9.1  # 将本地镜像nginx重新打标签为192.168.xxx:8858/opensource/nginx:1.9.1
docker login -uadmin 192.168.xxx:8858  # 然后输入密码
docker push 192.168.xxx:8858/opensource/nginx:1.9.1


apiVersion: v1
kind: Pod
metadata:
  name: private-image-pull-pod
spec:
  imagePullSecrets: # 配置登录docker registry的 secret
  - name: harbor-secret  # 配置加密
  containers:
    - name: nginx
      image: 192.168.xxx:8858/opensource/nginx:1.9.1
      command: ["/bin/sh","-c","sleep 3600"]
      imagePullPolicy: IfNotPresent
      env:
      - name: JAVA_VM_OPTS # 本地自定义的名字
        valueFrom:
          configMapKeyRef:
            name: test-env-config # configmap的名字 表示需要从这个name中获取
            key: JAVA_OPTS_TEST # 表示从name 的configmap中获取名字为key的value,将其赋值给本地环境变量 JAVA_VM_OPTS
      - name: APP
        valueFrom:
          configMapKeyRef:
            name: test-env-config
            key: APP_NAME
      volumeMounts: # 加载数据卷
      - name: db-config # 表示加载volume 属性中的哪个数据卷
        mountPath: "/usr/local/mysql/conf/" #想要将数据卷中的文件加载到哪个目录下
        readOnly: true # 是否只读
  volumes: # 数据卷挂载 configmap和secret是其中一种
    - name: db-config #指定数据卷的名字,随意设置
      configMap: # 数据卷类型为configMap
        name: test-dir-config # configMap的名字,必须跟想要加载的configmap相同
        items: #对configmap中的key进行映射,如果不指定,默认会将configmap中的所有key全部转换为一个个同名的文件
        - key: "db.properties" # configmap中的key
          path: "db.properties" # 将该key的值转换为文件
  restartPolicy: Never

3.subpath

使用 ConfigMap 或 Secret 挂载到目录的时候,会将容器中的源目录覆盖掉。此时我们可能只想替换目录中的某一个文件,但是直接挂载会覆盖整个目录,因此需要使用到 SubPath。

[root@k8s-master config]# kubectl create configmap nginx-conf-cm --from-file=./nginx.conf # 当前目录下有一个nginx.conf
configmap/nginx-conf-cm created

[root@k8s-master config]# kubectl describe  cm nginx-conf-cm
Name:         nginx-conf-cm
Namespace:    default
Labels:       <none>
Annotations:  <none>
Data
====
nginx.conf:
----
user  nginx;
worker_processes  1;
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;
events {
    worker_connections  1024;
}
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;
    #gzip  on;
    include /etc/nginx/conf.d/*.conf;
}


BinaryData
====

Events:  <none>



spec:
      containers:
      - command:
        - /bin/sh
        - -c
        - nginx daemonoff;sleep 3600
        image: nginx:1.7.9
        imagePullPolicy: IfNotPresent
        name: nginx
        resources:
          limits:
            cpu: 200m
            memory: 128Mi
          requests:
            cpu: 10m
            memory: 128Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts: 
        - mountPath: /etc/nginx/ # 绑定的路径。会将整个 /etc/nginx 目录替换为 ConfigMap 的内容
          name: nginx-conf # 和volumes中的相呼应
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap: # 添加configmap
          defaultMode: 420
          items:
          - key: nginx.conf #使用的是cm中的nginx-conf-cm
            path: nginx.conf
          name: nginx-conf-cm
        name: nginx-conf
        
# 未绑定configmap时nginx.conf文件目录下的东西        
# cd nginx
# ls
conf.d	fastcgi_params	koi-utf  koi-win  mime.types  nginx.conf  scgi_params  uwsgi_params  win-utf
# cd conf.d
# ls
default.conf  example_ssl.conf


[root@k8s-master config]# kubectl exec -it nginx-deploy-59cd676fd7-7gvz8 -- sh
# cd /etc/nginx
# ls
nginx.conf
# cat nginx.conf  # 发现就只有这个nginx.conf,其余的文件没了   


# 配置subpath,这样就不会被覆盖
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-deploy
    spec:
      containers:
      - command:
        - /bin/sh
        - -c
        - nginx daemonoff;sleep 3600
        image: nginx:1.7.9
        imagePullPolicy: IfNotPresent
        name: nginx
        resources:
          limits:
            cpu: 200m
            memory: 128Mi
          requests:
            cpu: 10m
            memory: 128Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/nginx/nginx.conf
          name: nginx-conf
          subPath: etc/nginx/nginx.conf # 添加了这一部分, 只使用 ConfigMap 中的这个键
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: nginx.conf
            path: etc/nginx/nginx.conf
          name: nginx-conf-cm
        name: nginx-conf

4 configmap热加载

法1 软链接

我们通常会将项目的配置文件作为 configmap 然后挂载到 pod,那么如果更新 configmap 中的配置,会不会更新到 pod 中呢?

这得分成几种情况:
默认方式:会更新,更新周期是更新时间 + 缓存时间
subPath:不会更新
变量形式:如果 pod 中的一个变量是从 configmap 或 secret 中得到,同样也是不会更新的

对于 subPath 的方式,我们可以取消 subPath 的使用,将配置文件挂载到一个不存在的目录,避免目录的覆盖,然后再利用软连接的形式,将该文件链接到目标位置

但是如果目标位置原本就有文件,可能无法创建软链接,此时可以基于前面讲过的 postStart 操作执行删除命令,将默认的文件删除即可

可以通过软链接的方式:
在这里插入图片描述
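
下面是这种做法的一个大致示意(挂载路径为假设):先把 configmap 挂到一个原本不存在的目录,再在 postStart 中删除默认文件并创建软链接:

spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx-conf # 挂到一个原本不存在的目录,避免覆盖 /etc/nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "rm -f /etc/nginx/nginx.conf && ln -s /etc/nginx-conf/nginx.conf /etc/nginx/nginx.conf"]
  volumes:
  - name: nginx-conf
    configMap:
      name: nginx-conf-cm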

法2:通过edit命令

[root@k8s-master config]# kubectl edit cm test-dir-config
configmap/test-dir-config edited
[root@k8s-master config]# 
apiVersion: v1
data:
  db.properties: |
    username=root-111 # 编辑的内容
    password=admin-111 #编辑的内容
[root@k8s-master config]# kubectl exec -it test-configfile-po -- sh
/usr/local/mysql/conf # cat db.properties 
username=root
password=admin
/usr/local/mysql/conf # cat db.properties  #需要经过一段时间后,实现了加载
username=root-111
password=admin-111

法3 :replace替换

[root@k8s-master config]# kubectl create cm test-dir-config --from-file=./test/ --dry-run -o yaml # 输出yaml文件结果
W0618 15:35:31.711220   40437 helpers.go:598] --dry-run is deprecated and can be replaced with --dry-run=client.
apiVersion: v1
data:
  db.properties: |
    username=xiaoliu
    password=123456
  redis.properties: |
    host:127.0.0.1
    port:6379
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: test-dir-config
[root@k8s-master config]# kubectl create cm test-dir-config --from-file=./test/ --dry-run -o yaml |kubectl replace -f- #将yaml文件传入
W0618 15:37:28.254395   42030 helpers.go:598] --dry-run is deprecated and can be replaced with --dry-run=client.
configmap/test-dir-config replaced
[root@k8s-master config]# kubectl exec -it test-configfile-po -- sh
/ # cd /usr/local/mysql/conf/
/usr/local/mysql/conf # ls
db.properties
/usr/local/mysql/conf # cat db.properties 
username=xiaoliu
password=123456
/usr/local/mysql/conf # 

将输出的yaml文件作为replace的输入
在这里插入图片描述

5.不可变的secret和configmap

对于一些敏感服务的配置文件,在线上有时是不允许修改的,此时在配置 configmap 时可以设置 immutable: true 来禁止修改。

通过edit cm来实现

kubectl edit cm test-dir-config
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
# configmaps "test-dir-config" was not valid:
# * data: Forbidden: field is immutable when `immutable` is set
#
apiVersion: v1
data:
  db.properties: |
    username=laoliu
    password=123456
  redis.properties: |
    host:127.0.0.1
    port:6379
immutable: true # 添加这一行,发现如果退出保存后,在进行edit时,会出现上面的提示: forbidden
kind: ConfigMap
metadata:
  creationTimestamp: "2025-06-18T02:05:45Z"
  name: test-dir-config
  namespace: default
  resourceVersion: "209207"
  uid: 50a45e16-b583-4bc4-a208-c9e1da395294

14.持久化存储

1.volumes

hostpath: 将节点上的文件或目录挂载到 Pod 上,此时该目录会变成持久化存储目录,即使 Pod 被删除后重启,也可以重新加载到该目录,该目录下的文件不会丢失

[root@k8s-master volumes]# ls
volume-test-pd.yaml
[root@k8s-master volumes]# cat volume-test-pd.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: test-volume-pd
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx-volume
    volumeMounts:
    - mountPath: /test-pd # 挂载到容器的哪个目录
      name: test-volume # 挂载哪个 volume
  volumes:
  - name: test-volume
    hostPath: # 与主机共享目录,加载主机中指定目录到容器中
      path: /data # 节点中的目录
      type: DirectoryOrCreate # 检查类型,文件夹。在挂载前对挂载目录做什么检查操作,有多种选项,默认为空字符串,不做任何检查


类型:
空字符串:默认类型,不做任何检查
DirectoryOrCreate:如果给定的 path 不存在,就创建一个 755 的空目录
Directory:这个目录必须存在
FileOrCreate:如果给定的文件不存在,则创建一个空文件,权限为 644
File:这个文件必须存在
Socket:UNIX 套接字,必须存在
CharDevice:字符设备,必须存在
BlockDevice:块设备,必须存在



[root@k8s-master volumes]# kubectl create -f volume-test-pd.yaml 
pod/test-volume-pd created
[root@k8s-master volumes]# kubectl exec -it test-volume-pd -- sh
发现这个pod部署到了k8s-node1中,并且新建了一个/data目录,无论是在容器中还是在主机中修改,都会同步变化

2.EmptyDir

用于一个pod中多个容器之间共享数据,方便容器之间交换和管理数据。

EmptyDir 主要用于一个 Pod 中不同的 Container 共享数据使用的,由于只是在 Pod 内部使用,因此与其他 volume 比较大的区别是,当 Pod 如果被删除了,那么 emptyDir 也会被删除

存储介质可以是任意类型,如 SSD、磁盘或网络存储。可以将 emptyDir.medium 设置为 Memory 让 k8s 使用 tmpfs(内存支持文件系统),速度比较快,但是重启 tmpfs 节点时,数据会被清除,且设置的大小会计入到 Container 的内存限制中。
在这里插入图片描述
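
如果想使用内存(tmpfs)作为介质,volumes 部分大致可以这样写(sizeLimit 为假设值):

  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory # 使用 tmpfs,速度快,但节点重启后数据清除
      sizeLimit: 64Mi # 假设的大小限制,会计入容器的内存限制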

[root@k8s-master volumes]# cat empty-dir-pd.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: empty-dir-pd
spec:
  containers:
  - image: alpine
    imagePullPolicy: IfNotPresent
    name: nginx-emptydir1
    command: ["/bin/sh","-c","sleep 3600;"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  - image: alpine
    imagePullPolicy: IfNotPresent
    name: nginx-emptydir2
    command: ["/bin/sh","-c","sleep 3600;"]
    volumeMounts:
    - mountPath: /opt
      name: cache-volume

  volumes:
  - name: cache-volume
    emptyDir: {}


kubectl create -f  empty-dir-pd.yaml  # 创建

在一个pod中创建两个容器,容器nginx-emptydir1 的目录 /cache目录和 nginx-emptydir2 的目录 /opt目录相互共享。


容器2:kubectl exec -it empty-dir-pd -c nginx-emptydir2 -- sh
/opt # echo '123456' >> a.txt 
/opt # 
容器1:kubectl exec -it empty-dir-pd -c nginx-emptydir1 -- sh
/cache # cat a.txt 
123456
/cache # exit

3.nfs挂载

nfs 卷能将 NFS (网络文件系统) 挂载到你的 Pod 中。 不像 emptyDir 那样会在删除 Pod 的同时也会被删除,nfs 卷的内容在删除 Pod 时会被保存,卷只是被卸载。 这意味着 nfs 卷可以被预先填充数据,并且这些数据可以在 Pod 之间共享。

从k8s-node1中新建
k8s-node1中的操作:
yum install nfs-utils -y
systemctl start nfs-server
 cd /home/
 mkdir -p nfs/rw nfs/ro
vim /etc/exports
[root@k8s-node1 rw]# cat /etc/exports
/home/nfs/rw 192.168.157.0/24(rw,sync,no_subtree_check,no_root_squash)
/home/nfs/ro 192.168.157.0/24(ro,sync,no_subtree_check,no_root_squash)

exportfs -f
systemctl restart nfs-server
vim README.md
hello nfs


k8s-master加入进来
[root@k8s-master mnt]# mkdir /mnt/nfs/rw -p
[root@k8s-master mnt]# mkdir /mnt/nfs/ro -p
[root@k8s-master mnt]# mount -t nfs 192.168.157.132:/home/nfs/rw /mnt/nfs/rw
[root@k8s-master mnt]# mount -t nfs 192.168.157.132:/home/nfs/ro /mnt/nfs/ro
[root@k8s-master mnt]# cd /mnt/nfs/
[root@k8s-master nfs]# ls
ro  rw
[root@k8s-master nfs]# cd ro/
[root@k8s-master ro]# ls
README.md
[root@k8s-master ro]# cat README.md 
hell nfs
[root@k8s-master ro]# touch master
touch: 无法创建"master": 只读文件系统
[root@k8s-master ro]# cd ../rw
[root@k8s-master rw]# ls
[root@k8s-master rw]# touch master  # node-1节点中也会有这个文件
[root@k8s-master rw]# 

容器共享nfs目录:

[root@k8s-master volumes]# cat nfs-test-pd.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pd2
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: test-container
    volumeMounts:
    - mountPath: /usr/share/nginx/html # /home/nfs/rw/www/wolfcode共享
      name: test-volume
  volumes:
  - name: test-volume
    nfs:
      server: 192.168.157.132 # 网络存储服务地址 k8s-node1节点
      path: /home/nfs/rw/www/wolfcode # 网络存储路径 共享192.168.157.132的这个目录
      readOnly: false # 是否只读


192.168.157.132 node1中
echo '<h1>welocme to china</h1>' > /home/nfs/rw/www/wolfcode/index.html

kubectl create -f nfs-test-pd.yaml
[root@k8s-master volumes]# kubectl get po -o wide
nfs-test-pd1                    1/1     Running     0             2m57s   10.244.36.67     k8s-node1   <none>           <none>
[root@k8s-master volumes]# curl 10.244.36.67  # 可以发现能够实现共享
<h1>welocme to china</h1>

15.pv和pvc

PersistentVolume持久卷 PersistentVolumeClaim:持久卷申领

大致理解:
在这里插入图片描述

1 回收策略

retain(保留):当 PersistentVolumeClaim 对象被删除时,PersistentVolume 卷仍然存在,对应的数据卷被视为"已释放(released)"。 由于卷上仍然存在着前一申领人的数据,该卷还不能用于其他申领,管理员可以手动回收该卷。

delete(删除):删除动作会将 PersistentVolume 对象从 Kubernetes 中移除,同时也会从外部基础设施(如 AWS EBS、GCE PD、Azure Disk 或 Cinder 卷)中移除所关联的存储资产。 动态制备的卷会继承其 StorageClass 中设置的回收策略, 该策略默认为 Delete。管理员需要根据用户的期望来配置 StorageClass; 否则 PV 卷被创建之后必须要被编辑或者修补。

recycle(回收):

  1. Retain(保留)
    • 默认策略
    • 当 PVC 被删除后,PV 不会被自动删除
    • PV 状态变为 “Released”,但数据仍然保留
    • 管理员需要手动清理数据和 PV
  2. Delete(删除)
    • 当 PVC 被删除后,PV 和底层存储(如 AWS EBS、GCE PD 等)都会被自动删除
    • 数据将永久丢失
    • 适用于动态供应的 PV
  3. Recycle(回收)(已弃用)
    • 曾经用于通过删除卷内容(如执行 rm -rf /volume/*)来重新使用 PV
    • 从 Kubernetes 1.15 开始已被弃用,推荐使用动态供应代替

2 访问模式

ReadWriteOnce (RWO)
  • 卷可以被单个节点以读写方式挂载
  • 同一时间只能被一个 Pod 使用(即使在同一节点上)
  • 最常见的访问模式

ReadOnlyMany (ROX)
  • 卷可以被多个节点以只读方式挂载
  • 多个 Pod 可以同时读取相同数据
  • 不允许任何 Pod 写入

ReadWriteMany (RWX)
  • 卷可以被多个节点以读写方式挂载
  • 多个 Pod 可以同时读写相同卷
  • 需要存储系统支持并发访问控制

3.pod绑定pv/pvc

大致流程:创建pv,创建pvc并且绑定pv,创建pod,将pod和pvc绑定

# 制定pv
[root@k8s-master volumes]# cat  pv-nfs.yaml 
apiVersion: v1
kind: PersistentVolume # 描述资源对象为pv类型
metadata:
  name: pv0001 # pv的名字
spec:
  capacity: #容量配置
    storage: 5Gi # pv 的容量
  volumeMode: Filesystem # 存储类型为文件系统
  accessModes: # 访问模式:ReadWriteOnce(pv只能被一个pvc使用)、ReadWriteMany(pv能被多个使用)、ReadOnlyMany
    - ReadWriteMany # 可被多个节点以读写方式挂载
  persistentVolumeReclaimPolicy: Retain # 回收策略 -- 保留
  storageClassName: slow # 创建 PV 的存储类名,需要与 pvc 的相同
  mountOptions: # 加载配置
    - hard
    - nfsvers=4.1
  nfs: # 连接到 nfs
    path: /opt/nfs/rw/test-pv # 存储路径
    server: 192.168.157.132 # nfs 服务地址
[root@k8s-master volumes]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
pv0001   5Gi        RWX            Retain           Bound    default/nfs-pvc   slow                    36m
[root@k8s-master volumes]# 

# 制定pvc
[root@k8s-master volumes]# cat pvc-test.yaml 
apiVersion: v1
kind: PersistentVolumeClaim # 资源类型
metadata:
  name: nfs-pvc  #
spec:
  accessModes:
    - ReadWriteMany # 权限需要与对应的 pv 相同
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi # 资源可以小于 pv 的,但是不能大于,如果大于就会匹配不到 pv
  storageClassName: slow # 名字需要与对应的 pv 相同
#  selector: # 使用选择器选择对应的 pv
#  #    matchLabels:
#  #      release: "stable"
#  #    matchExpressions:
#  #      - {key: environment, operator: In, values: [dev]}
[root@k8s-master volumes]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Bound    pv0001   5Gi        RWX            slow           28m
[root@k8s-master volumes]# 

# 将pod绑定到pvc中
[root@k8s-master volumes]# cat pvc-test-pd.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: test-pvc-pd
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx-volume
    volumeMounts:
    - mountPath: /usr/share/nginx/html # 挂载到容器的哪个目录
      name: test-volume # 挂载哪个 volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: nfs-pvc

通过kubectl describe po test-pvc-pd查看

10.244.169.132为 pod的ip地址
[root@k8s-master volumes]# curl 10.244.169.132  # 此时还没有绑定index.html
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.5</center>
</body>
</html>


在k8s-node1中的/usr/share/nginx/html 目录下添加index.html文件内容为inited....
[root@k8s-master volumes]# curl 10.244.169.132 #再次输出的结果
inited....

4.storageclass动态制备pv–sc

动态卷供应(核心功能)

  • 自动创建PV:当PVC请求特定StorageClass且无可用PV时
  • 工作流程
    1. 用户创建PVC并指定StorageClass
    2. StorageClass的provisioner组件检测到请求
    3. 在存储后端创建实际存储资源
    4. 自动创建对应的PV对象并绑定到PVC
    5. 每个 StorageClass 都有一个制备器(Provisioner),用来决定使用哪个卷插件制备 PV。
      在这里插入图片描述
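
一个基于 NFS 外部制备器的 StorageClass/PVC 大致写法如下(示意;provisioner 的名字需要与实际部署的制备器一致,这里是假设值。后文出现的 managed-nfs-storage、auto-pv-test-pvc 就是按这种方式创建的):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # 假设的制备器名字,需与 nfs provisioner 部署时配置的一致
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auto-pv-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-nfs-storage # 指定 sc 后无需手动创建 pv,会自动制备
  resources:
    requests:
      storage: 300Mi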

16.crontab-计划任务

分 时 日 月 周

脚本内容:

[root@k8s-master jobs]# cat cron-job-pd.yaml 
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cron-job-test
spec:
  concurrencyPolicy: Allow # 并发调度策略:Allow 允许并发调度,Forbid:不允许并发执行,Replace:如果之前的任务还没执行完,就直接执行新的,放弃上一个任务
  failedJobsHistoryLimit: 1 # 保留多少个失败的任务
  successfulJobsHistoryLimit: 3 # 保留多少个成功的任务
  suspend: false # 是否挂起任务,若为 true 则该任务不会执行
#  startingDeadlineSeconds: 30 # 间隔多长时间检测失败的任务并重新执行,时间不能小于 10
  schedule: "* * * * *" # 调度策略
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: busybox
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
          
[root@k8s-master jobs]# kubectl describe cj cron-job-test  # 查看到相应的cron job
[root@k8s-master jobs]# kubectl get cj
NAME            SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cron-job-test   * * * * *   False     0        <none>          21s
[root@k8s-master jobs]# kubectl get po
NAME                            READY   STATUS      RESTARTS       AGE
cron-job-test-29171931-bjvt9    0/1     Completed   0              2m50s
cron-job-test-29171932-99rbq    0/1     Completed   0              110s
cron-job-test-29171933-bl82l    0/1     Completed   0              50s
[root@k8s-master jobs]# kubectl  logs -f cron-job-test-29171933-bl82l 
Thu Jun 19 06:53:01 UTC 2025
Hello from the Kubernetes cluster
[root@k8s-master jobs]# kubectl delete  cronjob cron-job-test # 删除任务
cronjob.batch "cron-job-test" deleted

17.生命周期

在这里插入图片描述

初始化容器:初始化阶段

在真正的容器启动之前,先启动 InitContainer,在初始化容器中完成真实容器所需的初始化操作,完成后再启动真实的容器。

相对于 postStart 来说,首先 InitController 能够保证一定在 EntryPoint 之前执行,而 postStart 不能,其次 postStart 更适合去执行一些命令操作,而 InitController 实际就是一个容器,可以在其他基础容器环境下执行更复杂的初始化功能。

在 pod 创建的模板中配置 initContainers 参数:
spec:
  initContainers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "echo 'inited;' >> ~/.init"]
    name: init-test

18.污点和容忍

污点:节点中。容忍:pod中

当某种条件为真时,节点控制器会自动给节点添加一个污点。当前内置的污点包括:

  • node.kubernetes.io/not-ready:节点未准备好。这相当于节点状况 Ready 的值为 “False”。
  • node.kubernetes.io/unreachable:节点控制器访问不到节点. 这相当于节点状况 Ready 的值为 “Unknown”。
  • node.kubernetes.io/memory-pressure:节点存在内存压力。
  • node.kubernetes.io/disk-pressure:节点存在磁盘压力。
  • node.kubernetes.io/pid-pressure:节点的 PID 压力。
  • node.kubernetes.io/network-unavailable:节点网络不可用。
  • node.kubernetes.io/unschedulable:节点不可调度。
  • node.cloudprovider.kubernetes.io/uninitialized:如果 kubelet 启动时指定了一个“外部”云平台驱动, 它将给当前节点添加一个污点将其标志为不可用。在 cloud-controller-manager 的一个控制器初始化这个节点后,kubelet 将删除这个污点。

1.污点(Taint):

NoSchedule:如果不能容忍该污点,那么 Pod 就无法调度到该节点上

NoExecute:

  • 如果 Pod 不能忍受这类污点,Pod 会马上被驱逐。

  • 如果 Pod 能够忍受这类污点,但是在容忍度定义中没有指定 tolerationSeconds, 则 Pod 还会一直在这个节点上运行。

  • 如果 Pod 能够忍受这类污点,而且指定了 tolerationSeconds, 则 Pod 还能在这个节点上继续运行这个指定的时间长度。

    给node1打上NoSchedule的标签:
    kubectl taint nodes <node-name> <key>=<value>:<effect>
    [root@k8s-master ~]# kubectl taint node k8s-node1  memory=low:NoSchedule
    node/k8s-node1 tainted
    [root@k8s-master ~]# kubectl describe no k8s-node1
       Taints:             memory=low:NoSchedule
    [root@k8s-master ~]# kubectl describe no k8s-node2
       Taints:             <none>
    [root@k8s-master ~]# kubectl get po -o wide
    nginx-deploy-6799f9b9b7-vp44c   1/1     Running     0              59m     10.244.36.76     k8s-node1   <none>           <none>
    nginx-deploy-6799f9b9b7-zffnw   1/1     Running     0              59m     10.244.169.134   k8s-node2   <none>           <none>
    NoSchedule策略下并不会驱逐其他的Pod
    [root@k8s-master ~]# kubectl describe no k8s-master
    Taints:             node-role.kubernetes.io/master:NoSchedule
    [root@k8s-master ~]# kubectl taint no k8s-master node-role.kubernetes.io/master:NoSchedule-  # 删除这个污点
    [root@k8s-master ~]# kubectl delete po nginx-deploy-6799f9b9b7-vp44c nginx-deploy-6799f9b9b7-zffnw  #假设此时删除这两个pod
    pod "nginx-deploy-6799f9b9b7-vp44c" deleted
    pod "nginx-deploy-6799f9b9b7-zffnw" deleted
    [root@k8s-master ~]# kubectl get po -o wide #会发现此时这两个pod会到没有污点的node上(这两个Nginx-pod是使用deploy启动的,pod删除后仍然会再建)
    nginx-deploy-6799f9b9b7-7hpgf   1/1     Running     0              88s     10.244.235.220   k8s-master   <none>           <none>
    nginx-deploy-6799f9b9b7-vx77t   1/1     Running     0              88s     10.244.169.135   k8s-node2    <none>           <none>
    [root@k8s-master ~]# kubectl taint no k8s-master node-role.kubernetes.io/master:NoExecute  #如果给NoExecute,那么不匹配的Pod将会被驱逐,发现都到node2中了
    [root@k8s-master ~]# kubectl get po  -o wide
    nginx-deploy-6799f9b9b7-d2lz5   1/1     Running     0              31s     10.244.169.138   k8s-node2   <none>           <none>
    nginx-deploy-6799f9b9b7-vx77t   1/1     Running     0              3m57s   10.244.169.135   k8s-node2   <none>           <none>
    # 恢复原样
    [root@k8s-master ~]# kubectl taint no k8s-master node-role.kubernetes.io/master:NoExecute-
    node/k8s-master untainted
    [root@k8s-master ~]# kubectl taint no k8s-master node-role.kubernetes.io/master:NoSchedule
    node/k8s-master tainted
    
    

2.容忍(Toleration)

容忍是标注在 pod 上的。当 pod 被调度时,如果没有配置容忍,则该 pod 不会被调度到有污点的节点上;只有当 pod 上标注的容忍能够匹配某个节点的所有污点时,才可能被调度到该节点

Equal:比较操作类型为 Equal,则意味着必须与污点值做匹配,key/value都必须相同,才表示能够容忍该污点(key和value在pod和节点中都含有)

Exists:容忍与污点的比较只比较 key,不比较 value,不关心 value 是什么东西,只要 key 存在,就表示可以容忍。

第一种:Equal
kubectl edit deploy nginx-deploy
spec:
      tolerations:
        - key: "memory"
          operator: "Equal"
          value: "low"
          effect: "NoSchedule"
      containers:
      - command:
        - /bin/sh
        - -c
        - nginx daemonoff;sleep 3600
发现打了污点的节点也可以安装pod了,使用的是Equal,key和value必须要强匹配,pod和node都需要相同
[root@k8s-master ~]# kubectl get po -o wide|grep nginx
nginx-deploy-5995778447-4trvv   1/1     Running     0               91s     10.244.169.142   k8s-node2   <none>           <none>
nginx-deploy-5995778447-m62fk   1/1     Running     0               105s    10.244.36.77     k8s-node1   <none>           <none>

kubectl edit deploy nginx-deploy
第二种:Exists:

      tolerations:
      - effect: NoSchedule
        key: memory
        operator: Exists
只需要key匹配就可以了,不需要value

将k8s-node1设置为NoExecute,那么所有没有配置容忍的pod都不会留在这个节点上
[root@k8s-master k8s]# kubectl get po -o wide
NAME                            READY   STATUS      RESTARTS       AGE    IP               NODE        NOMINATED NODE   READINESS GATES
dns-test                        1/1     Running     9 (22h ago)    13d    10.244.169.191   k8s-node2   <none>           <none>
empty-dir-pd                    2/2     Running     18 (21m ago)   24h    10.244.169.130   k8s-node2   <none>           <none>
fluentd-9wn9x                   1/1     Running     8 (22h ago)    12d    10.244.169.129   k8s-node2   <none>           <none>
nginx-deploy-5d886bd55d-c9p76   1/1     Running     0              47s    10.244.169.144   k8s-node2   <none>           <none>
nginx-deploy-5d886bd55d-svhk5   1/1     Running     0              50s    10.244.36.79     k8s-node1   <none>           <none>
test-configfile-po              0/1     Completed   0              25h    10.244.169.188   k8s-node2   <none>           <none>
test-pvc-pd                     1/1     Running     0              6h6m   10.244.169.132   k8s-node2   <none>           <none>
[root@k8s-master k8s]# 

  其中一个在  k8s-node1中的nginx是因为配置了容忍
        tolerations:
      - effect: NoExecute
        key: memory
        operator: Exists
第三种情况:在noexecute中添加tolerationSeconds
kubectl edit deploy nginx-deploy
           tolerations:
      - effect: NoExecute
        key: memory
        operator: Exists
        tolerationSeconds: 30
那么就会出现一个Nginx的pod调来调去(在node1和node2中)

19.亲和力

通过节点的标签的,pod来选择。affinity亲和力的意思

NodeAffinity:节点亲和力:进行 pod 调度时,优先调度到符合条件的亲和力节点上

PodAffinity: Pod 亲和力:将与指定 pod 亲和力相匹配的 pod 部署在同一节点。

PodAntiAffinity : Pod 反亲和力:根据策略尽量部署或不部署到一块

RequiredDuringSchedulingIgnoredDuringExecution:必须满足(硬亲和力)

PreferredDuringSchedulingIgnoredDuringExecution:满足一些(软亲和力)

1.节点亲和性

大致可以下面:官方案例

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In  # 首先,key是必须在node中含有的
            values:
            - antarctica-east1 # 下面的value只需要满足一个就可以了
            - antarctica-west1
      preferredDuringSchedulingIgnoredDuringExecution: # 第二重亲和,如果满足就去这个
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: registry.k8s.io/pause:2.0

案例:

匹配类型:

In:values 列表中满足一个即可

NotIn:一个都不满足,不能在含有这个标签的节点/pod中调度

Exists:只要存在,就满足

DoesNotExist:只有不存在,才满足

Gt:必须要大于节点上的数值才满足。

Lt:必须小于节点上的数值才满足

# 给节点打标签
[root@k8s-master k8s]# kubectl label no k8s-node1 label-1=key-1
node/k8s-node1 labeled
[root@k8s-master k8s]# kubectl label no k8s-node2 label-2=key-2
node/k8s-node2 labeled

如果编辑kubectl edit deploy nginx-deploy
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - preference:
              matchExpressions:
              - key: label-1
                operator: In
                values:
                - key-1
            weight: 1
          - preference:
              matchExpressions:
              - key: label-2
                operator: NotIn #不能调度到含有这个key/value标签的节点上。如果是In,那么就按照权重比例来分配调度
                values:
                - key-2
            weight: 50
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux

[root@k8s-master k8s]# kubectl get po -o wide|grep nginx  # 也就都调度到node1中了
nginx-deploy-945d8bd68-5n7c9   1/1     Running     0              14m   10.244.36.98     k8s-node1   <none>           <none>
nginx-deploy-945d8bd68-7rbvn   1/1     Running     0              14m   10.244.36.92     k8s-node1   <none>           <none>
[root@k8s-master k8s]# 

假设将master的Taint删除,再删除nginx-deploy的pod让其重建,pod就有可能被调度到master上,因为preference(软亲和)中的label只是倾向,不满足时仍然可以调度。

2.pod亲和性

亲和性:当两个pod需要部署到同一个node时使用(一个pod根据另一个pod的位置来调度)。

反亲和性:根据策略尽量部署或不部署到一块

https://kubernetes.io/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/
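
一个 pod 亲和/反亲和的大致写法如下(标签和拓扑键为假设的示意):

spec:
  affinity:
    podAffinity: # 必须和带有 app=backend 标签的 pod 调度到同一个节点
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["backend"]
        topologyKey: kubernetes.io/hostname
    podAntiAffinity: # 尽量不和自己的副本(app=web)放在同一个节点
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values: ["web"]
          topologyKey: kubernetes.io/hostname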

20.认证和鉴权

1.认证

所有 Kubernetes 集群有两类用户:由 Kubernetes 管理的Service Accounts (服务账户)和(Users Accounts) 普通账户。

普通账户是假定被外部或独立服务管理的,由管理员分配 keys,用户像使用 Keystone 或 google 账号一样,被存储在包含 usernames 和 passwords 的 list 的文件里。

需要注意:在 Kubernetes 中不能通过 API 调用将普通用户添加到集群中

  • 普通帐户是针对(人)用户的,服务账户针对 Pod 进程。
  • 普通帐户是全局性。在集群所有namespaces中,名称具有惟一性。
  • 通常,群集的普通帐户可以与企业数据库同步,新的普通帐户创建需要特殊权限。服务账户创建目的是更轻量化,允许集群用户为特定任务创建服务账户。
  • 普通帐户和服务账户的审核注意事项不同。
  • 对于复杂系统的配置包,可以包括对该系统的各种组件的服务账户的定义。
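
服务账户的创建和使用大致如下(名字为假设的示意):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa # 假设的服务账户名
---
apiVersion: v1
kind: Pod
metadata:
  name: sa-demo-pod
spec:
  serviceAccountName: app-sa # pod 中的进程以该服务账户的身份访问 api-server
  containers:
  - name: main
    image: nginx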

21.helm

1.简单介绍

helm管理chart(可以将chart理解成docker镜像)

Kubernetes 包管理器
Helm 是查找、分享和使用软件构件 Kubernetes 的最优方式。

Helm 管理名为 chart 的 Kubernetes 包的工具。Helm 可以做以下的事情:

  • 从头开始创建新的 chart

  • 将 chart 打包成归档(tgz)文件

  • 与存储 chart 的仓库进行交互

  • 在现有的 Kubernetes 集群中安装和卸载 chart

  • 管理与 Helm 一起安装的 chart 的发布周期

    对于Helm,有三个重要的概念:

  1. chart 创建Kubernetes应用程序所必需的一组信息。
  2. config 包含了可以合并到打包的chart中的配置信息,用于创建一个可发布的对象。
  3. release 是一个与特定配置相结合的chart的运行实例。(一个实例)
helm相关命令:
helm repo add <仓库名> <仓库URL>    # 添加仓库
helm repo list                   # 列出已添加的仓库
helm repo update                 # 更新仓库索引
helm repo remove <仓库名>         # 删除仓库
helm search repo <关键词>         # 从已添加仓库搜索
helm search hub <关键词>         # 从 Artifact Hub 搜索
helm install <发布名> <chart名>    # 安装 Chart
helm install <发布名> <chart路径>  # 从本地安装
helm uninstall <发布名>           # 卸载发布
helm list  
helm list --all-namespaces       # 查看所有命名空间的发布
helm list -n <命名空间>          # 查看指定命名空间的发布
helm create <chart名>            # 创建新 Chart
helm lint <chart路径>            # 检查 Chart 语法
helm package <chart路径>         # 打包 Chart
helm status <发布名>             # 查看发布状态

2.chart目录结构

mychart
├── Chart.yaml
├── charts # 该目录保存其他依赖的 chart(子 chart)
├── templates # chart 配置模板,用于渲染最终的 Kubernetes YAML 文件
│   ├── NOTES.txt # 用户运行 helm install 时候的提示信息
│   ├── _helpers.tpl # 用于创建模板时的帮助类
│   ├── deployment.yaml # Kubernetes deployment 配置
│   ├── ingress.yaml # Kubernetes ingress 配置
│   ├── service.yaml # Kubernetes service 配置
│   ├── serviceaccount.yaml # Kubernetes serviceaccount 配置
│   └── tests
│       └── test-connection.yaml
└── values.yaml # 定义 chart 模板中的自定义配置的默认值,可以在执行 helm install 或 helm update 的时候覆盖
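
values.yaml 与 templates 的配合方式大致如下(示意):

# values.yaml 中定义默认值
replicaCount: 2
image:
  repository: nginx
  tag: "1.23"
# templates/deployment.yaml 中通过 {{ .Values.replicaCount }}、{{ .Values.image.repository }} 等模板语法引用这些值,
# helm install / helm upgrade 时可以用 --set replicaCount=3 或 -f my-values.yaml 覆盖默认值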



3.安装redis集群

前提:部署好redis的pvc。
[root@k8s-master k8s]# kubectl get pvc
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
auto-pv-test-pvc   Bound    pvc-29736f04-3bcb-44cc-8169-64a8305c8396   300Mi      RWO            managed-nfs-storage   2d15h
nfs-pvc            Bound    pv0001                                     5Gi        RWX            slow                  3d23h
[root@k8s-master k8s]# kubect^C
[root@k8s-master k8s]# 

[root@k8s-master ~]# kubectl get pvc -n redis
NAME                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
redis-data-redis-master-0     Bound    pvc-269800aa-cb4b-4281-ade3-26c1cd37df45   1Gi        RWO            managed-nfs-storage   2d15h
redis-data-redis-replicas-0   Bound    pvc-2ff13ca2-9939-4dac-8f7a-568f97054690   8Gi        RWO            managed-nfs-storage   2d15h
redis-data-redis-replicas-1   Bound    pvc-0f174498-bc7b-4263-9010-ce7d111b1d01   8Gi        RWO            managed-nfs-storage   2d14h
redis-data-redis-replicas-2   Bound    pvc-fd8fb7cd-803e-49f0-8f6a-646a577938bf   8Gi        RWO            managed-nfs-storage   2d14h

# 查看默认仓库
helm repo list
# 添加仓库
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add aliyun https://apphub.aliyuncs.com/stable
helm repo add azure http://mirror.azure.cn/kubernetes/charts

# 搜索 redis chart
helm search repo redis
# 查看安装说明
helm show readme bitnami/redis

cd /opt/k8s
# 先将 chart 拉到本地
helm pull bitnami/redis --version 17.4.3
 

# 解压后,修改 values.yaml 中的参数
tar -xvf redis-17.4.3.tgz

# 修改 storageClass 为 managed-nfs-storage
# 设置 redis 密码 password
# 修改集群架构 architecture,默认是主从(replication,3个节点),可以修改为 standalone 单机模式
# 修改实例存储大小 persistence.size 为需要的大小
# 修改 service.nodePorts.redis 向外暴露端口,范围 <30000-32767>
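
改动后的 values.yaml 大致类似下面的片段(键名以所下载 chart 的 values.yaml 为准,密码等均为示意):

global:
  storageClass: managed-nfs-storage # 使用前面部署好的 sc 动态制备 pv
architecture: replication # 主从架构,单机可改为 standalone
auth:
  password: wolfcode # 假设的 redis 密码
master:
  persistence:
    size: 1Gi # master 节点的存储大小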

# 安装操作
# 创建命名空间
kubectl create namespace redis

# 安装
cd ../
helm install redis ./redis -n redis
kubectl get all -n redis


3.redis升级和回滚

方法1:编辑valus.yaml文件
修改password为wolfcode123
[root@k8s-master k8s]# helm upgrade redis ./redis/ -n redis
Release "redis" has been upgraded. Happy Helming!
[root@k8s-master k8s]# kubectl get po -n redis  #可以发现是逐步替换的
NAME               READY   STATUS    RESTARTS        AGE
redis-master-0     0/1     Running   0               18s
redis-replicas-0   1/1     Running   2 (47m ago)     2d16h
redis-replicas-1   1/1     Running   1 (2d15h ago)   2d15h
redis-replicas-2   0/1     Running   0               19s
[root@k8s-master k8s]# kubectl exec -it redis-master-0 -n redis -- bash
I have no name!@redis-master-0:/$ redis-cli
127.0.0.1:6379> auth wolfcode123 
OK
127.0.0.1:6379> get name #之前存储的数据也在
"xiaoliu"
方法2:回滚
[root@k8s-master k8s]# helm history redis -n redis
REVISION	UPDATED                 	STATUS    	CHART       	APP VERSION	DESCRIPTION     
1       	Fri Jun 20 17:39:16 2025	superseded	redis-17.4.3	7.0.8      	Install complete
2       	Mon Jun 23 09:57:54 2025	deployed  	redis-17.4.3	7.0.8      	Upgrade complete
[root@k8s-master k8s]# helm rollback redis 1 -n redis
Rollback was a success! Happy Helming!
[root@k8s-master k8s]# kubectl get po -n redis
NAME               READY   STATUS    RESTARTS   AGE
redis-master-0     0/1     Running   0          4s
redis-replicas-0   1/1     Running   0          3m59s
redis-replicas-1   1/1     Running   0          4m25s
redis-replicas-2   0/1     Running   0          4s
[root@k8s-master k8s]# kubectl exec -it redis-master-0 -n redis -- bash 
I have no name!@redis-master-0:/$ redis-cli
127.0.0.1:6379> auth wolfcode  # after the rollback, the revision-1 password works again
OK
127.0.0.1:6379> autho wolfcode123
(error) ERR unknown command 'autho', with args beginning with: 'wolfcode123' 
127.0.0.1:6379> auth wolfcode123  # the password from revision 2 is rejected
(error) WRONGPASS invalid username-password pair or user is disabled.
127.0.0.1:6379> exit

[root@k8s-master k8s]# kubectl get po -n redis
NAME               READY   STATUS    RESTARTS   AGE
redis-master-0     1/1     Running   0          2m36s
redis-replicas-0   1/1     Running   0          80s
redis-replicas-1   1/1     Running   0          105s
redis-replicas-2   1/1     Running   0          2m36s

[root@k8s-master k8s]# kubectl get pvc -n redis  # list the PVCs before deleting them
NAME                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
redis-data-redis-master-0     Bound    pvc-269800aa-cb4b-4281-ade3-26c1cd37df45   1Gi        RWO            managed-nfs-storage   2d16h
redis-data-redis-replicas-0   Bound    pvc-2ff13ca2-9939-4dac-8f7a-568f97054690   8Gi        RWO            managed-nfs-storage   2d16h
redis-data-redis-replicas-1   Bound    pvc-0f174498-bc7b-4263-9010-ce7d111b1d01   8Gi        RWO            managed-nfs-storage   2d15h
redis-data-redis-replicas-2   Bound    pvc-fd8fb7cd-803e-49f0-8f6a-646a577938bf   8Gi        RWO            managed-nfs-storage   2d15h
[root@k8s-master k8s]# kubectl delete pvc redis-data-redis-master-0 redis-data-redis-replicas-0 redis-data-redis-replicas-1 redis-data-redis-replicas-2 -n redis
persistentvolumeclaim "redis-data-redis-master-0" deleted
persistentvolumeclaim "redis-data-redis-replicas-0" deleted
persistentvolumeclaim "redis-data-redis-replicas-1" deleted
persistentvolumeclaim "redis-data-redis-replicas-2" deleted

[root@k8s-master k8s]# kubectl get pv |grep redis
pvc-0f174498-bc7b-4263-9010-ce7d111b1d01   8Gi        RWO            Retain           Released   redis/redis-data-redis-replicas-1   managed-nfs-storage            2d15h
pvc-269800aa-cb4b-4281-ade3-26c1cd37df45   1Gi        RWO            Retain           Released   redis/redis-data-redis-master-0     managed-nfs-storage            2d15h
pvc-2ff13ca2-9939-4dac-8f7a-568f97054690   8Gi        RWO            Retain           Released   redis/redis-data-redis-replicas-0   managed-nfs-storage            2d15h
pvc-fd8fb7cd-803e-49f0-8f6a-646a577938bf   8Gi        RWO            Retain           Released   redis/redis-data-redis-replicas-2   managed-nfs-storage            2d15h
[root@k8s-master k8s]# kubectl delete pv pvc-0f174498-bc7b-4263-9010-ce7d111b1d01 pvc-269800aa-cb4b-4281-ade3-26c1cd37df45 pvc-2ff13ca2-9939-4dac-8f7a-568f97054690 pvc-fd8fb7cd-803e-49f0-8f6a-646a577938bf  # delete the released PVs
persistentvolume "pvc-0f174498-bc7b-4263-9010-ce7d111b1d01" deleted
persistentvolume "pvc-269800aa-cb4b-4281-ade3-26c1cd37df45" deleted
persistentvolume "pvc-2ff13ca2-9939-4dac-8f7a-568f97054690" deleted
persistentvolume "pvc-fd8fb7cd-803e-49f0-8f6a-646a577938bf" deleted


# View the release history
helm history redis
# Roll back to the previous revision
helm rollback redis
# Roll back to a specific revision
helm rollback redis 3
helm delete redis -n redis  # uninstall the redis release with helm

22. Prometheus monitoring

Available monitoring tools include heapster, Weave Scope and Prometheus.

cd /opt/k8s/
git clone --branch release-0.10 https://github.com/prometheus-operator/kube-prometheus.git
cd kube-prometheus
kubectl create -f manifests/setup/
kubectl apply -f manifests/
kubectl get all -n monitoring  # list everything in the monitoring namespace
kubectl get po -n monitoring
NAME                                   READY   STATUS    RESTARTS        AGE
alertmanager-main-0                    2/2     Running   2 (16h ago)     18h
alertmanager-main-1                    2/2     Running   2 (16h ago)     18h
alertmanager-main-2                    2/2     Running   2 (16h ago)     18h
blackbox-exporter-6b79c4588b-84ztq     3/3     Running   3 (16h ago)     18h
grafana-7fd69887fb-gvsqm               1/1     Running   1 (16h ago)     18h
kube-state-metrics-55f67795cd-zh5c6    3/3     Running   3 (16h ago)     18h
node-exporter-jk2b7                    2/2     Running   2 (16h ago)     18h
node-exporter-kdzwz                    2/2     Running   2 (16h ago)     18h
node-exporter-r5gmj                    2/2     Running   2 (16h ago)     18h
prometheus-adapter-5565cc8d76-4xg57    1/1     Running   2 (8m38s ago)   18h
prometheus-adapter-5565cc8d76-mzg9d    1/1     Running   2 (8m37s ago)   18h
prometheus-k8s-0                       2/2     Running   2 (16h ago)     18h
prometheus-k8s-1                       2/2     Running   2 (16h ago)     18h
prometheus-operator-6dc9f66cb7-qsnbm   2/2     Running   2 (16h ago)     18h
[root@k8s-master manifests]# kubectl get svc -n monitoring
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-main       ClusterIP   10.98.163.168    <none>        9093/TCP,8080/TCP            18h
alertmanager-operated   ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   18h
blackbox-exporter       ClusterIP   10.98.111.242    <none>        9115/TCP,19115/TCP           18h
grafana                 ClusterIP   10.98.101.66     <none>        3000/TCP                     18h
kube-state-metrics      ClusterIP   None             <none>        8443/TCP,9443/TCP            18h
node-exporter           ClusterIP   None             <none>        9100/TCP                     18h
prometheus-adapter      ClusterIP   10.99.230.58     <none>        443/TCP                      18h
prometheus-k8s          ClusterIP   10.102.174.143   <none>        9090/TCP,8080/TCP            18h
prometheus-operated     ClusterIP   None             <none>        9090/TCP                     18h
prometheus-operator     ClusterIP   None             <none>        8443/TCP                     18h
[root@k8s-master manifests]# 
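Until the Ingress below is applied, the UIs can still be reached temporarily with a port-forward (a sketch; this forwards the Grafana service to port 3000 on the local machine):

kubectl port-forward -n monitoring svc/grafana 3000:3000 --address 0.0.0.0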

vim manifests/prometheus-ingress.yaml  # the web UIs are not reachable from outside yet, so add an Ingress
[root@k8s-master manifests]# cat prometheus-ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: monitoring
  name: prometheus-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: grafana.wolfcode.cn  # domain for Grafana
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000
  - host: prometheus.wolfcode.cn  # domain for Prometheus
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-k8s 
            port:
              number: 9090
  - host: alertmanager.wolfcode.cn  # domain for Alertmanager
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: alertmanager-main
            port:
              number: 9093
[root@k8s-master manifests]# 

kubectl apply -f  prometheus-ingress.yaml 
# The hosts file on the local Windows machine also needs the following entries:
192.168.157.131 grafana.wolfcode.cn
192.168.157.131 prometheus.wolfcode.cn
192.168.157.131 alertmanager.wolfcode.cn
192.168.157.131 kibana.wolfcode.cn
192.168.157.131 kubesphere.wolfcode.cn

# Uninstall kube-prometheus
kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup


23. ELK

Logical architecture diagram:

[figure: log-collection logical architecture]

ELK logical architecture diagram:
[figure: ELK logical architecture]

25. Configuring kube-dashboard

[root@k8s-master elk]# kubectl describe serviceaccount dashboard-admin -n kubernetes-dashboard
Name:                dashboard-admin
Namespace:           kubernetes-dashboard
Labels:              k8s-app=kubernetes-dashboard
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   dashboard-admin-token-vlpk4
Tokens:              dashboard-admin-token-vlpk4
Events:              <none>
[root@k8s-master elk]# kubectl describe secrets dashboard-admin-token-vlpk4 -n kubernetes-dashboard
Name:         dashboard-admin-token-vlpk4
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 2c3ff74c-eff8-4ef9-9269-2e66b62a3d3c

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1099 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImttdmxwazzLmlvL3NlcnZpY2VhY2NvtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWTkyNjktMmU2NQ9V4BHVr8JKis6bwkII7H5mq7jcszxoAniQ_dvIMxciaJRA3w2xXgl5her6sbg_F4ElXA
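The dashboard-admin service account and its cluster-admin binding shown above are assumed to have been created beforehand; a typical way to create them (not part of the original notes) is:

kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin

The token printed above is then pasted into the dashboard login page.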

26. KubeSphere installation

Output from the initial installation:

#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.157.131:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2025-06-25 16:52:52
#####################################################

Change the password to 123456.

[root@k8s-master kubesphere]# cat default-storage-class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local
  annotations:
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/var/openebs/local/"
    openebs.io/cas-type: local
    storageclass.beta.kubernetes.io/is-default-class: "true"
    storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce"]'
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"cas.openebs.io/config":"- name: StorageType\n  value: \"hostpath\"\n- name: BasePath\n  value: \"/var/openebs/local/\"","openebs.io/cas-type":"local","storageclass.beta.kubernetes.io/is-default-class":"true","storageclass.kubesphere.io/supported-access-modes":"[\"ReadWriteOnce\"]"},"name":"local"},"provisioner":"openebs.io/local","reclaimPolicy":"Delete","volumeBindingMode":"WaitForFirstConsumer"}
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
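To apply the class and confirm it is picked up as the default (a sketch):

kubectl apply -f default-storage-class.yaml
kubectl get storageclass   # the local class should be marked (default)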

kubectl patch users admin -p '{"spec":{"password":"123456"}}' --type='merge' && kubectl annotate users admin iam.kubesphere.io/password-encrypted-  # reset the admin password; removing the annotation lets KubeSphere re-encrypt it

27. CI/CD

[figure: CI/CD overview]

28. Harbor installation

wget https://github.com/goharbor/harbor/releases/download/v2.5.0/harbor-offline-installer-v2.5.0.tgz
# Unpack the offline installer after downloading
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose  # docker-compose is required (mind the version)
Make a copy of harbor.yml and edit the file:
hostname: 192.168.157.132
http:
  port: 8857
Comment out the https section:
# https related config
#https:
  # https port for harbor, default is 443
#  port: 443
  # The path of cert and key files for nginx
#  certificate: /your/certificate/path
#  private_key: /your/private/key/path
Change the admin password: harbor_admin_password: wolfcode
Configure the data directory: data_volume: /data1
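Because Harbor is exposed over plain HTTP here, every Docker host that pushes to or pulls from it normally also needs the registry whitelisted as insecure (an assumption based on the HTTP setup above, not shown in the original notes):

# /etc/docker/daemon.json on each node that uses the registry
{
  "insecure-registries": ["192.168.157.132:8857"]
}
# then restart docker: systemctl restart docker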


Create the Harbor registry credentials as a secret in the cluster:
kubectl create secret docker-registry harbor-secret --docker-server=192.168.157.132:8857 --docker-username=admin --docker-password=wolfcode -n kube-devops
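Workloads reference this secret through imagePullSecrets so the kubelet can pull private images from Harbor; a minimal sketch (the pod name is hypothetical, the image is the one built later for Jenkins):

apiVersion: v1
kind: Pod
metadata:
  name: pull-demo
  namespace: kube-devops
spec:
  imagePullSecrets:
  - name: harbor-secret
  containers:
  - name: app
    image: 192.168.157.132:8857/wolfcode/jenkins-maven:v1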

29. SonarQube installation

[root@k8s-master sonarqube]# cat pgsql.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
  namespace: kube-devops
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: "managed-nfs-storage"
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-sonar
  namespace: kube-devops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-sonar
  template:
    metadata:
      labels:
        app: postgres-sonar
    spec:
      containers:
      - name: postgres-sonar
        image: postgres:14.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          value: "sonarDB"
        - name: POSTGRES_USER
          value: "sonarUser"
        - name: POSTGRES_PASSWORD 
          value: "123456"
        volumeMounts:
          - name: data
            mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: postgres-data
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-sonar
  namespace: kube-devops
  labels:
    app: postgres-sonar
spec:
  type: NodePort
  ports:
  - name: postgres-sonar
    port: 5432
    targetPort: 5432
    protocol: TCP
  selector:
    app: postgres-sonar


[root@k8s-master sonarqube]# cat sonarqube.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sonarqube-data
  namespace: kube-devops
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: "managed-nfs-storage"
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarqube
  namespace: kube-devops
  labels:
    app: sonarqube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarqube
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      initContainers:
      - name: init-sysctl
        image: busybox:1.28.4
        imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: sonarqube
        image: sonarqube:9.9-community
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9000
        env:
        - name: SONARQUBE_JDBC_USERNAME
          value: "sonarUser"
        - name: SONARQUBE_JDBC_PASSWORD
          value: "123456"
        - name: SONARQUBE_JDBC_URL
          value: "jdbc:postgresql://postgres-sonar:5432/sonarDB"
        livenessProbe:
          httpGet:
            path: /sessions/new
            port: 9000
          initialDelaySeconds: 60
          periodSeconds: 30
        readinessProbe:
          httpGet:
            path: /sessions/new
            port: 9000
          initialDelaySeconds: 60
          periodSeconds: 30
          failureThreshold: 6
        volumeMounts:
        - mountPath: /opt/sonarqube/conf
          name: data
        - mountPath: /opt/sonarqube/data
          name: data
        - mountPath: /opt/sonarqube/extensions
          name: data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: sonarqube-data 
---
apiVersion: v1
kind: Service
metadata:
  name: sonarqube
  namespace: kube-devops
  labels:
    app: sonarqube
spec:
  type: NodePort
  ports:
  - name: sonarqube
    port: 9000
    targetPort: 9000
    protocol: TCP
  selector:
    app: sonarqube
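Both manifests can then be applied (a sketch; assumes the kube-devops namespace already exists and the files are in the current directory):

kubectl apply -f pgsql.yaml
kubectl apply -f sonarqube.yaml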

kubectl get po -n kube-devops
kubectl get svc -n kube-devops

[root@k8s-master sonarqube]# kubectl get svc -n kube-devops
NAME             TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
postgres-sonar   NodePort   10.110.242.64   <none>        5432:30217/TCP   4m7s
sonarqube        NodePort   10.104.21.94    <none>        9000:30326/TCP   4m5s
[root@k8s-master sonarqube]# 


Access: ip:30326
Username: admin
Password: 123456

The UI can be switched to Chinese with a localization plugin.

Related configuration files:

https://blog.csdn.net/jy02268879/article/details/141941660

30. Jenkins

[root@k8s-master jenkins]# cat Dockerfile 
FROM jenkins/jenkins:2.462.2-jdk11
ADD ./apache-maven-3.9.0-bin.tar.gz /usr/local/
ADD ./sonar-scanner-cli-4.8.0.2856-linux.zip /usr/local/
USER root
RUN apt-get update && apt-get install -y unzip
WORKDIR /usr/local/
RUN unzip sonar-scanner-cli-4.8.0.2856-linux.zip

ENV MAVEN_HOME=/usr/local/apache-maven-3.9.0
ENV PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:$PATH
 
USER root
 
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
USER jenkins
[root@k8s-master jenkins]# 

docker build -t jenkins-maven:v1 .    # build the image
docker images
docker images | grep jenkins
docker tag jenkins-maven:v1 192.168.157.132:8857/wolfcode/jenkins-maven:v1    # tag it for the Harbor registry
docker login -uadmin 192.168.157.132:8857
docker push 192.168.157.132:8857/wolfcode/jenkins-maven:v1    # push the image to Harbor



Create Jenkins from the yaml files under manifests/.
Password: 915084b4c84dc9b0b
[root@k8s-master jenkins]# kubectl get po -n kube-devops
NAME                             READY   STATUS    RESTARTS   AGE
jenkins-777787d7b8-mzs6r         1/1     Running   0          116s
postgres-sonar-f644b4fbd-fpspr   1/1     Running   0          122m
sonarqube-8456c56d59-pzxxc       1/1     Running   0          122m

[root@k8s-master jenkins]# kubectl get svc -n kube-devops
NAME              TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
jenkins-service   NodePort   10.106.219.232   <none>        8080:31620/TCP   2m28s


Jenkins login:
http://192.168.157.131:31620/user/admin/
admin / wolfcode

31. Project deployment

redis

/opt/k8s/devops/microservices/basics/redis
Password: wolfcode123
[root@k8s-master basics]# helm install redis ./redis/ -n redis
NAME: redis
LAST DEPLOYED: Wed Jul 16 14:39:43 2025
NAMESPACE: redis
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: redis
CHART VERSION: 17.4.3
APP VERSION: 7.0.8

[root@k8s-master basics]# kubectl get po -n redis
NAME               READY   STATUS    RESTARTS   AGE
redis-master-0     1/1     Running   0          2m7s
redis-replicas-0   1/1     Running   0          2m7s
redis-replicas-1   1/1     Running   0          90s
redis-replicas-2   1/1     Running   0          62s
[root@k8s-master basics]# kubectl get svc -n redis
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
redis-headless   ClusterIP   None             <none>        6379/TCP         2m39s
redis-master     NodePort    10.111.149.215   <none>        6379:32718/TCP   2m39s
redis-replicas   ClusterIP   10.97.12.3       <none>        6379/TCP         2m39s
[root@k8s-master basics]# kubectl exec -it redis-replicas-0 -n redis -- bash



rocketmq

[root@k8s-master examples]# pwd
/opt/k8s/devops/microservices/basics/rocketmq/examples
[root@k8s-master examples]# ls
dev.yaml  test.yaml
[root@k8s-master examples]# 

helm -n rocketmq install rocketmq -f examples/dev.yaml charts/rocketmq/

 kubectl edit svc ingress-nginx-controller -n ingress-nginx   # change the service type to NodePort
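The same change can be made non-interactively with a patch (equivalent sketch):

kubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{"spec":{"type":"NodePort"}}'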
 
 192.168.157.131:8080

nacos

http://192.168.157.131:31474/nacos/#/login  # login page

Default username / password: nacos / nacos

