K8s Scheduler Extension (scheduler)

Published: 2024-11-28

1. Extending the K8S Scheduler Filter Plugin

To get familiar with the K8S scheduler extension workflow, only the Filter plugin is modified for now.

  1. Prepare the environment (download the source archive directly from GitHub and extract it; the extraction must be done on a Linux system).
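
A minimal sketch of this step (the release tag is only an example; pick the version you actually want to build):

# download and unpack the Kubernetes source on a Linux machine
wget https://github.com/kubernetes/kubernetes/archive/refs/tags/v1.22.7.tar.gz
tar -xzf v1.22.7.tar.gz
cd kubernetes-1.22.7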

2. Write the scheduler plugin code

Write the scheduling plugin code inside the Kubernetes source tree. We create a new plugin directory under pkg/scheduler/framework/plugins/.

cd pkg/scheduler/framework/plugins/
mkdir highcomm
cd highcomm

In the highcomm directory, create the file highcomm.go, which holds the core plugin code.

Plugin code: highcomm.go

package highcomm

import (
    "context"
     v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/kubernetes/pkg/scheduler/framework"
)

const (
    // Plugin name
    Name = "HighCommPodFilter"
)

// HighCommPodFilter is the plugin struct
type HighCommPodFilter struct {
    handle framework.Handle
}

// New is the plugin constructor, called by the scheduler framework
func New(obj runtime.Object, handle framework.Handle) (framework.Plugin, error) {
    return &HighCommPodFilter{handle: handle}, nil
}

// Filter implements the Filter extension point
func (f *HighCommPodFilter) Filter(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status {
    // Check whether the Pod carries the high-comm label; Pods without it are not restricted
    if _, exists := pod.Labels["high-comm"]; !exists {
        return framework.NewStatus(framework.Success)
    }

    // Check whether the node carries the RDMA label
    if value, exists := nodeInfo.Node().Labels["node.kubernetes.io/rdma-enabled"]; !exists || value != "true" {
        return framework.NewStatus(framework.Unschedulable, "Node does not support RDMA")
    }

    return framework.NewStatus(framework.Success)
}

// Name returns the plugin name
func (f *HighCommPodFilter) Name() string {
    return Name
}

An alternative, stricter variant of the Filter (written for a separate plugin type, here called RdmaAware, which would need its own struct, New and Name just like HighCommPodFilter above) only admits Pods that explicitly carry high-comm: "true", and matches nodes on a plain rdma=true label:

func (r *RdmaAware) Filter(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status {
    // Reject Pods that do not carry the "high-comm=true" label
    if value, ok := pod.Labels["high-comm"]; !ok || value != "true" {
        return framework.NewStatus(framework.Unschedulable, "Pod does not have high-comm=true label")
    }

    // Reject nodes that do not carry the "rdma=true" label
    node := nodeInfo.Node()
    if value, ok := node.Labels["rdma"]; !ok || value != "true" {
        return framework.NewStatus(framework.Unschedulable, "Node does not support RDMA")
    }

    // A nil Status is treated as Success: allow scheduling
    return nil
}

In this plugin we define a Filter function: only Pods that carry the high-comm label are subjected to the check, and for those Pods only nodes labelled node.kubernetes.io/rdma-enabled=true are considered suitable.
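
As a quick sanity check before rebuilding the whole scheduler, the Filter can be exercised with a small unit test next to the plugin. This is a minimal sketch (the file name highcomm_test.go and the test cases are assumptions; framework.NewNodeInfo and the Status codes come from the scheduler framework):

package highcomm

import (
    "context"
    "testing"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/kubernetes/pkg/scheduler/framework"
)

func TestHighCommPodFilter(t *testing.T) {
    plugin := &HighCommPodFilter{} // the handle is not used by Filter

    // A Pod that declares high communication requirements
    pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{
        Name:   "high-comm-pod",
        Labels: map[string]string{"high-comm": "true"},
    }}

    // Case 1: a node without the RDMA label must be rejected
    plainNode := &v1.Node{ObjectMeta: metav1.ObjectMeta{Name: "plain-node"}}
    plainInfo := framework.NewNodeInfo()
    plainInfo.SetNode(plainNode)
    if status := plugin.Filter(context.Background(), nil, pod, plainInfo); status.Code() != framework.Unschedulable {
        t.Errorf("expected Unschedulable on a non-RDMA node, got %v", status)
    }

    // Case 2: a node with the RDMA label must pass
    rdmaNode := &v1.Node{ObjectMeta: metav1.ObjectMeta{
        Name:   "rdma-node",
        Labels: map[string]string{"node.kubernetes.io/rdma-enabled": "true"},
    }}
    rdmaInfo := framework.NewNodeInfo()
    rdmaInfo.SetNode(rdmaNode)
    if status := plugin.Filter(context.Background(), nil, pod, rdmaInfo); !status.IsSuccess() {
        t.Errorf("expected Success on an RDMA-enabled node, got %v", status)
    }
}

Run it from the source root with go test ./pkg/scheduler/framework/plugins/highcomm/...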

Register the plugin

Register the plugin in kubernetes/pkg/scheduler/framework/plugins/registry.go:

import (
    "k8s.io/kubernetes/pkg/scheduler/framework/plugins/highcomm" // add this line
)

// Register the new plugin in the in-tree plugin registry
func NewInTreeRegistry() runtime.Registry {
    return runtime.Registry{
        ...
        highcomm.Name: highcomm.New, // register the new plugin
    }
}

3. Compile the scheduler

Build only the kube-scheduler binary:

make WHAT=cmd/kube-scheduler
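
The binary is written to _output/bin/ (the path the first Dockerfile below copies from). A quick sanity check, for example:

ls -lh _output/bin/kube-scheduler
_output/bin/kube-scheduler --version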

4. Create the scheduler configuration file

Create a kube-scheduler configuration file, kube-scheduler-config.yaml, that enables the HighCommPodFilter plugin (note: apiVersion kubescheduler.config.k8s.io/v1 requires a scheduler built from v1.25+ sources; for a v1.22 build, use v1beta2):

apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf
profiles:
  - schedulerName: default-scheduler
    plugins:
      filter:
        enabled:
          - name: HighCommPodFilter
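
The static Pod manifest in step 6 mounts this file from the host via a hostPath volume, so also place a copy at /etc/kubernetes/ on the control-plane node (with FileOrCreate, a missing host file would otherwise be mounted as an empty file over the one baked into the image):

sudo cp kube-scheduler-config.yaml /etc/kubernetes/kube-scheduler-config.yaml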

5. Build the scheduler image

Package the custom kube-scheduler binary into a Docker image so it can run inside Kubernetes:

(Right now I cannot pull the K8S-related base images needed for the build from Docker Hub, so in this step I build my own scheduler image on top of the original scheduler image.)

sudo docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.22.7 k8s.gcr.io/kube-scheduler:v1.22.7
# Dockerfile
FROM k8s.gcr.io/kube-scheduler:v1.22.7
COPY _output/bin/kube-scheduler /usr/local/bin/kube-scheduler
COPY kube-scheduler-config.yaml /etc/kubernetes/kube-scheduler-config.yaml

CMD ["kube-scheduler", "--config=/etc/kubernetes/kube-scheduler-config.yaml"]

If the build above fails because the files cannot be found, copy them into the root of the Kubernetes source tree (the Docker build context) and use this Dockerfile instead:

# Dockerfile
FROM k8s.gcr.io/kube-scheduler:v1.22.7
COPY kube-scheduler /usr/local/bin/kube-scheduler
COPY kube-scheduler-config.yaml /etc/kubernetes/kube-scheduler-config.yaml

CMD ["kube-scheduler", "--config=/etc/kubernetes/kube-scheduler-config.yaml"]
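
The copy into the build context mentioned above might look like this (a sketch; the config file path depends on where it was created in step 4):

# run from the Kubernetes source root (the Docker build context)
cp _output/bin/kube-scheduler .
cp /path/to/kube-scheduler-config.yaml .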

Build the image:

docker build -t custom-kube-scheduler:v1.22.7 .
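
Note: if the control-plane node runs containerd rather than the Docker runtime, the locally built image also has to be imported into containerd's k8s.io namespace before the static Pod can use it (a sketch, assuming ctr is available on the node):

docker save custom-kube-scheduler:v1.22.7 -o custom-kube-scheduler.tar
sudo ctr -n k8s.io images import custom-kube-scheduler.tar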

6. Deploy the custom scheduler

This step modifies 4 places in the kube-scheduler static Pod manifest (typically /etc/kubernetes/manifests/kube-scheduler.yaml on the control-plane node): the image, the command, the volumeMounts, and the volumes:

apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
    - name: kube-scheduler
      image: custom-kube-scheduler:v1.22.7
      command:
        - "/usr/local/bin/kube-scheduler"
        - --config=/etc/kubernetes/kube-scheduler-config.yaml
      volumeMounts:
        - mountPath: /etc/kubernetes/kube-scheduler-config.yaml
          name: schedulerconfig
          readOnly: true
  volumes:
    - hostPath:
        path: /etc/kubernetes/kube-scheduler-config.yaml
        type: FileOrCreate
      name: schedulerconfig

7. Verify the scheduler plugin

# Add the RDMA label to one node
kubectl label node <rdma-node> node.kubernetes.io/rdma-enabled=true

# Make sure the other nodes do not carry the label (the trailing '-' removes it)
kubectl label node <non-rdma-node> node.kubernetes.io/rdma-enabled-
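
To confirm which nodes carry the label, for example:

kubectl get nodes -L node.kubernetes.io/rdma-enabled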

Create a Pod with high communication requirements. Write the Pod spec into a YAML file, for example high-comm-pod.yaml, with the following content (a test Pod carrying the high-comm label):

apiVersion: v1
kind: Pod
metadata:
  name: no-selector-pod
  labels:
    high-comm: "true"
spec:
  containers:
    - name: nginx
      image: nginx:1.7.9

If no node carries the RDMA label, this Pod stays unscheduled. For comparison, a test Pod without the high-comm label (under the Filter above it is not subject to the RDMA check):

apiVersion: v1
kind: Pod
metadata:
  name: wrong-selector-pod
spec:
  containers:
    - name: nginx
      image: nginx:1.7.9

kubectl apply -f high-comm-pod.yaml

The scheduler still runs under its original name (default-scheduler), but the new filtering behavior is now in effect.

After replacing the static Pod manifest and the config file, restart the kubelet so it re-creates the scheduler Pod:

sudo systemctl restart kubelet
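
A few commands to confirm the result (the Pod names follow the manifests above; the exact scheduler Pod name depends on the control-plane node name):

# the scheduler static Pod should come back running the custom image
kubectl -n kube-system get pods | grep kube-scheduler

# the high-comm Pod should land only on the RDMA-labelled node
kubectl get pod no-selector-pod -o wide

# if no node qualifies, it stays Pending with a FailedScheduling event
kubectl describe pod no-selector-pod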