14. K8s Elasticity: Next-Generation Autoscaling with KEDA
1. Kubernetes Automatic Scaling: HPA
1.1 What is HPA?
HPA (Horizontal Pod Autoscaler) is Kubernetes' built-in horizontal Pod autoscaler. It automatically adjusts the replica count of a workload based on metrics such as CPU utilization, memory utilization, or other custom metrics.
By adding or removing replicas, HPA keeps an application able to handle its current traffic and load while avoiding wasted resources.
1.2 How It Works
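The HPA controller periodically (every 15 seconds by default) reads the metrics of the target Pods from the metrics API, computes the desired replica count, and updates the workload's scale subresource. The core formula, as documented by Kubernetes, is:
desiredReplicas = ceil( currentReplicas × currentMetricValue / desiredMetricValue )
For example, with the 10% CPU target used later in this section, 1 current replica and a measured utilization of 80%, the desired count is ceil(1 × 80 / 10) = 8, bounded by the configured minimum and maximum replica counts.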
1.3 Key Points for Using HPA
- metrics-server (or another custom metrics server) must be installed (a quick check is shown below)
- requests must be configured on the target containers
- Objects that cannot be scaled down, such as DaemonSet, cannot be scaled by HPA
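A quick way to confirm the metrics prerequisite is satisfied (assuming metrics-server is the adapter in use): if both commands below return data, resource-based HPA can work.
[root@k8s-master01 ~]# kubectl get apiservices v1beta1.metrics.k8s.io
[root@k8s-master01 ~]# kubectl top nodes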
1.4 CPU-Based Autoscaling
# Create a test Deployment:
[root@k8s-master01 ~]# kubectl create deploy nginx-server --image=crpi-q1nb2n896zwtcdts.cn-beijing.personal.cr.aliyuncs.com/ywb01/nginx:1.15 --dry-run=client -oyaml > nginx-server.yaml
[root@k8s-master01 ~]# vim nginx-server.yaml
[root@k8s-master01 ~]# cat nginx-server.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-server
  name: nginx-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-server
  template:
    metadata:
      labels:
        app: nginx-server
    spec:
      containers:
      - image: crpi-q1nb2n896zwtcdts.cn-beijing.personal.cr.aliyuncs.com/ywb01/nginx:1.15
        name: nginx
        resources:
          requests:
            memory: "64Mi"
            cpu: "10m"
          limits:
            memory: "128Mi"
            cpu: "500m"
[root@k8s-master01 ~]# kubectl create -f nginx-server.yaml
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-server-679757959f-8dvql 1/1 Running 0 118s
# Create a Service:
[root@k8s-master01 ~]# kubectl expose deployment nginx-server --port=80
[root@k8s-master01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-server ClusterIP 10.108.227.115 <none> 80/TCP 7s
# Create an HPA that scales out when average CPU utilization reaches 10%, with a maximum of 10 replicas:
[root@k8s-master01 ~]# kubectl autoscale deployment nginx-server --cpu-percent=10 --min=1 --max=10
# Check the HPA that was created
[root@k8s-master01 ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginx-server Deployment/nginx-server cpu: <unknown>/10% 1 10 0 5s
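For reference, the kubectl autoscale command above is roughly equivalent to the following declarative autoscaling/v2 manifest (shown only for comparison, not used in this walkthrough):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-server
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-server
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 10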
Run a load test:
[root@k8s-master01 ~]# while true; do wget -q -O- http://10.108.227.115 > /dev/null; done
# Check the HPA:
[root@k8s-master01 ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginx-server Deployment/nginx-server cpu: 80%/10% 1 10 1 5m11s
# Check the scale-out details:
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-server-679757959f-8dvql 1/1 Running 0 8m26s
nginx-server-679757959f-fkg9h 1/1 Running 0 10s
nginx-server-679757959f-sd9gx 0/1 ContainerCreating 0 10s
nginx-server-679757959f-x8lck 0/1 ContainerCreating 0 10s
# After the load test is stopped, the Pods are scaled back down to 1
[root@k8s-master01 ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginx-server Deployment/nginx-server cpu: 0%/10% 1 10 1 12m
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-server-679757959f-8dvql 1/1 Running 0 16m
1.5 Memory-Based Autoscaling (not recommended in production; demo only)
# Create a test Deployment:
[root@k8s-master01 ~]# kubectl create deploy memory-consumer --image=crpi-q1nb2n896zwtcdts.cn-beijing.personal.cr.aliyuncs.com/ywb01/stress:latest --dry-run=client -oyaml > memory-consumer.yaml
[root@k8s-master01 ~]# vim memory-consumer.yaml
[root@k8s-master01 ~]# cat memory-consumer.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: memory-consumer
  name: memory-consumer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: memory-consumer
  template:
    metadata:
      labels:
        app: memory-consumer
    spec:
      containers:
      - image: crpi-q1nb2n896zwtcdts.cn-beijing.personal.cr.aliyuncs.com/ywb01/stress:latest
        name: stress
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "1024Mi"
            cpu: "500m"
        args:
        - stress
        - --vm
        - "1" # number of stress workers to start
        - --vm-bytes
        - "64M" # memory allocated by each worker
        - --verbose
        - --vm-hang
        - "3600"
[root@k8s-master01 ~]# kubectl create -f memory-consumer.yaml
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
memory-consumer-57b795c9ff-prsmg 1/1 Running 0 36s
# Check memory usage:
[root@k8s-master01 ~]# kubectl top po
NAME CPU(cores) MEMORY(bytes)
memory-consumer-57b795c9ff-prsmg 0m 64Mi
# Create an HPA that scales out when memory utilization reaches 80%:
[root@k8s-master01 ~]# vim memory-consumer-hpa.yaml
[root@k8s-master01 ~]# cat memory-consumer-hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: memory-consumer-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: memory-consumer
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
# Create the resource:
[root@k8s-master01 ~]# kubectl create -f memory-consumer-hpa.yaml
# Check the HPA:
[root@k8s-master01 ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
memory-consumer-hpa Deployment/memory-consumer memory: 50%/80% 1 5 1 4m36s
# Load test: change the number of stress workers to 2
[root@k8s-master01 ~]# kubectl edit deploy memory-consumer
....
        args:
        - stress
        - --vm
        - "2" # change the worker count to 2
        - --vm-bytes
        - "64M"
        - --verbose
        - --vm-hang
        - "3600"
...
[root@k8s-master01 ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
memory-consumer-hpa Deployment/memory-consumer memory: 100%/80% 1 5 3 8m37s
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
memory-consumer-766c57b488-9s9x5 1/1 Running 0 2m14s
memory-consumer-766c57b488-gmq46 1/1 Running 0 69s
memory-consumer-766c57b488-ntdd8 1/1 Running 0 9s
memory-consumer-766c57b488-pvnpd 1/1 Running 0 99s
memory-consumer-766c57b488-x67ts 1/1 Running 0 39s
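This run also shows why memory-based HPA is rarely useful in production: each stress Pod allocates its own 2 × 64M, so per-Pod utilization stays at roughly 128Mi/128Mi = 100% no matter how many replicas exist. Since adding replicas never lowers the metric, the HPA keeps scaling out: ceil(1 × 100/80) = 2, then ceil(2 × 100/80) = 3, and so on until the maxReplicas of 5 is reached. Real applications behave the same way whenever memory usage does not drop as load is spread across more replicas.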
2. Kubernetes Next-Generation Autoscaling: KEDA
2.1 Why KEDA?
The native HPA scales mainly on resource metrics; in practice, workloads often also need:
- Scaling based on events
- Scaling based on message queues
- Scaling based on traffic
- Scaling based on custom metrics
- Scaling based on various policies
2.2 What is KEDA?
KEDA (Kubernetes Event-Driven Autoscaler) is an event-driven autoscaler for Kubernetes. With KEDA, any workload in the cluster can be scaled based on the number of events or messages waiting to be processed.
The core idea of KEDA is to scale an application out only when there is work to do, and to scale it back down when there is none, even all the way to zero replicas. This improves resource utilization and reduces cost.
2.3 KEDA Use Cases
- Message-queue-driven task processing: KEDA can scale a workload up or down based on the number of messages in various queues
- HTTP-request-based scaling: KEDA can scale a workload based on the volume of HTTP requests
- Scheduled and batch jobs: KEDA provides time-based (cron) triggers for scaling on a schedule
- Serverless-style workloads: KEDA supports scale-to-zero, giving Serverless-like behavior
2.4 KEDA Architecture and Workflow
- ScaledObject: KEDA's core resource, which defines the scaling rules
- Controller: the KEDA controller; it watches KEDA objects through the APIServer and, according to the rules, drives the scalers to adjust replica counts
- Scaler: works together with the HPA to perform the actual scaling
- External Trigger Source: the external event or data source that triggers scaling
- Metrics Adapter: exposes the custom/external metrics that scaling decisions are based on
2.5 KEDA Core Resources
- ScaledObject: controls the replica count of a Deployment or similar resource; multiple event and message sources can drive the replica count, and scale-to-zero is supported
- ScaledJob: triggers one-off Job tasks based on various external event sources, mainly for batch or temporary workloads, similar to Serverless
- TriggerAuthentication: manages authentication and authorization between KEDA scalers and external event sources (such as RabbitMQ, AWS SQS, Azure Queue); credentials can come from environment variables, ConfigMaps, Secrets, and so on
2.6 Installing KEDA
# Add the KEDA Helm repository:
[root@k8s-master01 ~]# helm repo add kedacore https://kedacore.github.io/charts
[root@k8s-master01 ~]# helm repo update
# Install KEDA:
[root@k8s-master01 ~]# helm install keda kedacore/keda --namespace keda --create-namespace
# Check the service status:
[root@k8s-master01 ~]# kubectl get po -n keda
NAME READY STATUS RESTARTS AGE
keda-admission-webhooks-7fc99cdd4d-xj54z 1/1 Running 0 6m41s
keda-operator-54ffcbbfd6-xkn8j 1/1 Running 1 (3m6s ago) 6m41s
keda-operator-metrics-apiserver-c5b6f8b88-l4fpv 1/1 Running 0 6m41s
# Check the custom resources:
[root@k8s-master01 ~]# kubectl api-resources | grep keda
cloudeventsources eventing.keda.sh/v1alpha1 true CloudEventSource
clustercloudeventsources eventing.keda.sh/v1alpha1 false ClusterCloudEventSource
clustertriggerauthentications cta,clustertriggerauth keda.sh/v1alpha1 false ClusterTriggerAuthentication
scaledjobs sj keda.sh/v1alpha1 true ScaledJob
scaledobjects so keda.sh/v1alpha1 true ScaledObject
triggerauthentications ta,triggerauth keda.sh/v1alpha1 true TriggerAuthentication
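Optionally, confirm that KEDA's metrics adapter has registered the external metrics API (this is what the HPAs created by KEDA will query):
[root@k8s-master01 ~]# kubectl get apiservices v1beta1.external.metrics.k8s.io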
2.7 Hands-On: Scheduled Scaling
KEDA can scale a workload on a schedule, including scaling down to 0. Suppose a service only sees peak traffic between 07:00 and 09:00 every morning: KEDA can scale it out during that window and shrink the replicas the rest of the time to save resources.
# Existing Pods:
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-server-679757959f-m9s2h 1/1 Running 0 9m31s
# Create a cron-type ScaledObject:
[root@k8s-master01 ~]# vim cron.yaml
[root@k8s-master01 ~]# cat cron.yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cron-scaledobject
spec:
  scaleTargetRef:
    name: nginx-server
  minReplicaCount: 0  # minimum replicas; set to 0 here for testing
  cooldownPeriod: 300 # cooldown: how long after the end time before scaling back down
  triggers:
  - type: cron
    metadata:
      timezone: Asia/Shanghai
      start: 00 07 * * *
      end: 00 09 * * *
      desiredReplicas: "3" # replica count during the active window
[root@k8s-master01 ~]# kubectl create -f cron.yaml
# Check the ScaledObject status:
[root@k8s-master01 ~]# kubectl get so
NAME SCALETARGETKIND SCALETARGETNAME MIN MAX READY ACTIVE FALLBACK PAUSED TRIGGERS AUTHENTICATIONS AGE
cron-scaledobject apps/v1.Deployment nginx-server 0 True False False Unknown cron 48s
# Check the HPA created by KEDA:
[root@k8s-master01 ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
keda-hpa-cron-scaledobject Deployment/nginx-server <unknown>/1 (avg) 1 100 0 101s
# The Pod has been removed: the current time is outside the 07:00-09:00 window, so KEDA scales down to minReplicaCount (0)
[root@k8s-master01 ~]# kubectl get po
No resources found in default namespace.
# Clean up
[root@k8s-master01 ~]# kubectl delete -f cron.yaml
2.8 Hands-On: Scaling Based on a RabbitMQ Message Queue
KEDA supports queue-based autoscaling, for example against RabbitMQ, Kafka, or Redis queues, so that message backlogs can be processed more quickly.
2.8.1 Create a RabbitMQ Instance
[root@k8s-master01 ~]# kubectl create deploy rabbitmq --image=crpi-q1nb2n896zwtcdts.cn-beijing.personal.cr.aliyuncs.com/ywb01/rabbitmq:4.0.5 --dry-run=client -oyaml > rabbitmq.yaml
[root@k8s-master01 ~]# vim rabbitmq.yaml
[root@k8s-master01 ~]# cat rabbitmq.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: rabbitmq
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - image: crpi-q1nb2n896zwtcdts.cn-beijing.personal.cr.aliyuncs.com/ywb01/rabbitmq:4.0.5
        name: rabbitmq
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: LANG
          value: C.UTF-8
        - name: RABBITMQ_DEFAULT_USER
          value: user
        - name: RABBITMQ_DEFAULT_PASS
          value: password
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5672
          name: web
          protocol: TCP
[root@k8s-master01 ~]# kubectl create -f rabbitmq.yaml
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
rabbitmq-6cf8c54879-8hxx6 1/1 Running 0 26s
2.8.2 Create a RabbitMQ Service
# First, create a Service for RabbitMQ
[root@k8s-master01 ~]# vim rabbitmq-svc.yaml
[root@k8s-master01 ~]# cat rabbitmq-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  ports:
  - name: web
    port: 5672
    protocol: TCP
    targetPort: 5672
  - name: http
    port: 15672
    protocol: TCP
    targetPort: 15672
  selector:
    app: rabbitmq
  sessionAffinity: None
  type: NodePort
[root@k8s-master01 ~]# kubectl create -f rabbitmq-svc.yaml
[root@k8s-master01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rabbitmq NodePort 10.109.183.122 <none> 5672:32187/TCP,15672:30132/TCP 5s
Access test: open http://<node IP>:30132 in a browser (the NodePort exposing the RabbitMQ management UI on port 15672).
2.8.3 Use a Job to Simulate Publishing Messages
[root@k8s-master01 ~]# vim rabbitmq-publish-job.yaml
[root@k8s-master01 ~]# cat rabbitmq-publish-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: rabbitmq-publish
spec:
  template:
    spec:
      containers:
      - name: rabbitmq-client
        image: crpi-q1nb2n896zwtcdts.cn-beijing.personal.cr.aliyuncs.com/ywb01/rabbitmq-publish:v1
        imagePullPolicy: IfNotPresent
        command:
        - "send"
        - "amqp://user:password@rabbitmq.default.svc.cluster.local:5672"
        - "10" # number of messages to publish
      restartPolicy: Never
  backoffLimit: 4
[root@k8s-master01 ~]# kubectl create -f rabbitmq-publish-job.yaml
[root@k8s-master01 ~]# kubectl get job
NAME STATUS COMPLETIONS DURATION AGE
rabbitmq-publish Complete 1/1 13s 23s
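As an optional sanity check, you can verify the messages landed in the queue (named hello, which is the queue the ScaledObject below watches) with rabbitmqctl:
[root@k8s-master01 ~]# kubectl exec deploy/rabbitmq -- rabbitmqctl list_queues name messages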
2.8.4 Create a Simulated Message Consumer
[root@k8s-master01 ~]# vim rabbitmq-consumer.yaml
[root@k8s-master01 ~]# cat rabbitmq-consumer.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: rabbitmq-consumer
  name: rabbitmq-consumer
spec:
  selector:
    matchLabels:
      app: rabbitmq-consumer
  template:
    metadata:
      labels:
        app: rabbitmq-consumer
    spec:
      containers:
      - image: crpi-q1nb2n896zwtcdts.cn-beijing.personal.cr.aliyuncs.com/ywb01/rabbitmq-consumer:v1
        name: rabbitmq-consumer
        imagePullPolicy: Always
        command:
        - receive
        args:
        - "amqp://user:password@rabbitmq.default.svc.cluster.local:5672"
# Create the consumer
[root@k8s-master01 ~]# kubectl create -f rabbitmq-consumer.yaml
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
....
rabbitmq-consumer-7bd9dd88f5-rzbsd 1/1 Running 0 13s
The messages have now been consumed.
2.8.5 Create the Authentication Resources
[root@k8s-master01 ~]# vim rabbitmq-ta.yaml
[root@k8s-master01 ~]# cat rabbitmq-ta.yaml
apiVersion: v1
kind: Secret
metadata:
  name: keda-rabbitmq-secret
stringData:
  host: amqp://user:password@rabbitmq.default.svc.cluster.local:5672
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-trigger-auth-rabbitmq-conn
spec:
  secretTargetRef:
  - parameter: host
    name: keda-rabbitmq-secret
    key: host
[root@k8s-master01 ~]# kubectl create -f rabbitmq-ta.yaml
[root@k8s-master01 ~]# kubectl get ta
NAME PODIDENTITY SECRET ENV VAULTADDRESS
keda-trigger-auth-rabbitmq-conn keda-rabbitmq-secret
2.8.6 Create the ScaledObject
[root@k8s-master01 ~]# vim rabbitmq-so.yaml
[root@k8s-master01 ~]# cat rabbitmq-so.yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rabbitmq-scaledobject
spec:
  scaleTargetRef:
    name: rabbitmq-consumer
  pollingInterval: 5  # polling interval, default 5 seconds (use a longer value in production)
  cooldownPeriod: 30  # cooldown period, default 300 seconds (only applies when scaling down to 0 replicas)
  minReplicaCount: 1  # minimum replicas
  maxReplicaCount: 30 # maximum replicas
  triggers:
  - type: rabbitmq
    metadata:
      protocol: amqp
      queueName: hello
      mode: QueueLength
      value: "50"
    authenticationRef:
      name: keda-trigger-auth-rabbitmq-conn
[root@k8s-master01 ~]# kubectl create -f rabbitmq-so.yaml
[root@k8s-master01 ~]# kubectl get so
NAME SCALETARGETKIND SCALETARGETNAME MIN MAX READY ACTIVE FALLBACK PAUSED TRIGGERS AUTHENTICATIONS AGE
rabbitmq-scaledobject apps/v1.Deployment rabbitmq-consumer 1 30 True True False Unknown rabbitmq keda-trigger-auth-rabbitmq-conn 7s
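With mode: QueueLength and value: "50", KEDA targets an average of 50 messages per consumer replica, so the HPA it creates computes roughly:
desiredReplicas = ceil( messagesInQueue / 50 ), e.g. ceil(300 / 50) = 6
which is why publishing 300 messages in the test below scales the consumer to 6 Pods (bounded by minReplicaCount and maxReplicaCount).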
2.8.7 Test
[root@k8s-master01 ~]# kubectl delete -f rabbitmq-publish-job.yaml
job.batch "rabbitmq-publish" deleted
[root@k8s-master01 ~]# vim rabbitmq-publish-job.yaml
[root@k8s-master01 ~]# cat rabbitmq-publish-job.yaml
....
        imagePullPolicy: IfNotPresent
        command:
        - "send"
        - "amqp://user:password@rabbitmq.default.svc.cluster.local:5672"
        - "300" # changed from 10 to 300
      restartPolicy: Never
  backoffLimit: 4
# Publish messages again
[root@k8s-master01 ~]# kubectl create -f rabbitmq-publish-job.yaml
# The consumer has been scaled out to 6 Pods
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
....
rabbitmq-6cf8c54879-8hxx6 1/1 Running 0 23m
rabbitmq-consumer-7bd9dd88f5-96vqr 1/1 Running 0 30s
rabbitmq-consumer-7bd9dd88f5-m8zf2 1/1 Running 0 30s
rabbitmq-consumer-7bd9dd88f5-qt5j5 1/1 Running 0 30s
rabbitmq-consumer-7bd9dd88f5-rzbsd 1/1 Running 0 3m15s
rabbitmq-consumer-7bd9dd88f5-sxr6n 1/1 Running 0 15s
....
The messages are gradually being processed.
# Once the backlog is drained, the Pods are scaled back down to 1
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
rabbitmq-6cf8c54879-8hxx6 1/1 Running 0 29m
rabbitmq-consumer-7bd9dd88f5-rzbsd 1/1 Running 0 8m42s
rabbitmq-publish-8x47k 0/1 Completed 0 5m59s
2.9 Hands-On: Scaling Based on MySQL Data
2.9.1 Create a MySQL Instance
# Create a MySQL instance:
[root@k8s-master01 ~]# kubectl create deployment mysql --image=crpi-q1nb2n896zwtcdts.cn-beijing.personal.cr.aliyuncs.com/ywb01/mysql:8.0.20
# Set the MySQL root password
[root@k8s-master01 ~]# kubectl set env deploy mysql MYSQL_ROOT_PASSWORD=password
# Create the Service
[root@k8s-master01 ~]# kubectl expose deploy mysql --port 3306
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
mysql-74b85f7699-l8tml 1/1 Running 0 10s
[root@k8s-master01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
....
mysql ClusterIP 10.111.123.212 <none> 3306/TCP 58s
2.9.2 Create a Test Database and Table
[root@k8s-master01 ~]# kubectl exec -ti mysql-74b85f7699-l8tml -- bash
root@mysql-74b85f7699-jjkcq:/# mysql -uroot -hmysql -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.20 MySQL Community Server - GPL
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> create database dukuan;
Query OK, 1 row affected (0.01 sec)
mysql> use dukuan;
Database changed
mysql> CREATE TABLE orders (
-> id INT AUTO_INCREMENT PRIMARY KEY,
-> customer_name VARCHAR(100),
-> order_amount DECIMAL(10, 2),
-> status ENUM('pending', 'processed') DEFAULT 'pending',
-> created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
-> );
Query OK, 0 rows affected (0.03 sec)
2.9.3 Create a Program that Simulates Writing Data
# Job that simulates inserting data:
[root@k8s-master01 ~]# kubectl create job insert-orders-job --image=crpi-q1nb2n896zwtcdts.cn-beijing.personal.cr.aliyuncs.com/ywb01/mysql:insert
[root@k8s-master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
insert-orders-job-csc6m 0/1 Completed 0 7s
....
# Check the data: there are many rows in pending status
mysql> select * from orders;
+-----+---------------+--------------+-----------+---------------------+
| id | customer_name | order_amount | status | created_at |
+-----+---------------+--------------+-----------+---------------------+
| 1 | David | 856.00 | processed | 2025-06-27 12:42:14 |
| 2 | Grace | 856.00 | processed | 2025-06-27 12:42:14 |
| 3 | Frank | 856.01 | processed | 2025-06-27 12:42:14 |
| 4 | Judy | 856.00 | pending | 2025-06-27 12:42:14 |
| 5 | Alice | 856.01 | pending | 2025-06-27 12:42:14 |
| 6 | Charlie | 856.01 | pending | 2025-06-27 12:42:14 |
| 7 | Eve | 856.00 | pending | 2025-06-27 12:42:14 |
| 8 | Heidi | 856.01 | processed | 2025-06-27 12:42:14 |
| 9 | Eve | 856.01 | processed | 2025-06-27 12:42:14 |
| 10 | Heidi | 856.01 | pending | 2025-06-27 12:42:14 |
| 11 | David | 856.00 | processed | 2025-06-27 12:42:15 |
....
2.9.4 Create the Data-Processing Program
# Create the data-processing Deployment
[root@k8s-master01 ~]# kubectl create deploy update-orders --image=crpi-q1nb2n896zwtcdts.cn-beijing.personal.cr.aliyuncs.com/ywb01/mysql:process
[root@k8s-master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
....
update-orders-b6b5bc695-rq55m 1/1 Running 0 6s
# Check the data: all rows have been processed
mysql> select * from orders;
+-----+---------------+--------------+-----------+---------------------+
| id | customer_name | order_amount | status | created_at |
+-----+---------------+--------------+-----------+---------------------+
| 1 | David | 856.00 | processed | 2025-06-27 12:42:14 |
| 2 | Grace | 856.00 | processed | 2025-06-27 12:42:14 |
| 3 | Frank | 856.01 | processed | 2025-06-27 12:42:14 |
| 4 | Judy | 856.00 | processed | 2025-06-27 12:42:14 |
| 5 | Alice | 856.01 | processed | 2025-06-27 12:42:14 |
| 6 | Charlie | 856.01 | processed | 2025-06-27 12:42:14 |
| 7 | Eve | 856.00 | processed | 2025-06-27 12:42:14 |
| 8 | Heidi | 856.01 | processed | 2025-06-27 12:42:14 |
| 9 | Eve | 856.01 | processed | 2025-06-27 12:42:14 |
....
2.9.5 Create the Authentication Resources
[root@k8s-master01 ~]# vim mysql-ta.yaml
[root@k8s-master01 ~]# cat mysql-ta.yaml
apiVersion: v1
kind: Secret
metadata:
  name: keda-mysql-secret
stringData:
  mysql_conn_str: root:password@tcp(mysql.default.svc.cluster.local:3306)/dukuan
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-trigger-auth-mysql-conn
spec:
  secretTargetRef:
  - parameter: connectionString
    name: keda-mysql-secret
    key: mysql_conn_str
[root@k8s-master01 ~]# kubectl create -f mysql-ta.yaml
[root@k8s-master01 ~]# kubectl get ta
NAME PODIDENTITY SECRET ENV VAULTADDRESS
keda-trigger-auth-mysql-conn keda-mysql-secret
2.9.6 Create the ScaledObject
[root@k8s-master01 ~]# vim mysql-so.yaml
[root@k8s-master01 ~]# cat mysql-so.yaml
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: mysql-scaledobject
spec:
  scaleTargetRef:
    name: update-orders
  pollingInterval: 5
  cooldownPeriod: 30
  minReplicaCount: 1
  maxReplicaCount: 30
  triggers:
  - type: mysql
    metadata:
      queryValue: "4.4"
      query: "SELECT COUNT(*) FROM orders WHERE status='pending'"
    authenticationRef:
      name: keda-trigger-auth-mysql-conn
[root@k8s-master01 ~]# kubectl create -f mysql-so.yaml
[root@k8s-master01 ~]# kubectl get so
NAME SCALETARGETKIND SCALETARGETNAME MIN MAX READY ACTIVE FALLBACK PAUSED TRIGGERS AUTHENTICATIONS AGE
mysql-scaledobject apps/v1.Deployment update-orders 1 30 True False False Unknown mysql keda-trigger-auth-mysql-conn 28s
[root@k8s-master01 ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
keda-hpa-mysql-scaledobject Deployment/update-orders 0/4400m (avg) 1 30 1 63s
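The 4400m target is simply the milli-value representation of queryValue: "4.4": the HPA aims for an average of 4.4 pending rows per replica, so desiredReplicas = ceil( pendingRows / 4.4 ). As a hypothetical example, about 50 pending rows would give ceil(50 / 4.4) = 12 replicas, capped at maxReplicaCount 30; the actual number depends on how many rows the insert job writes.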
2.9.7 Test
# Insert data again
[root@k8s-master01 ~]# kubectl delete job insert-orders-job
[root@k8s-master01 ~]# kubectl create job insert-orders-job --image=crpi-q1nb2n896zwtcdts.cn-beijing.personal.cr.aliyuncs.com/ywb01/mysql:insert
# Check the scale-out details:
[root@k8s-master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
insert-orders-job-c6c7t 0/1 Completed 0 87s
mysql-74b85f7699-l8tml 1/1 Running 0 21m
update-orders-b6b5bc695-4rcgb 1/1 Running 0 62s
update-orders-b6b5bc695-7b5jz 1/1 Running 0 62s
update-orders-b6b5bc695-982rb 1/1 Running 0 46s
update-orders-b6b5bc695-hfhl5 1/1 Running 0 77s
update-orders-b6b5bc695-lpjmj 1/1 Running 0 77s
update-orders-b6b5bc695-mhvb5 1/1 Running 0 46s
update-orders-b6b5bc695-pmlqt 1/1 Running 0 46s
update-orders-b6b5bc695-r9nd2 1/1 Running 0 46s
update-orders-b6b5bc695-rq55m 1/1 Running 0 16m
update-orders-b6b5bc695-tcngk 1/1 Running 0 62s
update-orders-b6b5bc695-tmqz7 1/1 Running 0 62s
update-orders-b6b5bc695-vvx92 1/1 Running 0 77s
# Once the pending rows are processed, the Pods are scaled back down
[root@k8s-master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
insert-orders-job-c6c7t 0/1 Completed 0 6m47s
mysql-74b85f7699-l8tml 1/1 Running 0 27m
update-orders-b6b5bc695-4rcgb 1/1 Running 0 6m22s
2.10 Hands-On: Task Processing with ScaledJob
KEDA's ScaledJob can be used for one-off or ad-hoc task processing, for example processing data such as images or videos.
Suppose we need to fetch items from a Redis queue and process them; this can be implemented with a ScaledJob.
2.10.1 Create a Redis Instance
[root@k8s-master01 ~]# helm repo add bitnami https://charts.bitnami.com/bitnami
[root@k8s-master01 ~]# helm upgrade --install redis bitnami/redis --set global.imageRegistry=docker.kubeasy.com --set global.redis.password=dukuan --set architecture=standalone --set master.persistence.enabled=false --version 20.1.6
[root@k8s-master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
redis-master-0 1/1 Running 0 2m34s
2.10.2 Create the Authentication Resources
[root@k8s-master01 ~]# vim redis-ta.yaml
[root@k8s-master01 ~]# cat redis-ta.yaml
apiVersion: v1
kind: Secret
metadata:
  name: keda-redis-secret
type: Opaque
stringData:
  redis_username: ""
  redis_password: "dukuan"
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-trigger-auth-redis-conn
spec:
  secretTargetRef:
  - parameter: username
    name: keda-redis-secret
    key: redis_username
  - parameter: password
    name: keda-redis-secret
    key: redis_password
[root@k8s-master01 ~]# kubectl create -f redis-ta.yaml
triggerauthentication.keda.sh/keda-trigger-auth-redis-conn created
[root@k8s-master01 ~]# kubectl get ta
NAME PODIDENTITY SECRET ENV VAULTADDRESS
keda-trigger-auth-redis-conn keda-redis-secret
2.10.3 Simulate Writing Data
# Write test data:
[root@k8s-master01 ~]# kubectl exec -ti redis-master-0 -- bash
I have no name!@redis-master-0:/$ redis-cli -h redis-master -a dukuan
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
redis-master:6379> LPUSH test_list "t1" "t2" "t3"
(integer) 3
redis-master:6379> LRANGE test_list 0 2
1) "t3"
2) "t2"
3) "t1"
# Read data back:
redis-master:6379> RPOP test_list
"t1"
redis-master:6379> LPOP test_list
"t3"
2.10.4 Create the ScaledJob
[root@k8s-master01 ~]# vim redis-sj.yaml
[root@k8s-master01 ~]# cat redis-sj.yaml
---
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: reids-scaledjob
spec:
  jobTargetRef:
    parallelism: 1
    completions: 1
    backoffLimit: 4
    template:
      spec:
        containers:
        - name: redis-queue-consumer
          image: crpi-q1nb2n896zwtcdts.cn-beijing.personal.cr.aliyuncs.com/ywb01/redis:process
        restartPolicy: Never # Job Pods require Never or OnFailure
  pollingInterval: 30
  successfulJobsHistoryLimit: 3
  minReplicaCount: 3
  maxReplicaCount: 5
  triggers:
  - type: redis
    metadata:
      address: redis-master.default.svc.cluster.local:6379
      listName: test_list
      listLength: "5"
    authenticationRef:
      name: keda-trigger-auth-redis-conn
[root@k8s-master01 ~]# kubectl create -f redis-sj.yaml
[root@k8s-master01 ~]# kubectl get sj
NAME MIN MAX READY ACTIVE PAUSED TRIGGERS AUTHENTICATIONS AGE
reids-scaledjob 3 5 True True Unknown redis keda-trigger-auth-redis-conn 117s
2.10.5 Test
# Write test data:
redis-master:6379> LPUSH test_list "t1" "t2" "t3"
(integer) 223
redis-master:6379> LPUSH test_list "t1" "t2" "t3"
(integer) 226
redis-master:6379> LPUSH test_list "t1" "t2" "t3"
(integer) 229
redis-master:6379> LPUSH test_list "t1" "t2" "t3"
(integer) 232
redis-master:6379> LPUSH test_list "t1" "t2" "t3"
(integer) 235
redis-master:6379> LRANGE test_list 0 8
1) "t3"
2) "t2"
3) "t1"
4) "t3"
5) "t2"
6) "t1"
7) "t3"
8) "t2"
9) "t1"
# Check the Jobs that were created:
[root@k8s-master01 ~]# kubectl get job
NAME STATUS COMPLETIONS DURATION AGE
reids-scaledjob-6hvls Complete 1/1 32s 49s
reids-scaledjob-gnlwl Complete 1/1 32s 49s
reids-scaledjob-mq9c7 Complete 1/1 32s 49s
# Check the Pods:
[root@k8s-master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
redis-master-0 1/1 Running 0 7m42s
reids-scaledjob-6hvls-9hcfv 0/1 Completed 0 45s
reids-scaledjob-gnlwl-9htzg 0/1 Completed 0 45s
reids-scaledjob-mq9c7-tgtcl 0/1 Completed 0 45s
This post is based on: https://edu.51cto.com/lecturer/11062970.html