Deploying a Redis Master-Replica Cluster on K8s

Published: 2025-09-15
I. Redis Cluster Overview

Generally speaking, Redis can be deployed in three modes.

1) Standalone mode: a single instance, typically used in test environments.

2) Sentinel mode: before Redis 3.0, high availability was usually achieved with the Sentinel tool, which monitors the master's state. If the master fails, Sentinel performs a failover and promotes one of the replicas to master. The extra sentinel nodes make deployment more complex and maintenance more expensive, and overall performance and availability are middling.

3) Cluster mode: the distributed Redis Cluster solution introduced in 3.0. Masters serve reads and writes; replicas act as standbys that serve no requests and exist only for failover. If a master fails, one of its replicas is automatically promoted to master.

The latter two are suitable for production, but on the whole cluster mode is clearly superior to sentinel mode. This article walks through deploying a Redis cluster (three masters, three replicas) on Kubernetes.

II. Install NFS

1. Install NFS on the master node (any other node also works) to provide storage

yum -y install nfs-utils rpcbind

2. Create the NFS shared directories

mkdir -p /var/nfs/redis/pv{1..6}

3. Configure the shared directories

vim  /etc/exports
 
/var/nfs/redis/pv1  *(rw,sync,no_root_squash)
/var/nfs/redis/pv2  *(rw,sync,no_root_squash)
/var/nfs/redis/pv3  *(rw,sync,no_root_squash)
/var/nfs/redis/pv4  *(rw,sync,no_root_squash)
/var/nfs/redis/pv5  *(rw,sync,no_root_squash)
/var/nfs/redis/pv6  *(rw,sync,no_root_squash)
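The six export lines above follow one pattern, so as a sketch they can be generated with a loop; review the printed output, then append it to /etc/exports yourself:

```shell
# Print one export line per PV directory; review the output,
# then append it to /etc/exports and run `exportfs -r`.
lines=$(for i in 1 2 3 4 5 6; do
  printf '/var/nfs/redis/pv%d  *(rw,sync,no_root_squash)\n' "$i"
done)
printf '%s\n' "$lines"
```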

4. Reload the export configuration

exportfs -r

5. List all exported directories

exportfs -v

6. Start and enable the NFS services

systemctl start nfs-server
systemctl enable nfs-server
systemctl start rpcbind
systemctl enable rpcbind

7. Install nfs-utils on every other node

yum -y install nfs-utils

III. Create the PV Volumes

1. Create the namespace

kubectl create ns redis-cluster

2. Create the ServiceAccount and RBAC for the NFS client

vim redis-nfs-client-sa.yaml
 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: redis-nfs-client
  namespace: redis-cluster
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get","list","watch","create","delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get","list","watch","create","delete"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get","list","watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get","list","watch","create","update","patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create","delete","get","list","watch","patch","update"]
 
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: redis-nfs-client
    namespace: redis-cluster
roleRef:
  kind: ClusterRole
  name: nfs-client-runner
  apiGroup: rbac.authorization.k8s.io

3. Create the NFS client provisioner

vim redis-nfs-client.yaml
 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-nfs-client
  labels:
    app: redis-nfs-client
  # replace with namespace where provisioner is deployed
  namespace: redis-cluster
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: redis-nfs-client
  template:
    metadata:
      labels:
        app: redis-nfs-client
    spec:
      serviceAccountName: redis-nfs-client
      containers:
        - name: redis-nfs-client
          ## image: quay.io/external_storage/nfs-client-provisioner:latest
          image: crpi-pl50bq1hplpmj5tc.cn-hangzhou.personal.cr.aliyuncs.com/aliyun-lee/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: redis-nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME   ## must match the provisioner name in the StorageClass
              value:  my-redis-nfs
            - name: ENABLE_LEADER_ELECTION  ## allow leader election for HA; optional when replicas is 1
              value: "True"
            - name: NFS_SERVER
              value: 192.168.1.81  # change to your NFS server IP
            - name: NFS_PATH
              value: /var/nfs/redis     # change to your NFS export directory
      volumes:
        - name: redis-nfs-client-root
          nfs:
            server: 192.168.1.81 # change to your NFS server IP
            path: /var/nfs/redis     # change to your NFS export directory

4. Create the StorageClass

vim redis-storeclass.yaml
 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: redis-nfs-storage
provisioner: my-redis-nfs

5. Apply the ServiceAccount/RBAC, the NFS client, and the StorageClass

kubectl apply -f  redis-nfs-client-sa.yaml
 
kubectl apply -f  redis-nfs-client.yaml
 
kubectl apply -f  redis-storeclass.yaml

6. Create the PVs

Create six PV volumes, one for each of the three Redis masters and three replicas. Note: the StatefulSet later in this article requests storageClassName: redis-nfs-storage (dynamic provisioning); to bind these statically pre-created PVs instead, set storageClassName: "" in its volumeClaimTemplates as well, otherwise the provisioner will create fresh PVs and these six will stay unbound.

vim redis-pv.yaml
 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-nfs-pv1
spec:
  capacity:
    storage: 50M
  accessModes:
    - ReadWriteMany
  storageClassName: ""  # must be an empty string
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.81
    path: /var/nfs/redis/pv1
  claimRef:  # key: pre-bind this PV to its PVC
    namespace: redis-cluster
    name: redis-data-redis-0  # PVC name generated by the StatefulSet
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-nfs-pv2
spec:
  capacity:
    storage: 50M
  accessModes:
    - ReadWriteMany
  storageClassName: ""  # must be an empty string
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.81
    path: /var/nfs/redis/pv2
  claimRef:  # key: pre-bind this PV to its PVC
    namespace: redis-cluster
    name: redis-data-redis-1  # PVC name generated by the StatefulSet
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-nfs-pv3
spec:
  capacity:
    storage: 50M
  accessModes:
    - ReadWriteMany
  storageClassName: ""  # must be an empty string
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.81
    path: /var/nfs/redis/pv3
  claimRef:  # key: pre-bind this PV to its PVC
    namespace: redis-cluster
    name: redis-data-redis-2  # PVC name generated by the StatefulSet
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-nfs-pv4
spec:
  capacity:
    storage: 50M
  accessModes:
    - ReadWriteMany
  storageClassName: ""  # must be an empty string
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.81
    path: /var/nfs/redis/pv4
  claimRef:  # key: pre-bind this PV to its PVC
    namespace: redis-cluster
    name: redis-data-redis-3  # PVC name generated by the StatefulSet
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-nfs-pv5
spec:
  capacity:
    storage: 50M
  accessModes:
    - ReadWriteMany
  storageClassName: ""  # must be an empty string
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.81
    path: /var/nfs/redis/pv5
  claimRef:  # key: pre-bind this PV to its PVC
    namespace: redis-cluster
    name: redis-data-redis-4  # PVC name generated by the StatefulSet
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-nfs-pv6
spec:
  capacity:
    storage: 50M
  accessModes:
    - ReadWriteMany
  storageClassName: ""  # must be an empty string
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.81
    path: /var/nfs/redis/pv6
  claimRef:  # key: pre-bind this PV to its PVC
    namespace: redis-cluster
    name: redis-data-redis-5  # PVC name generated by the StatefulSet

kubectl apply -f redis-pv.yaml
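Hand-writing six nearly identical PV manifests is error-prone. As a sketch (assuming the same NFS server IP and directory layout as above), the whole redis-pv.yaml can be generated with a loop; note that PV n pre-binds to PVC redis-data-redis-(n-1), since StatefulSet ordinals start at 0:

```shell
# Generate redis-pv.yaml with six pre-bound PVs.
NFS_SERVER=192.168.1.81          # change to your NFS server IP
OUT=redis-pv.yaml
: > "$OUT"                       # truncate the output file
for i in 1 2 3 4 5 6; do
  cat >> "$OUT" <<EOF
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-nfs-pv$i
spec:
  capacity:
    storage: 50M
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: $NFS_SERVER
    path: /var/nfs/redis/pv$i
  claimRef:
    namespace: redis-cluster
    name: redis-data-redis-$((i-1))
EOF
done
```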

IV. Build the Redis Cluster

1. Create the headless Service

vim redis-hs.yaml
 
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s.kuboard.cn/layer: db
    k8s.kuboard.cn/name: redis
  name: redis-hs
  namespace: redis-cluster
spec:
  ports:
    - name: redis
      port: 6379
      protocol: TCP
      targetPort: 6379
  selector:
    k8s.kuboard.cn/layer: db
    k8s.kuboard.cn/name: redis
  clusterIP: None
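Because clusterIP is None, this is a headless Service: instead of load-balancing, DNS gives each StatefulSet pod a stable name of the form &lt;pod&gt;.&lt;service&gt;.&lt;namespace&gt;.svc.cluster.local. A quick sketch of the names this setup produces (used later when creating the cluster):

```shell
# Stable per-pod DNS names provided by the redis-hs headless service.
names=$(for i in 0 1 2 3 4 5; do
  echo "redis-$i.redis-hs.redis-cluster.svc.cluster.local"
done)
printf '%s\n' "$names"
```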

2. Create the redis.conf configuration

bind 0.0.0.0
port 6379
daemonize no
 
# requirepass redis-cluster
# cluster settings
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000

3. Create a ConfigMap named redis-conf

kubectl create configmap redis-conf --from-file=redis.conf -n redis-cluster

4. Create the Redis StatefulSet

vim redis.yaml
 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  namespace: redis-cluster
  labels:
    k8s.kuboard.cn/layer: db
    k8s.kuboard.cn/name: redis
spec:
  replicas: 6
  selector:
    matchLabels:
      k8s.kuboard.cn/layer: db
      k8s.kuboard.cn/name: redis
  serviceName: redis-hs
  template:
    metadata:
      labels:
        k8s.kuboard.cn/layer: db
        k8s.kuboard.cn/name: redis
    spec:
      terminationGracePeriodSeconds: 20
      containers:
        - name: redis
          image: redis  # consider pinning a fixed tag in production
          command:
              - "redis-server"
          args:
              - "/etc/redis/redis.conf"
              - "--protected-mode"
              - "no"
# Added: --cluster-announce-ip works around pod IPs changing after restarts (to be verified)
              - "--cluster-announce-ip"
              - "$(POD_IP)"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
#1. The --cluster-announce-ip startup argument takes its value from the POD_IP environment variable. Note that $(POD_IP) uses parentheses, not braces.
#2. POD_IP in turn comes from status.podIP, the pod's current IP. A pod may receive a different IP each time it starts; status.podIP exposes the current one so the node can announce it to the cluster.
# Reference: https://github.com/redis/redis/issues/4289

          ports:
            - name: redis
              containerPort: 6379
              protocol: "TCP"
            - name: cluster
              containerPort: 16379
              protocol: "TCP"
          volumeMounts:
            - name: "redis-conf"
              mountPath: "/etc/redis"
            - name: "redis-data"
              mountPath: "/data"
      volumes:
        - name: "redis-conf"
          configMap:
             name: "redis-conf"
             items:
                - key: "redis.conf"
                  path: "redis.conf"
  volumeClaimTemplates:
    - metadata:
        name: redis-data
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 50M
        storageClassName: redis-nfs-storage  # set to "" instead to bind the static PVs created above

5. Apply the Service and the StatefulSet

kubectl apply -f  redis-hs.yaml
 
kubectl apply -f  redis.yaml

6. Check cluster status

1. Enter the container (you can also pick the pod in the k8s dashboard and open a bash or sh shell), then cd /usr/local/bin/
kubectl exec -it redis-0 -n redis-cluster -- bash

2. Connect to the Redis node: redis-cli -c

3. Check the cluster state: cluster info
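A healthy cluster reports cluster_state:ok. As a sketch, the field can be pulled out of captured `cluster info` output with awk (the values below are illustrative, not from a real run):

```shell
# Illustrative `cluster info` output captured from redis-cli.
info='cluster_state:ok
cluster_slots_assigned:16384
cluster_known_nodes:6
cluster_size:3'

# Extract the cluster_state field.
state=$(printf '%s\n' "$info" | awk -F: '/^cluster_state:/ {print $2}')
echo "cluster state: $state"
```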

V. Initialize the Cluster

1. Create the three-master, three-replica cluster

kubectl exec -it redis-0 -n redis-cluster -- bash

redis-cli --cluster create \
  redis-0.redis-hs.redis-cluster:6379 \
  redis-1.redis-hs.redis-cluster:6379 \
  redis-2.redis-hs.redis-cluster:6379 \
  redis-3.redis-hs.redis-cluster:6379 \
  redis-4.redis-hs.redis-cluster:6379 \
  redis-5.redis-hs.redis-cluster:6379 \
  --cluster-replicas 1

2. Check cluster status

1. Enter the container (you can also pick the pod in the k8s dashboard and open a bash or sh shell), then cd /usr/local/bin/
kubectl exec -it redis-0 -n redis-cluster -- bash
2. Connect to the Redis node: redis-cli -c
3. Check the cluster state: cluster info
4. List the cluster nodes: cluster nodes
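For a three-master, three-replica cluster, `cluster nodes` should list exactly three masters and three slaves. A sketch that counts the roles from captured output (the node IDs and IPs below are made up):

```shell
# Illustrative `cluster nodes` output; the third field holds the role flags.
nodes='aaa1 10.244.1.46:6379@16379 myself,master - 0 0 1 connected 0-5460
bbb2 10.244.1.47:6379@16379 master - 0 0 2 connected 5461-10922
ccc3 10.244.2.41:6379@16379 master - 0 0 3 connected 10923-16383
ddd4 10.244.1.48:6379@16379 slave aaa1 0 0 1 connected
eee5 10.244.2.43:6379@16379 slave bbb2 0 0 2 connected
fff6 10.244.2.42:6379@16379 slave ccc3 0 0 3 connected'

# Count nodes whose flags field contains "master" or "slave".
masters=$(printf '%s\n' "$nodes" | awk '$3 ~ /master/ {n++} END {print n+0}')
slaves=$(printf '%s\n' "$nodes" | awk '$3 ~ /slave/ {n++} END {print n+0}')
echo "$masters masters, $slaves slaves"
```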

3. Rebuild the cluster

(1) Clean the data inside every Redis pod
# loop over all Redis pods
for pod in redis-0 redis-1 redis-2 redis-3 redis-4 redis-5; do
  kubectl exec -it $pod -n redis-cluster -- redis-cli FLUSHALL        # flush all databases
  kubectl exec -it $pod -n redis-cluster -- redis-cli CLUSTER RESET   # reset the cluster configuration
done
(2) Adjusted rebuild command
redis-cli --cluster create \
  redis-0.redis-hs.redis-cluster.svc.cluster.local:6379 \
  redis-1.redis-hs.redis-cluster.svc.cluster.local:6379 \
  redis-2.redis-hs.redis-cluster.svc.cluster.local:6379 \
  redis-3.redis-hs.redis-cluster.svc.cluster.local:6379 \
  redis-4.redis-hs.redis-cluster.svc.cluster.local:6379 \
  redis-5.redis-hs.redis-cluster.svc.cluster.local:6379 \
  --cluster-replicas 1 \
  --cluster-yes    # auto-confirm the slot assignment (optional)

VI. Test Master-Replica Failover
1. Enter the container: kubectl exec -it redis-0 -n redis-cluster -- bash
2. Connect to the Redis node: redis-cli -c
3. Check the node's role: role
127.0.0.1:6379> role
1) "master"
2) (integer) 27342
3) 1) 1) "10.244.2.42"
      2) "6379"
      3) "27342"
As the output shows, our redis-0 node is a master, and its replica's IP is 10.244.2.42, which is our redis-3 pod.
You can verify this by checking the role on the redis-3 node.

4. Delete the redis-0 master pod and observe what happens

First, inspect the redis-0 pod

kubectl get pod redis-0 -n redis-cluster -o wide

Then delete the pod

kubectl delete pod redis-0 -n redis-cluster

Enter redis-0 and redis-3 again and check their role information.

VII. Expose an External Port

vim redis-access-service.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: redis-access-service
  labels:
    app: redis-outip
spec:
  type: NodePort  # add this line
  ports:
    - name: redis-port
      protocol: "TCP"
      port: 6379
      targetPort: 6379
      nodePort: 30379
  selector:
    k8s.kuboard.cn/layer: db
    k8s.kuboard.cn/name: redis

kubectl apply -f redis-access-service.yaml -n redis-cluster

VIII. Configure the Redis Cluster in the Application (IDEA)

Make sure the cluster node list is configured in application.yml or application.properties, rather than a single-node address.

spring:
  redis:
    cluster:
    # any one of the three node settings below works
#      nodes: 10.244.1.46:6379,10.244.1.47:6379,10.244.2.41:6379,10.244.1.48:6379,10.244.2.43:6379,10.244.2.42:6379  # all master and replica node addresses
#      nodes: redis-0.redis-hs.redis-cluster:6379,redis-1.redis-hs.redis-cluster:6379,redis-2.redis-hs.redis-cluster:6379,redis-3.redis-hs.redis-cluster:6379,redis-4.redis-hs.redis-cluster:6379,redis-5.redis-hs.redis-cluster:6379  # all master and replica node addresses
      nodes: redis-hs.redis-cluster:6379  # the headless service name resolves to all nodes
      max-redirects: 3  # maximum number of redirects
    timeout: 5000     # timeout in milliseconds
    lettuce:
      pool:
        enabled: true   # enable connection pooling
        max-active: 8
        max-idle: 8