Kubernetes + GlusterFS Usage Examples

Published: 2023-03-28

GlusterFS Deployment

▶ yum install centos-release-gluster -y
▶ yum install -y glusterfs-server
▶ mkfs.xfs /dev/vdd
▶ mkdir /gluster
▶ vim /etc/fstab
/dev/vdd /gluster xfs defaults 1 2
▶ mount -a && mount
▶ systemctl enable glusterd
▶ systemctl start glusterd
▶ systemctl status glusterd
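The install, disk, and service steps above must be repeated on every node. Before peering, glusterd should be active cluster-wide; a quick check from the first node (this assumes root SSH between the nodes, which the heketi setup below needs anyway):
▶ for node in sibat-kubernetes-0{2..5}; do ssh $node systemctl is-active glusterd; done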
▶ gluster peer probe sibat-kubernetes-02
▶ gluster peer probe sibat-kubernetes-03
▶ gluster peer probe sibat-kubernetes-04
▶ gluster peer probe sibat-kubernetes-05
▶ gluster peer status
▶ gluster volume create k8s-data replica 5 transport tcp sibat-kubernetes-01:/gluster sibat-kubernetes-02:/gluster sibat-kubernetes-03:/gluster sibat-kubernetes-04:/gluster sibat-kubernetes-05:/gluster force
▶ gluster volume list
k8s-data
▶ gluster volume info
Volume Name: k8s-data
Type: Replicate
Volume ID: 658b9a86-6cf8-4dcc-a8b8-e581381d5608
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 5 = 5
Transport-type: tcp
Bricks:
Brick1: sibat-kubernetes-01:/gluster
Brick2: sibat-kubernetes-02:/gluster
Brick3: sibat-kubernetes-03:/gluster
Brick4: sibat-kubernetes-04:/gluster
Brick5: sibat-kubernetes-05:/gluster
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
▶ gluster volume start k8s-data
volume start: k8s-data: success
▶ gluster volume quota k8s-data enable
volume quota : success
▶ gluster volume quota k8s-data limit-usage / 10000GB
volume quota : success
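To sanity-check the volume before handing it to Kubernetes, it can be mounted manually from any node (/mnt/k8s-data is just an illustrative mount point) and the quota verified:
▶ mkdir -p /mnt/k8s-data
▶ mount -t glusterfs sibat-kubernetes-01:/k8s-data /mnt/k8s-data
▶ gluster volume quota k8s-data list
▶ umount /mnt/k8s-data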
Heketi Deployment
▶ yum install -y heketi heketi-client
▶ cat /etc/heketi/heketi.json
{
  "port": "18080",
  "use_auth": true,
  "jwt": {
    "admin": {
      "key": "brunutRaspuWRe1404"
    },
    "user": {
      "key": "brunutR2020"
    }
  },
  "glusterfs": {
    "executor": "ssh",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },
    "db": "/var/lib/heketi/heketi.db",
    "loglevel" : "debug"
  }
}
▶ cat /etc/heketi/topology.json

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["sibat-kubernetes-1"],
              "storage": ["192.168.233.11"]
            },
            "zone": 1
          },
          "devices": ["/dev/vdc"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["sibat-kubernetes-2"],
              "storage": ["192.168.233.212"]
            },
            "zone": 1
          },
          "devices": ["/dev/vdc"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["sibat-kubernetes-3"],
              "storage": ["192.168.233.108"]
            },
            "zone": 1
          },
          "devices": ["/dev/vdc"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["sibat-kubernetes-4"],
              "storage": ["192.168.233.64"]
            },
            "zone": 1
          },
          "devices": ["/dev/vdc"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["sibat-kubernetes-5"],
              "storage": ["192.168.233.96"]
            },
            "zone": 1
          },
          "devices": ["/dev/vdc"]
        }
      ]
    }
  ]
}
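Heketi manages each listed device itself with LVM, so the devices must be bare block devices with no filesystem or partition table. Note that this file lists /dev/vdc while the load output below shows /dev/vdd; use whichever disk is actually dedicated to heketi on your nodes, and clear it first if it has been used before (destructive):
▶ wipefs -a /dev/vdc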
▶ ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
Generating public/private rsa key pair.
Your identification has been saved in /etc/heketi/heketi_key.
Your public key has been saved in /etc/heketi/heketi_key.pub.
The key fingerprint is:
SHA256:KphU9aFUhWCzm75QA1grMIm6MWILFoMGVPDAQcBYYFM root@sibat-kubernetes-1
▶ chown heketi:heketi /etc/heketi/heketi*

The public key must be distributed to every node so heketi can run gluster commands over SSH as root (the loop below assumes root SSH logins are permitted in this environment), and then the service started:

▶ for node in sibat-kubernetes-0{1..5}; do ssh-copy-id -i /etc/heketi/heketi_key.pub root@$node; done
▶ systemctl enable --now heketi
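Before loading the topology, it is worth confirming the service answers on its configured port; heketi exposes a /hello health-check endpoint for this, which should return a short greeting:
▶ curl http://192.168.233.247:18080/hello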
▶ heketi-cli --server http://192.168.233.247:18080 --user admin --secret brunutRaspuWRe1404 topology load --json=/etc/heketi/topology.json
    Found node sibat-kubernetes-01 on cluster 0c085268e5bc20f7ac434d6aaddc4ca6
        Adding device /dev/vdd ... OK
    Found node sibat-kubernetes-03 on cluster 0c085268e5bc20f7ac434d6aaddc4ca6
        Adding device /dev/vdd ... OK
    Found node sibat-kubernetes-02 on cluster 0c085268e5bc20f7ac434d6aaddc4ca6
        Adding device /dev/vdd ... Unable to add device: Setup of device /dev/vdd failed (already initialized or contains data?):   Device /dev/vdd not found.
    Found node sibat-kubernetes-04 on cluster 0c085268e5bc20f7ac434d6aaddc4ca6
        Adding device /dev/vdd ... OK
    Found node sibat-kubernetes-05 on cluster 0c085268e5bc20f7ac434d6aaddc4ca6
        Adding device /dev/vdd ... OK

The failure on sibat-kubernetes-02 means that node has no /dev/vdd block device (or the disk already carries data); attach or wipe the disk on that node and re-run the same topology load, which skips nodes and devices that were already added.
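The heketi-cli calls below omit --server, --user, and --secret; heketi-cli can also read these from environment variables, so exporting them once (values from heketi.json above) keeps the commands short:
▶ export HEKETI_CLI_SERVER=http://192.168.233.247:18080
▶ export HEKETI_CLI_USER=admin
▶ export HEKETI_CLI_KEY=brunutRaspuWRe1404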
▶ heketi-cli cluster info 0c085268e5bc20f7ac434d6aaddc4ca6
Cluster id: 0c085268e5bc20f7ac434d6aaddc4ca6
Nodes:
5d27f7967a32032fc7343ef51f9c139e
64f24f93443adbccb47bf0dc52a8ca85
987050d88911b2d8dec7faf796f88b76
a6d29c2fd347dae4a646cde5937c5dd6
a8f08e5c6fb6c1020796efcbcad9c06a
Volumes:
Block: true
File: true
▶ heketi-cli volume create --size=5
Name: vol_b91e90468865c4bf1518b6943882be5e
Size: 5
Volume Id: b91e90468865c4bf1518b6943882be5e
Cluster Id: 0c085268e5bc20f7ac434d6aaddc4ca6
Mount: 192.168.233.247:vol_b91e90468865c4bf1518b6943882be5e
Mount Options: backup-volfile-servers=192.168.233.69,192.168.233.142,192.168.233.194,192.168.233.64
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3
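The 5 GB volume above is only a smoke test of the heketi-to-GlusterFS path; it can be removed again by its Volume Id (this relies on the exported HEKETI_CLI_* variables above):
▶ heketi-cli volume delete b91e90468865c4bf1518b6943882be5e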

Creating the StorageClass

▶ vim gluster-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
parameters:
  resturl: "http://192.168.233.11:18080"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "brunutRaspuWRe1404"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3" 
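With the manifest applied, the class should appear and, given the default-class annotations, be marked as the default:
▶ kubectl apply -f gluster-sc.yaml
▶ kubectl get storageclass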

Usage Example: Jenkins StatefulSet

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
  namespace: devops
  labels:
    name: jenkins
spec:
  selector:
    matchLabels:
      name: jenkins
  serviceName: jenkins
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: jenkins
      labels:
        name: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
            - containerPort: 50000
          resources:
            limits:
              cpu: 4
              memory: 8Gi
            requests:
              cpu: 0.5
              memory: 500Mi
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              # value: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
              value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
            - name: JENKINS_OPTS
              value: --prefix=/jenkins
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          livenessProbe:
            httpGet:
              path: /jenkins/login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
          readinessProbe:
            httpGet:
              path: /jenkins/login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
      securityContext:
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: jenkins-home
      annotations:
        volume.beta.kubernetes.io/storage-class: glusterfs
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: devops
spec:
  type: NodePort
  selector:
    name: jenkins
  # ensure the client ip is propagated to avoid the invalid crumb issue when using LoadBalancer (k8s >=1.7)
  #externalTrafficPolicy: Local
  ports:
    -
      name: http
      port: 80
      targetPort: 8080
      nodePort: 32000
      protocol: TCP
    -
      name: agent
      port: 50000
      protocol: TCP
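Assuming the StatefulSet and Service above are saved together as jenkins.yaml (an illustrative filename), and that the devops namespace and the jenkins ServiceAccount referenced by the pod already exist, the rollout can be applied and followed with:
▶ kubectl apply -f jenkins.yaml
▶ kubectl -n devops get pvc,pod -w
Once the PVC is Bound and the pod is Ready, Jenkins is reachable on any node at http://<node-ip>:32000/jenkins.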

PVC Usage Example

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jnlp-mvn
  namespace: devops
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: glusterfs
  resources:
    requests:
      storage: 50Gi
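After the claim is created, the glusterfs provisioner should create a matching PV dynamically; the claim is usable once it reports Bound:
▶ kubectl -n devops get pvc jnlp-mvn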
