
K8S Volumes [Local Disk]


Kubernetes supports many Volume types; see the official documentation at https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes for the full list. This article covers only a few of them: emptyDir, local, and hostPath.

local

A local volume makes the Volume and the Pod land on the same Node, keeping data access local. This suits workloads with demanding disk read/write requirements, such as databases.

The risk of this approach is that both the Volume and the Pod are pinned to one fixed Node (dynamic provisioning is not supported yet). If that Node fails, both the Volume and the Pod are affected, so availability drops.

First, create a StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Here WaitForFirstConsumer means the Volume is not bound immediately; binding is delayed until a Pod that uses it is scheduled.
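
Assuming the manifest above is saved as local-storage-class.yaml (the file name is illustrative), a quick sketch of applying it and checking the binding mode:

kubectl apply -f local-storage-class.yaml
kubectl get storageclass local-storage
# the VOLUMEBINDINGMODE column should show WaitForFirstConsumer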

Create a PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: tidb-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  # a local volume is tied to a single node, so ReadWriteOnce is the appropriate mode
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-node

This PV uses the /mnt/disks directory on the k8s-node node, and its storageClassName refers to the local-storage StorageClass created above.
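
As a quick check (assuming the manifest is saved as tidb-pv.yaml, an illustrative name), the PV should show up as Available until a claim binds it:

kubectl apply -f tidb-pv.yaml
kubectl get pv tidb-pv
# STATUS should be Available, CLAIM empty, STORAGECLASS local-storage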

Create a PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tidb-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage

By setting storageClassName to the local-storage class created above, binding of the PVC to a PV is delayed until a Pod is scheduled. This ensures a suitable PV can be picked so that the Volume and the Pod end up on the same Node.
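
Because of WaitForFirstConsumer, the PVC stays Pending until a Pod consumes it. A sketch of what to expect (the file name tidb-pvc.yaml is illustrative):

kubectl apply -f tidb-pvc.yaml
kubectl get pvc tidb-pvc
# STATUS stays Pending for now
kubectl describe pvc tidb-pvc
# Events should include: waiting for first consumer to be created before binding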

Create a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    persistentVolumeClaim:
      claimName: tidb-pvc

When this Pod is created, tidb-pvc is bound to a suitable PV so that the Pod and the Volume run on the same Node.
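
To confirm the delayed binding worked, a sketch of the expected state:

kubectl get pvc tidb-pvc
# STATUS should now be Bound, with tidb-pv as the volume
kubectl get pod test-pd -o wide
# the NODE column should show k8s-node, matching the PV's nodeAffinity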

Using local-volume-provisioner to create PVs automatically

local-volume-provisioner is deployed as a DaemonSet, which starts a Pod on every Kubernetes worker node; see https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner. First, download local-volume-provisioner.yaml with the following content:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "local-storage"
provisioner: "kubernetes.io/no-provisioner"
volumeBindingMode: "WaitForFirstConsumer"

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: kube-system
data:
  setPVOwnerRef: "true"
  nodeLabelsForPV: |
    - kubernetes.io/hostname
  storageClassMap: |
    local-storage:
      hostDir: /mnt/disks
      mountDir: /mnt/disks

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: local-volume-provisioner
  namespace: kube-system
  labels:
    app: local-volume-provisioner
spec:
  selector:
    matchLabels:
      app: local-volume-provisioner
  template:
    metadata:
      labels:
        app: local-volume-provisioner
    spec:
      serviceAccountName: local-storage-admin
      containers:
        - image: "quay.io/external_storage/local-volume-provisioner:v2.3.4"
          name: provisioner
          securityContext:
            privileged: true
          env:
          - name: MY_NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: MY_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: JOB_CONTAINER_IMAGE
            value: "quay.io/external_storage/local-volume-provisioner:v2.3.4"
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
            limits:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - mountPath: /etc/provisioner/config
              name: provisioner-config
              readOnly: true
            # mounting /dev in DinD environment would fail
            # - mountPath: /dev
            #   name: provisioner-dev
            - mountPath: /mnt/disks
              name: local-disks
              mountPropagation: "HostToContainer"
      volumes:
        - name: provisioner-config
          configMap:
            name: local-provisioner-config
        # - name: provisioner-dev
        #   hostPath:
        #     path: /dev
        - name: local-disks
          hostPath:
            path: /mnt/disks

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-storage-admin
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-storage-provisioner-pv-binding
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: local-storage-admin
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:persistent-volume-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-storage-provisioner-node-clusterrole
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-storage-provisioner-node-binding
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: local-storage-admin
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: local-storage-provisioner-node-clusterrole
  apiGroup: rbac.authorization.k8s.io

Mount the disks. The provisioner does not provide local volumes by itself; rather, the provisioner Pod on each node dynamically "discovers" mount points under the discovery directory. When the provisioner on a node finds a mount point under /mnt/disks, it creates a PV whose local.path is that mount point, with nodeAffinity set to that node. Because only mount points are discovered (plain sub-directories are not), the following script, lvp.sh, uses bind mounts to create them:

#!/bin/bash
# Create 5 mount points under the discovery directory /mnt/disks.
# Bind mounts are used because the provisioner only discovers
# mount points, not plain sub-directories.
for i in $(seq 1 5); do
  mkdir -p /mnt/disks-bind/vol${i}
  mkdir -p /mnt/disks/vol${i}
  mount --bind /mnt/disks-bind/vol${i} /mnt/disks/vol${i}
done
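
Note that these bind mounts do not survive a reboot. A quick check, plus one way (an assumption; adjust the paths to your layout) to persist them via /etc/fstab:

# verify the mount points the provisioner will discover
mount | grep /mnt/disks
# to keep them across reboots, add fstab entries like:
# /mnt/disks-bind/vol1  /mnt/disks/vol1  none  bind  0 0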

Deploy the provisioner

# create the mount points; run this on every node
sh ./lvp.sh

# pull the image
docker pull quay.io/external_storage/local-volume-provisioner:v2.3.4
# optionally save/load the image for nodes without registry access
#docker save -o local-volume-provisioner-v2.3.4.tar quay.io/external_storage/local-volume-provisioner:v2.3.4
#docker load -i local-volume-provisioner-v2.3.4.tar

kubectl apply -f local-volume-provisioner.yaml
kubectl get po -n kube-system -l app=local-volume-provisioner && kubectl get pv | grep local-storage

Expected result:

$ kubectl get po -n kube-system -l app=local-volume-provisioner && kubectl get pv | grep local-storage
NAME                             READY   STATUS    RESTARTS   AGE
local-volume-provisioner-wsvbf   1/1     Running   0          14m
local-volume-provisioner-xmn8t   1/1     Running   0          14m
local-pv-3a6d09fd                          61Gi       RWO            Delete           Available                            local-storage            2m45s
local-pv-53aadb5a                          61Gi       RWO            Delete           Available                            local-storage            2m45s
local-pv-68a5e781                          61Gi       RWO            Delete           Available                            local-storage            2m45s
local-pv-7077366c                          61Gi       RWO            Delete           Available                            local-storage            14m
local-pv-814f6123                          61Gi       RWO            Delete           Available                            local-storage            14m
local-pv-912b9c0a                          61Gi       RWO            Delete           Available                            local-storage            14m
local-pv-a361c37f                          61Gi       RWO            Delete           Available                            local-storage            14m
local-pv-af1f3de0                          61Gi       RWO            Delete           Available                            local-storage            2m45s
local-pv-e6354605                          61Gi       RWO            Delete           Available                            local-storage            14m
local-pv-ec4cb727                          61Gi       RWO            Delete           Available                            local-storage            2m45s
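
Each discovered mount point becomes a PV whose local.path is that mount point and whose nodeAffinity pins it to the discovering node. You can confirm this by inspecting any of the generated PVs (the name is taken from the listing above):

kubectl get pv local-pv-3a6d09fd -o yaml
# look for:
#   spec.local.path:        the discovered mount point, e.g. one of the /mnt/disks/volN dirs
#   spec.nodeAffinity:      kubernetes.io/hostname In [<node name>]
#   spec.storageClassName:  local-storage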

Verification (pod.yaml):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: local-test
spec:
  serviceName: "local-service"
  replicas: 3
  selector:
    matchLabels:
      app: local-test
  template:
    metadata:
      labels:
        app: local-test
    spec:
      containers:
      - name: test-container
        image: busybox
        command:
        - "/bin/sh"
        args:
        - "-c"
        - "sleep 100000"
        volumeMounts:
        - name: local-vol
          mountPath: /tmp
  volumeClaimTemplates:
  - metadata:
      name: local-vol
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-storage"
      resources:
        requests:
          storage: 2Gi

kubectl apply -f pod.yaml
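
A sketch of what to verify afterwards: each StatefulSet replica gets its own claim named <template>-<statefulset>-<ordinal>, and each should bind to one of the local PVs on the node where the replica is scheduled:

kubectl get pvc
# local-vol-local-test-0, local-vol-local-test-1, local-vol-local-test-2
# should each be Bound to a local-pv-* volume
kubectl get po -l app=local-test -o wide
# the NODE column shows where each Pod (and therefore its PV) lives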

emptyDir

An emptyDir volume starts out as an empty directory and exists only while the Pod is running. When the Pod is stopped or dies, all data in the directory is deleted (if only a Container dies, the data is not deleted). It can be shared by different Containers within the same Pod.

emptyDir is mainly used as scratch space that an application does not need to keep permanently, and for sharing data between containers in the same Pod.

Below is a simple example that mounts an emptyDir volume at /cache:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
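
As mentioned above, the same emptyDir can be mounted by several containers in one Pod. A minimal sketch of sharing data this way (container names, images, and paths are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: shared-emptydir
spec:
  containers:
  - name: writer
    image: busybox
    command: ["/bin/sh", "-c", "while true; do date >> /data/out.log; sleep 5; done"]
    volumeMounts:
    - mountPath: /data
      name: shared
  - name: reader
    image: busybox
    command: ["/bin/sh", "-c", "touch /data/out.log; tail -f /data/out.log"]
    volumeMounts:
    - mountPath: /data
      name: shared
  volumes:
  - name: shared
    emptyDir: {}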

By default, emptyDir creates the temporary directory on the filesystem of the host the Pod runs on.

You can also set emptyDir.medium to "Memory" to back the volume with a memory-based filesystem. This is faster, but the available space is smaller, and the data is lost if the host reboots.
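
A sketch of a memory-backed emptyDir; the optional sizeLimit caps how much space the volume may use (the names and values here are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: memory-cache
spec:
  containers:
  - image: busybox
    name: test-container
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory
      sizeLimit: 64Mi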

hostPath

hostPath mounts a path on the host machine into the Pod. This looks a lot like local, but there is a difference.

With hostPath, the node a Pod runs on is arbitrary. Suppose the Pod first comes up on Node-A: a directory is created on Node-A and mounted into the Pod. If the Pod later crashes and restarts, it may be scheduled onto Node-B, where a new directory is created and mounted, and the data previously saved on Node-A is lost to the Pod.

With local, the Pod and the directory it uses always stay on the same host.

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory

Typical use cases: reading host system paths such as /sys, or reading the host's Docker state under /var/lib/docker. These are fixed paths that exist on every machine, and only the local machine's information matters.
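
For example, a sketch of reading the host's Docker directory read-only (the Pod, container, and volume names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: docker-info
spec:
  containers:
  - image: busybox
    name: inspect
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /host/var/lib/docker
      name: docker-dir
      readOnly: true
  volumes:
  - name: docker-dir
    hostPath:
      path: /var/lib/docker
      type: Directory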

References:

https://www.cnblogs.com/moonlight-lin/p/13715817.html

https://docs.pingcap.com/zh/tidb-in-kubernetes/stable/configure-storage-class#%E6%9C%AC%E5%9C%B0-pv-%E9%85%8D%E7%BD%AE

https://www.jianshu.com/p/436945a25e9f
