
Kubernetes Data Persistence with StatefulSet (Automatic PVC Creation)


I. Kubernetes Stateless Services vs. Stateful Services

1) Kubernetes stateless services

Characteristics of Kubernetes stateless services:
1) The running instances of the service do not store data that needs to be persisted locally, and multiple instances return exactly the same response to the same request;
2) Multiple instances can share the same persistent data, for example nginx or tomcat instances;
3) The related Kubernetes resources are ReplicaSet, ReplicationController, Deployment, and so on. Because the service is stateless, the Pods created by these controllers get random names, and scaling in does not target any specific Pod; since every instance returns the same result, any Pod can be removed.

2) Kubernetes stateful services

Characteristics of Kubernetes stateful services:
1) A stateful service is one that needs persistent data storage, or a clustered/queue-like service (for example MySQL, Kafka, ZooKeeper);
2) Each instance needs its own independent persistent storage, which is declared in Kubernetes through a volume claim template. The persistent volume claims are created from the template before the Pod is created and are bound to the Pod; more than one template can be defined;
3) The related Kubernetes resource is the StatefulSet. Because the service is stateful, every Pod has a fixed name and network identity: the Pod name is the StatefulSet name plus an ordinal (0, 1, 2, ...);
4) When scaling in, you know exactly which Pod will be removed: deletion starts from the highest ordinal (see the sketch after this list). A StatefulSet also refuses to scale in while any of its existing instances is unhealthy.
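As a minimal sketch of the ordered scale-in (the StatefulSet name "web" and its label are hypothetical, not taken from this article): scaling from 3 replicas down to 1 removes the highest ordinals first, one at a time.

# Hypothetical StatefulSet "web" with 3 replicas: web-0, web-1, web-2
kubectl scale statefulset web --replicas=1
# Watch the Pods: web-2 terminates first, then web-1; web-0 is kept
kubectl get pods -l app=web -w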

3) Differences between stateless and stateful services

The main differences are:
1) Instances: a stateless service can run one or more interchangeable instances and can be scaled up and down freely; the instances of a stateful service each have a fixed identity and their own storage, so scaling has to happen in order and cannot treat instances as interchangeable;
2) Storage volumes: a stateless service may or may not have storage volumes, and even if it has one, the data in it is not backed up; a stateful service must have storage volumes, and the disk space to allocate for them has to be specified when the service is created;
3) Data storage:
For a stateless service, all data produced at runtime (apart from logs and monitoring data) lives in the container's file system; if the instance is stopped or deleted, that data is lost and cannot be recovered. For a stateful service, everything under a directory that has a storage volume mounted can be backed up at any time; the backups can be downloaded or used to restore a new instance of the service. Data under directories without a mounted volume still cannot be backed up or preserved, and is lost in the same way when the instance is stopped or deleted.

4) StatefulSet overview
A StatefulSet is the workload controller API that Kubernetes provides for managing stateful applications. On top of ordinary Pod management, it guarantees the ordering and identity of the Pods. Like a Deployment, a StatefulSet creates Pods from a container spec; the difference is that the Pods created by a StatefulSet keep a persistent identity (for example the Pod name) throughout their lifecycle.

5) StatefulSet characteristics

1) Stable persistent storage: after a Pod is rescheduled it can still access the same persistent data; this is implemented with PVCs;
2) Stable network identity: after a Pod is rescheduled its Pod name and hostname stay the same; this is implemented with a headless Service (a Service without a cluster IP), as shown in the sketch after this list;
3) Ordered deployment and ordered scale-out: the Pods are ordered, and during deployment or scale-out they are created strictly in the defined order (from 0 to N-1; all earlier Pods must be Running and Ready before the next one starts), handled by the StatefulSet controller's ordered Pod management;
4) Ordered scale-in and ordered deletion (from N-1 down to 0).
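As a minimal sketch of the stable network identity (the StatefulSet "web", headless Service "web-svc", and namespace "default" here are hypothetical): each Pod gets a fixed DNS record of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local that survives rescheduling.

# Resolve a StatefulSet Pod by its stable DNS name from a throwaway test Pod
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup web-0.web-svc.default.svc.cluster.local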

II. Using a StatefulSet to Create PVCs Automatically

1) Set up the NFS shared storage
For convenience, the NFS storage is deployed directly on the master node.

[root@master ~]# yum -y install nfs-utils rpcbind
[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master ~]# mkdir /nfsdata
[root@master ~]# systemctl start nfs-server
[root@master ~]# systemctl start rpcbind
[root@master ~]# showmount -e
Export list for master:
/nfsdata *
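The worker nodes also need the NFS client utilities so the kubelet can mount the export. This step is not in the original walkthrough but is usually required; run it on the workers (node01 and node02 in this lab).

yum -y install nfs-utils
showmount -e 192.168.10.52    # should list /nfsdata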

2) Create the RBAC authorization

[root@master ~]# vim rbac-rolebind.yaml
apiVersion: v1                            # a ServiceAccount used for authentication
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1        # cluster-wide rules
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
   -  apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
   -  apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
   -  apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
   -  apiGroups: [""]
      resources: ["events"]
      verbs: ["watch", "create", "update", "patch"]
   -  apiGroups: [""]
      resources: ["services", "endpoints"]
      verbs: ["get","create","list", "watch","update"]
   -  apiGroups: ["extensions"]
      resources: ["podsecuritypolicies"]
      resourceNames: ["nfs-provisioner"]
      verbs: ["use"]
---
kind: ClusterRoleBinding                # bind the ServiceAccount to the cluster rules
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default                    # required field, otherwise an error is reported
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@master ~]# kubectl apply -f rbac-rolebind.yaml 
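An optional check (not in the original article) that the three objects were created with the names used in the manifest:

kubectl get serviceaccount nfs-provisioner
kubectl get clusterrole nfs-provisioner-runner
kubectl get clusterrolebinding run-nfs-provisioner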

3) Create the nfs-deployment resource

[root@master ~]# vim nfs-deployment.yaml
apiVersion: extensions/v1beta1          # removed in Kubernetes 1.16; on newer clusters use apps/v1 and add a selector
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1                              # one replica
  strategy:
    type: Recreate                      # use the Recreate update strategy
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner            # the ServiceAccount created in the rbac yaml file
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner     # image to use
          volumeMounts:
            - name: nfs-client-root
              mountPath:  /persistentvolumes             # mount point inside the container
          env:
            - name: PROVISIONER_NAME           # environment variable naming this provisioner
              value: zjz
            - name: NFS_SERVER                      # environment variable with the NFS server IP address
              value: 192.168.10.52
            - name: NFS_PATH                       # environment variable with the exported directory on the NFS server
              value: /nfsdata
      volumes:                                                # the NFS server IP and path mounted into the container
        - name: nfs-client-root
          nfs:
            server: 192.168.10.52
            path: /nfsdata
[root@master ~]# kubectl apply -f nfs-deployment.yaml
[root@master ~]# kubectl get pod 
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-74cf5dd55f-7fvr5   1/1     Running   0          44s
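Optionally (not part of the original steps), check the provisioner's logs to confirm it started and is watching for claims; the exact log lines depend on the image version. The Pod name is taken from the output above.

kubectl logs nfs-client-provisioner-74cf5dd55f-7fvr5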

4) Create the SC (StorageClass)

[root@master ~]# vim sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: stateful-nfs
  namespace: xiaojiang-test         # a StorageClass is cluster-scoped, so this namespace field has no effect and can be dropped
provisioner: zjz                # must match the value of the PROVISIONER_NAME env variable in nfs-client-provisioner
reclaimPolicy: Retain               # set the reclaim policy to Retain (manual release)
[root@master ~]# kubectl apply -f sc.yaml 
[root@master ~]# kubectl get StorageClass
NAME           PROVISIONER   AGE
stateful-nfs   zjz           7s
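Optional, and not required for this walkthrough: if you want PVCs that specify no class to use this StorageClass automatically, you could mark it as the cluster's default.

kubectl patch storageclass stateful-nfs \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'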

5) Create the Pods

[root@master ~]# vim statefulset.yaml 
apiVersion: v1
kind: Service
metadata:
  name: headless-svc                    # the name already suggests this is a headless Service
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
    name: myweb
  selector:
    app: headless-pod
  clusterIP: None                        # no cluster IP is allocated, so the Service has no load-balancing capability
---
apiVersion: apps/v1
kind: StatefulSet                          # the controller that manages the stateful Pods
metadata:
  name: statefulset-test
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - image: httpd
        name: myhttpd
        ports:
        - containerPort: 80
          name: httpd
        volumeMounts:
        - mountPath: /usr/local/apache2/htdocs
          name: test
  volumeClaimTemplates:                       # template used to create the PVCs
  - metadata:
      name: test
      annotations:  # specify the StorageClass (on newer clusters spec.storageClassName can be used instead)
        volume.beta.kubernetes.io/storage-class: stateful-nfs
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
[root@master ~]# kubectl apply -f statefulset.yaml 
[root@master ~]# kubectl get pod  # this takes a little while
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-74cf5dd55f-7fvr5   1/1     Running   0          4m54s
statefulset-test-0                        1/1     Running   0          94s
statefulset-test-1                        1/1     Running   0          55s
statefulset-test-2                        1/1     Running   0          20s
[root@master ~]# kubectl get pv   # list the PVs
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS   REASON   AGE
pvc-6c4d3c6f-6279-4ea6-ad8e-10d3098ea0eb   100Mi      RWO            Delete           Bound    default/test-statefulset-test-2   stateful-nfs            61s
pvc-d01c09ab-1399-49d9-bde6-812ad557f952   100Mi      RWO            Delete           Bound    default/test-statefulset-test-0   stateful-nfs            2m15s
pvc-e2257b8f-9deb-4398-85ed-b540a5880e08   100Mi      RWO            Delete           Bound    default/test-statefulset-test-1   stateful-nfs            96s
[root@master ~]# kubectl get pvc  # list the PVCs
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-statefulset-test-0   Bound    pvc-d01c09ab-1399-49d9-bde6-812ad557f952   100Mi      RWO            stateful-nfs   2m20s
test-statefulset-test-1   Bound    pvc-e2257b8f-9deb-4398-85ed-b540a5880e08   100Mi      RWO            stateful-nfs   101s
test-statefulset-test-2   Bound    pvc-6c4d3c6f-6279-4ea6-ad8e-10d3098ea0eb   100Mi      RWO            stateful-nfs   66s
[root@master ~]# ls /nfsdata/
default-test-statefulset-test-0-pvc-d01c09ab-1399-49d9-bde6-812ad557f952
default-test-statefulset-test-1-pvc-e2257b8f-9deb-4398-85ed-b540a5880e08
default-test-statefulset-test-2-pvc-6c4d3c6f-6279-4ea6-ad8e-10d3098ea0eb
# Write some test content for each Pod. Each PVC is named <template-name>-<pod-name>, and the provisioner names each NFS directory <namespace>-<pvc-name>-<pv-name>.
[root@master ~]# echo "0000" > /nfsdata/default-test-statefulset-test-0-pvc-d01c09ab-1399-49d9-bde6-812ad557f952/index.html
[root@master ~]# echo "1111" > /nfsdata/default-test-statefulset-test-1-pvc-e2257b8f-9deb-4398-85ed-b540a5880e08/index.html
[root@master ~]# echo "2222" > /nfsdata/default-test-statefulset-test-2-pvc-6c4d3c6f-6279-4ea6-ad8e-10d3098ea0eb/index.html
# Look up the Pod IPs for testing
[root@master ~]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-74cf5dd55f-7fvr5   1/1     Running   0          8m49s   10.244.1.2   node01   <none>           <none>
statefulset-test-0                        1/1     Running   0          5m29s   10.244.2.2   node02   <none>           <none>
statefulset-test-1                        1/1     Running   0          4m50s   10.244.1.3   node01   <none>           <none>
statefulset-test-2                        1/1     Running   0          4m15s   10.244.2.3   node02   <none>           <none>
# Test
[root@master ~]# curl 10.244.1.3
1111
[root@master ~]# curl 10.244.2.3
2222
[root@master ~]# curl 10.244.2.2
0000
[root@master ~]# curl -I 10.244.2.2
HTTP/1.1 200 OK
Date: Fri, 21 Aug 2020 03:45:16 GMT
Server: Apache/2.4.46 (Unix)
Last-Modified: Fri, 21 Aug 2020 03:33:50 GMT
ETag: "5-5ad5ae7c9eb7a"
Accept-Ranges: bytes
Content-Length: 5
Content-Type: text/html
# As you can see, the service currently serving the web pages is Apache

# Check the test content from inside the Pods, in the corresponding directory
[root@master ~]# kubectl exec statefulset-test-0 cat /usr/local/apache2/htdocs/index.html
0000
[root@master ~]# kubectl exec statefulset-test-1 cat /usr/local/apache2/htdocs/index.html
1111
[root@master ~]# kubectl exec statefulset-test-2 cat /usr/local/apache2/htdocs/index.html
2222
# Delete a Pod and test whether its data is lost
[root@master ~]# kubectl delete pod statefulset-test-2
pod "statefulset-test-2" deleted
# After the deletion, a replacement Pod is created automatically
[root@master ~]# kubectl get pod
NAME                                      READY   STATUS              RESTARTS   AGE
nfs-client-provisioner-74cf5dd55f-7fvr5   1/1     Running             0          15m
statefulset-test-0                        1/1     Running             0          12m
statefulset-test-1                        1/1     Running             0          11m
statefulset-test-2                        0/1     ContainerCreating   0          12s
# The new Pod has finished starting
[root@master ~]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-74cf5dd55f-7fvr5   1/1     Running   0          15m
statefulset-test-0                        1/1     Running   0          12m
statefulset-test-1                        1/1     Running   0          11m
statefulset-test-2                        1/1     Running   0          18s
# Check the data again
[root@master ~]# kubectl exec statefulset-test-2 cat /usr/local/apache2/htdocs/index.html
2222
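As an extra check (a sketch, not run in the original article), you can confirm that the recreated Pod re-attached the same PVC, which is why the data is still there:

kubectl get pvc test-statefulset-test-2            # still Bound to the same PV
kubectl describe pod statefulset-test-2 | grep -A2 ClaimName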

6) Update the Pods and scale out (pay attention to the places marked with # comments)

[root@master ~]# vim statefulset.yaml 
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
    name: myweb
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
spec:
  updateStrategy:
    rollingUpdate:
      partition: 2                           # default is 0 (all Pods are updated); 2 means only Pods with an ordinal >= 2 are updated
  serviceName: headless-svc
  replicas: 10
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - image: nginx                       # change the image used when updating/scaling out
        name: myhttpd
        ports:
        - containerPort: 80
          name: httpd
        volumeMounts:
        - mountPath: /usr/share/nginx/html/                 # change to the document root inside the new container
          name: test
  volumeClaimTemplates:
  - metadata:
      name: test
      annotations:  # specify the StorageClass
        volume.beta.kubernetes.io/storage-class: stateful-nfs
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
[root@master ~]# kubectl apply -f statefulset.yaml 
[root@master ~]# kubectl get pod -o wide   # this takes a little while
NAME                                      READY   STATUS        RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-74cf5dd55f-7fvr5   1/1     Running       0          31m     10.244.1.2   node01   <none>           <none>
statefulset-test-0                        1/1     Running       0          28m     10.244.2.2   node02   <none>           <none>
statefulset-test-1                        1/1     Running       0          28m     10.244.1.3   node01   <none>           <none>
statefulset-test-2                        0/1     Terminating   0          16m     10.244.2.4   node02   <none>           <none>
statefulset-test-3                        1/1     Running       0          2m34s   10.244.1.4   node01   <none>           <none>
statefulset-test-4                        1/1     Running       0          2m10s   10.244.2.5   node02   <none>           <none>
statefulset-test-5                        1/1     Running       0          102s    10.244.1.5   node01   <none>           <none>
statefulset-test-6                        1/1     Running       0          83s     10.244.1.6   node01   <none>           <none>
statefulset-test-7                        1/1     Running       0          63s     10.244.2.6   node02   <none>           <none>
statefulset-test-8                        1/1     Running       0          42s     10.244.1.7   node01   <none>           <none>
statefulset-test-9                        1/1     Running       0          23s     10.244.2.7   node02   <none>           <none>
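To watch the partitioned rolling update finish, a sketch like the following can be used (not part of the original article; the exact output will vary):

kubectl rollout status statefulset statefulset-test
# Print the image used by each Pod to see which ordinals were updated
kubectl get pods -l app=headless-pod \
  -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image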
[root@master ~]# ls /nfsdata/
default-test-statefulset-test-0-pvc-d01c09ab-1399-49d9-bde6-812ad557f952
default-test-statefulset-test-1-pvc-e2257b8f-9deb-4398-85ed-b540a5880e08
default-test-statefulset-test-2-pvc-6c4d3c6f-6279-4ea6-ad8e-10d3098ea0eb
default-test-statefulset-test-3-pvc-95724e2b-e30a-4c8c-a56d-9b9aa46d286c
default-test-statefulset-test-4-pvc-8d93522b-ccf6-492e-9adf-93a933214398
default-test-statefulset-test-5-pvc-cfbe5307-e756-487e-b03f-2edc62a86884
default-test-statefulset-test-6-pvc-e965f491-e04e-4488-9ee9-8069b2a224f1
default-test-statefulset-test-7-pvc-b9ca6c86-63f1-4b3a-9461-d1e844029d97
default-test-statefulset-test-8-pvc-7ff2c99a-34fe-4de3-bb70-d842d68b74d7
default-test-statefulset-test-9-pvc-3f01625d-28ba-4655-a3c8-54a4b8a89a81
[root@master ~]# ls /nfsdata/ | wc -l
10
[root@master ~]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-74cf5dd55f-7fvr5   1/1     Running   0          33m     10.244.1.2   node01   <none>           <none>
statefulset-test-0                        1/1     Running   0          29m     10.244.2.2   node02   <none>           <none>
statefulset-test-1                        1/1     Running   0          29m     10.244.1.3   node01   <none>           <none>
statefulset-test-2                        1/1     Running   0          62s     10.244.2.8   node02   <none>           <none>
statefulset-test-3                        1/1     Running   0          3m45s   10.244.1.4   node01   <none>           <none>
statefulset-test-4                        1/1     Running   0          3m21s   10.244.2.5   node02   <none>           <none>
statefulset-test-5                        1/1     Running   0          2m53s   10.244.1.5   node01   <none>           <none>
statefulset-test-6                        1/1     Running   0          2m34s   10.244.1.6   node01   <none>           <none>
statefulset-test-7                        1/1     Running   0          2m14s   10.244.2.6   node02   <none>           <none>
statefulset-test-8                        1/1     Running   0          113s    10.244.1.7   node01   <none>           <none>
statefulset-test-9                        1/1     Running   0          94s     10.244.2.7   node02   <none>           <none>
# Because the configuration sets partition: 2, only Pods with an ordinal >= 2 are updated: statefulset-test-0 and statefulset-test-1 still run Apache, while statefulset-test-2 and the newly created Pods run nginx
[root@master ~]# curl -I 10.244.2.2
HTTP/1.1 200 OK
Date: Fri, 21 Aug 2020 04:02:20 GMT
Server: Apache/2.4.46 (Unix)
Last-Modified: Fri, 21 Aug 2020 03:33:50 GMT
ETag: "5-5ad5ae7c9eb7a"
Accept-Ranges: bytes
Content-Length: 5
Content-Type: text/html

[root@master ~]# curl -I 10.244.1.4
HTTP/1.1 403 Forbidden
Server: nginx/1.19.2
Date: Fri, 21 Aug 2020 04:02:37 GMT
Content-Type: text/html
Content-Length: 153
Connection: keep-alive

This shows that during scale-out and scale-in the Pods are still created and deleted in order, and the Pods below the update partition are left untouched; that is exactly the defining characteristic of a StatefulSet.
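Finally, a sketch of the ordered scale-in (not run in the original article): scaling back down removes Pods from the highest ordinal first, while the automatically created PVCs and the data on the NFS server are kept.

kubectl scale statefulset statefulset-test --replicas=3
kubectl get pvc            # the ten PVCs are still Bound
ls /nfsdata/ | wc -l       # still 10 directories on the NFS server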