Kubernetes (k8s) Study Notes (14): k8s Storage - Persistent Volumes (PV) (Dynamic PV + StatefulSet)
Introduction to dynamic PV provisioning
Storage plugins in Kubernetes that support dynamic provisioning:
https://kubernetes.io/docs/concepts/storage/storage-classes/
The core of the dynamic provisioning mechanism is the StorageClass API object.
A StorageClass declares a storage plugin (provisioner) that is used to create PVs automatically.
A StorageClass gives you a way to describe a "class" of storage; different classes might map to different quality-of-service levels, backup policies, or other policies.
Every StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used whenever a PersistentVolume of that class needs to be dynamically provisioned.
StorageClass attributes
Provisioner:
Determines which volume plugin is used to provision PVs; this field is required. It can name either an internal (in-tree) provisioner or an external one. Code for external provisioners lives in kubernetes-incubator/external-storage, which includes NFS, Ceph, and others.
Reclaim policy:
The reclaimPolicy field sets the reclaim policy of the PersistentVolumes the class creates. It can be Delete or Retain, and defaults to Delete when not specified.
For more attributes see: https://kubernetes.io/zh/docs/concepts/storage/storage-classes/
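For illustration, a minimal StorageClass manifest bringing these fields together might look like the following sketch (the class name and provisioner here are placeholders, not objects created in this article):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs              # hypothetical class name
provisioner: example.com/nfs     # must match the deployed provisioner's name
parameters:
  archiveOnDelete: "true"        # provisioner-specific parameter
reclaimPolicy: Retain            # Delete (the default) or Retain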
NFS Client Provisioner is an automatic provisioner that uses NFS as its backing storage and automatically creates PVs for matching PVCs. It does not provide NFS storage itself; an external NFS server must already exist.
On the NFS server, each PV is provided as a directory named ${namespace}-${pvcName}-${pvName}.
When a PV is reclaimed with archiving enabled, the directory is renamed to archived-${namespace}-${pvcName}-${pvName}.
nfs-client-provisioner source: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client
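For example, assuming an NFS export at /nfsdata (a placeholder path), the server-side directories might look like this after one PVC has been provisioned and another deleted with archiving enabled:
$ ls /nfsdata
archived-default-nfs-pv2-pvc-<uuid-of-deleted-volume>
default-nfs-pv1-pvc-394e57eb-1467-4857-84fd-509266effdbd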
Example: dynamically provisioning PVs with NFS
1. Configure authorization (RBAC-based access control):
$ vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
$ kubectl create -f rbac.yaml
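A quick sanity check that the RBAC objects exist:
$ kubectl get serviceaccount nfs-client-provisioner
$ kubectl get clusterrole nfs-client-provisioner-runner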
2. Deploy the NFS Client Provisioner
$ vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: westos.org/nfs   # must match the provisioner field in the StorageClass
            - name: NFS_SERVER
              value: 172.25.0.1       # replace with your NFS server address
            - name: NFS_PATH
              value: /nfsdata         # replace with your NFS export path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.25.0.1        # replace with your NFS server address
            path: /nfsdata            # replace with your NFS export path
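Create the deployment and confirm the provisioner Pod comes up (the Pod name suffix is generated, so yours will differ):
$ kubectl create -f deployment.yaml
$ kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-xxxxxxxxxx-xxxxx   1/1     Running   0          30s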
3. Create the NFS StorageClass
$ vim class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: westos.org/nfs
parameters:
  archiveOnDelete: "false"
$ kubectl create -f class.yaml
$ kubectl get storageclasses
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-storage westos.org/nfs Delete Immediate false 70m
4. Create a PVC:
$ vim test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pv1
  annotations:
    # legacy form of spec.storageClassName
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
$ kubectl create -f test-claim.yaml
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pv1 Bound pvc-394e57eb-1467-4857-84fd-509266effdbd 100Mi RWX managed-nfs-storage 6s
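The provisioner should also have created a PV bound to this claim; its name matches the VOLUME column above (ages will differ in your cluster):
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS          AGE
pvc-394e57eb-1467-4857-84fd-509266effdbd   100Mi      RWX            Delete           Bound    default/nfs-pv1   managed-nfs-storage   10s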
If you change the parameters in class.yaml, delete and recreate the StorageClass for the change to take effect.
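For example, enabling archiving is just a change to the parameters block (with this set, reclaimed volumes are renamed with the archived- prefix described earlier instead of being deleted):
parameters:
  archiveOnDelete: "true"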
5. Create a test Pod:
$ vim test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: nfs-pv1
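Create the Pod, then verify on the NFS server (assuming the /nfsdata export from earlier) that the SUCCESS file landed in the directory backing the claim:
$ kubectl create -f test-pod.yaml
$ ls /nfsdata/default-nfs-pv1-pvc-394e57eb-1467-4857-84fd-509266effdbd
SUCCESS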
You can add a second PVC in the same way; the provisioner will automatically provision and bind another PV for it.
A default StorageClass is used to dynamically provision storage for PersistentVolumeClaims that do not request any particular class (there can be only one default StorageClass).
If there is no default StorageClass and a PVC does not set storageClassName either, the PVC can only bind to PVs whose storageClassName is also "".
$ kubectl patch storageclass <your-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
(In this example a second StorageClass named global, using the same NFS provisioner, was created and then patched to be the default.)
$ kubectl get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
global (default) westos.org/nfs Delete Immediate false 5s
managed-nfs-storage westos.org/nfs Delete Immediate false 3h7m
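With a default class in place, a PVC that names no class at all is served by it. A minimal sketch (the claim name test-claim-default is hypothetical):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim-default   # no storage class requested anywhere
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi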
How a StatefulSet maintains Pod topology state through a Headless Service
1. Create the Headless Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc   # must match .spec.serviceName in the StatefulSet below
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
2. Create the StatefulSet controller
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-svc"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              name: web
3. Apply the manifest and check the result:
$ kubectl create -f statefulset.yaml
service/nginx-svc created
statefulset.apps/web created
$ kubectl get statefulsets.apps
NAME READY AGE
web 3/3 2m56s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d20h
nginx-svc ClusterIP None <none> 80/TCP 14s
$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
web-0 1/1 Running 0 7s 10.244.1.122 server2
web-1 1/1 Running 0 6s 10.244.2.113 server3
web-2 1/1 Running 0 5s 10.244.0.62 server1
A StatefulSet abstracts application state into two cases:
Topology state: the application's instances must start in a particular order, and a newly created Pod must have the same network identity as the Pod it replaces.
Storage state: multiple instances of the application are each bound to different storage data.
A StatefulSet numbers all of its Pods with ordinals starting from 0; each Pod is named <statefulset name>-<ordinal>.
When a Pod is deleted and recreated, its network identity stays the same: the topology state is pinned down by name plus ordinal, and each Pod gets a fixed, unique access point in the form of its own DNS record.
$ dig -t A web-0.nginx-svc.default.svc.cluster.local @10.96.0.10
...
;; ANSWER SECTION:
web-0.nginx-svc.default.svc.cluster.local. 30 IN A 10.244.1.122
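A quick way to see the stable identity (a sketch; IPs and ages will differ in your cluster): delete a Pod, watch the controller recreate one with the same name, and resolve the same record again. The name and DNS entry stay fixed even though the Pod IP may change:
$ kubectl delete pod web-0
pod "web-0" deleted
$ kubectl get pod web-0
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          8s
$ dig -t A web-0.nginx-svc.default.svc.cluster.local @10.96.0.10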
The StatefulSet controller and storage state
The PV and PVC design is what makes it possible for a StatefulSet to manage storage state:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-svc"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        storageClassName: managed-nfs-storage   # the StorageClass created earlier
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
Pods are also created strictly in ordinal order. For example, web-1 stays Pending until web-0 is Running and its Conditions show Ready.
The StatefulSet also allocates and creates a PVC carrying the same ordinal for each Pod. Kubernetes can then bind each PVC to a matching PV through the PersistentVolume mechanism, guaranteeing every Pod an independent volume of its own.
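With the manifest above, each claim is named <template name>-<pod name>, so you should see something like this (volume names and ages will differ):
$ kubectl get pvc
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
www-web-0   Bound    pvc-...    1Gi        RWO            managed-nfs-storage   1m
www-web-1   Bound    pvc-...    1Gi        RWO            managed-nfs-storage   1m
www-web-2   Bound    pvc-...    1Gi        RWO            managed-nfs-storage   1m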
Scaling with kubectl
First, identify the StatefulSet you want to scale and make sure the application can safely be scaled:
$ kubectl get statefulsets <stateful-set-name>
Change the number of replicas of the StatefulSet:
$ kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
If the StatefulSet was originally created with kubectl apply or kubectl create --save-config, update .spec.replicas in the StatefulSet manifest and then run kubectl apply:
$ kubectl apply -f <stateful-set-file-updated>
You can also edit the field in place with kubectl edit:
$ kubectl edit statefulsets <stateful-set-name>
Or use kubectl patch:
$ kubectl patch statefulsets <stateful-set-name> -p '{"spec":{"replicas":<new-replicas>}}'
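Scaling is ordered too: scaling up creates Pods one at a time in ascending ordinal order, and scaling down terminates them in descending order. A quick sketch against the web StatefulSet from above:
$ kubectl scale statefulsets web --replicas=5   # creates web-3, then web-4
$ kubectl scale statefulsets web --replicas=3   # terminates web-4, then web-3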