Kubernetes Storage: ConfigMap, Secret, Volumes
I. ConfigMap configuration management
- A ConfigMap stores configuration data as key-value pairs.
- The ConfigMap resource provides a way to inject configuration data into Pods.
- It decouples images from configuration files, making images portable and reusable.
1. Ways to create a ConfigMap
1) From literal values
[[email protected] ~]# kubectl get pod
No resources found in default namespace.
[[email protected] ~]# kubectl get cm
NAME DATA AGE
kube-root-ca.crt 1 24d
[[email protected] ~]# kubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2
[[email protected] ~]# kubectl describe cm my-config
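Besides describe, the stored key-value pairs can be dumped as YAML; a quick way to inspect what was created (assumes the my-config object from above):
kubectl get cm my-config -o yaml   # the data: section shows key1/key2 in plain text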
2) From a file
[[email protected] ~]# kubectl create configmap my-config-2 --from-file=/etc/resolv.conf
configmap/my-config-2 created
[[email protected] ~]# cat /etc/resolv.conf
nameserver 114.114.114.114
[[email protected] ~]# kubectl describe cm my-config-2
3) From a directory
[[email protected] ~]# mkdir congfigmap
[[email protected] ~]# cd congfigmap/
[[email protected] congfigmap]# mkdir test
[[email protected] congfigmap]# cp /etc/resolv.conf test/
[[email protected] congfigmap]# cp /etc/fstab test/
[[email protected] congfigmap]# ls test/
fstab resolv.conf
[[email protected] congfigmap]# kubectl create configmap my-config-3 --from-file=test
configmap/my-config-3 created
[[email protected] congfigmap]# kubectl describe cm my-config-3  # one key-value pair is generated per file
4) From a YAML manifest
[[email protected] congfigmap]# vim cm1.yaml
[[email protected] congfigmap]# cat cm1.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: cm1-config
data:
db_host: "192.168.100.250"
db_port: "3306"
[[email protected] congfigmap]# kubectl apply -f cm1.yaml
configmap/cm1-config created
[[email protected] congfigmap]# kubectl describe cm cm1-config
2. Using a ConfigMap
1) Set environment variables from a ConfigMap
[[email protected] congfigmap]# vim pod1.yaml
[[email protected] congfigmap]# cat pod1.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod1
spec:
containers:
- name: pod1
image: busyboxplus
command: ["/bin/sh", "-c", "env"]
env:
- name: key1
valueFrom:
configMapKeyRef:
name: cm1-config
key: db_host
- name: key2
valueFrom:
configMapKeyRef:
name: cm1-config
key: db_port
restartPolicy: Never
[[email protected] congfigmap]# kubectl apply -f pod1.yaml
pod/pod1 created
[[email protected] congfigmap]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pod1 0/1 Completed 0 18s
[[email protected] congfigmap]# kubectl logs pod1
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=pod1
SHLVL=1
HOME=/
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
key1=192.168.100.250
key2=3306
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
[[email protected] congfigmap]# kubectl describe cm cm1-config  # key1/key2 correspond to db_host/db_port
[[email protected] congfigmap]# vim pod2.yaml
[[email protected] congfigmap]# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod2
spec:
containers:
- name: pod2
image: busyboxplus
command: ["/bin/sh", "-c", "env"]
envFrom:
- configMapRef:
name: cm1-config
restartPolicy: Never
[[email protected] congfigmap]# kubectl apply -f pod2.yaml
pod/pod2 created
[[email protected] congfigmap]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pod1 0/1 Completed 0 8m7s
pod2 0/1 Completed 0 78s
[[email protected] congfigmap]# kubectl logs pod2
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
HOSTNAME=pod2
SHLVL=1
HOME=/
db_port=3306
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
db_host=192.168.100.250
2) Use ConfigMap values in command-line arguments
[[email protected] congfigmap]# vim pod2.yaml
[[email protected] congfigmap]# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod2
spec:
containers:
- name: pod2
image: busyboxplus
command: ["/bin/sh", "-c", "echo $(db_host)"]#把db_port和de_host输出来
envFrom:
- configMapRef:
name: cm1-config
restartPolicy: Never
[[email protected] congfigmap]# kubectl apply -f pod2.yaml
pod/pod2 created
[[email protected] congfigmap]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pod2 0/1 Completed 0 11s
[[email protected] congfigmap]# kubectl logs pod2
192.168.100.250
3) Consume a ConfigMap through a volume
[[email protected] congfigmap]# vim pod3.yaml
[[email protected] congfigmap]# cat pod3.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod3
spec:
containers:
- name: pod3
image: busyboxplus
command: ["/bin/sh", "-c", "ls -l /config/"]
volumeMounts:
- name: config-volume
mountPath: /config
volumes:
- name: config-volume
configMap:
name: cm1-config
restartPolicy: Never
[[email protected] congfigmap]# kubectl apply -f pod3.yaml
pod/pod3 created
[[email protected] congfigmap]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pod2 0/1 Completed 0 5m1s
pod3 0/1 Completed 0 13s
[[email protected] congfigmap]# kubectl logs pod3
total 0
lrwxrwxrwx 1 root root 14 Feb 24 18:18 db_host -> ..data/db_host
lrwxrwxrwx 1 root root 14 Feb 24 18:18 db_port -> ..data/db_port
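The symlinks above are how kubelet updates the volume atomically: every key links through a hidden ..data directory, which itself points at a timestamped directory; on an update kubelet writes a fresh timestamped directory and swaps the ..data link in a single step. A way to observe this (sketch; <pod> stands for any long-running pod that mounts the same ConfigMap volume):
kubectl exec <pod> -- ls -la /config   # shows ..data -> ..<timestamp> plus the per-key symlinks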
[[email protected] congfigmap]# vim pod3.yaml
command: ["/bin/sh", "-c", "cat /config/db_host"]
[[email protected] congfigmap]# kubectl delete pod --all
pod "pod2" deleted
pod "pod3" deleted
[[email protected] congfigmap]# kubectl apply -f pod3.yaml
pod/pod3 created
[[email protected] congfigmap]# kubectl logs pod3
[[email protected] congfigmap]# kubectl delete -f pod3.yaml
4) ConfigMap hot update
[[email protected] congfigmap]# vim pod3.yaml
apiVersion: v1
data:
db_host: 192.168.100.100
db_port: "8080"
[[email protected] congfigmap]# kubectl apply -f pod3.yaml
pod/pod3 configured
[[email protected] congfigmap]# kubectl attach pod3 -it
Defaulting container name to pod3.
Use 'kubectl describe pod/pod3 -n default' to see all of the containers in this pod.
If you don't see a command prompt, try pressing enter.
/ # cat /config/*
192.168.100.1008080/ #   # the data inside the pod updates after a delay of a few seconds
[[email protected] congfigmap]# kubectl describe cm cm1-config
## A rolling update must be triggered manually
[[email protected] congfigmap]# kubectl delete -f pod3.yaml --force
[[email protected] congfigmap]# kubectl run demo --image=myapp:v1
pod/demo created
[[email protected] congfigmap]# kubectl exec -it demo -- sh
/ # cd /etc/nginx/
conf.d/ modules/
/ # cd /etc/nginx/conf.d/
/etc/nginx/conf.d # ls
default.conf
/etc/nginx/conf.d # cat default.conf  # view the main configuration file
[[email protected] congfigmap]# kubectl delete pod demo --force
[[email protected] ~]# cp demo.yml congfigmap/
[[email protected] congfigmap]# vim demo.yml
[[email protected] congfigmap]# cat demo.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: demo
spec:
replicas: 1
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: myapp:v1
volumeMounts:
- name: config-volume
mountPath: /etc/nginx/conf.d
volumes:
- name: config-volume
configMap:
name: nginx-config
[[email protected] congfigmap]# vim default.conf
[[email protected] congfigmap]# cat default.conf
server {
listen 8080;
server_name _;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
}
[[email protected] congfigmap]# kubectl create configmap nginx-config --from-file=default.conf
configmap/nginx-config created
[[email protected] congfigmap]# kubectl get cm
NAME DATA AGE
cm1-config 2 161m
kube-root-ca.crt 1 25d
my-config 2 169m
my-config-2 1 168m
my-config-3 2 165m
nginx-config 1 8s
[[email protected] congfigmap]# kubectl apply -f demo.yml
deployment.apps/demo created
[[email protected] congfigmap]# kubectl get pod
NAME READY STATUS RESTARTS AGE
demo-75679c99b4-ggzh5 1/1 Running 0 11s
[[email protected] congfigmap]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo-75679c99b4-ggzh5 1/1 Running 0 17s 10.244.1.117 server3 <none> <none>
[[email protected] congfigmap]# curl 10.244.1.117:8080
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[[email protected] congfigmap]# kubectl exec -it demo-75679c99b4-ggzh5 -- sh
/ # cat /etc/nginx/conf.d/default.conf #就是刚才写的default.conf
server {
listen 8080;
server_name _;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
}
[[email protected] congfigmap]# kubectl edit cm nginx-config  # change the port to 80
listen 80;
[[email protected] congfigmap]# kubectl patch deployments.apps demo --patch '{"spec": {"template": {"metadata": {"annotations": {"version/config": "2021022401"}}}}}'  ## update: trigger the rollout manually by patching an annotation into the pod template; the value here is date + daily sequence number
[[email protected] congfigmap]# kubectl exec -it demo-6d558bcb5f-pjp9w -- sh
/ # cat /etc/nginx/conf.d/default.conf
server {
listen 80;
[[email protected] congfigmap]# kubectl get pod  # the old pod is deleted and recreated, now listening on 80
NAME READY STATUS RESTARTS AGE
demo-6d558bcb5f-pjp9w 1/1 Running 0 36s
# Manual trigger: deleting the pod directly also works
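On newer clusters (kubectl 1.15+) the same effect is available without hand-crafting an annotation patch; a sketch:
kubectl rollout restart deployment demo   # recreates the pods, which pick up the updated ConfigMap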
II. Secret management
- https://kubernetes.io/zh/docs/concepts/configuration/secret/
- The Secret object type holds sensitive information such as passwords, OAuth tokens, and ssh keys. Putting this information in a Secret is safer and more flexible than baking it into a Pod definition or a container image.
- Pods can use a Secret in two ways:
  as files in a volume mounted into one or more of the Pod's containers;
  as credentials used by kubelet when pulling images for the Pod.
- Secret types:
  Service Account: Kubernetes automatically creates Secrets containing API access credentials and automatically modifies Pods to use them.
  Opaque: data is stored base64-encoded and can be recovered with base64 --decode, so the protection is weak.
  kubernetes.io/dockerconfigjson: stores authentication info for a docker registry.
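Since Opaque data is only base64, producing and checking values is a one-liner in each direction (use -n so no trailing newline ends up inside the credential):
echo -n admin | base64        # YWRtaW4=
echo YWRtaW4= | base64 -d     # admin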
[[email protected] secret]# kubectl describe sa default
Name: default
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: default-token-xlsq2
Tokens: default-token-xlsq2
Events: <none>
[[email protected] secret]# kubectl get secrets
NAME TYPE DATA AGE
basic-auth Opaque 1 24h
default-token-xlsq2 kubernetes.io/service-account-token 3 25d
tls-secret kubernetes.io/tls 2 25h
[[email protected] secret]# kubectl describe pod demo-6d558bcb5f-pjp9w
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xlsq2 (ro)  # default mount path
[[email protected] secret]# kubectl exec demo-6d558bcb5f-pjp9w -- ls /var/run/secrets/kubernetes.io/serviceaccount
ca.crt
namespace
token
1) Write a Secret object
[[email protected] ~]# mkdir secret
[[email protected] ~]# cd secret/
[[email protected] secret]# vim mysecret.yaml
[[email protected] secret]# cat mysecret.yaml
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
username: YWRtaW4=
password: d2VzdG9z
[[email protected] secret]# echo YWRtaW4= | base64 -d
admin
[[email protected] secret]# echo d2VzdG9z | base64 -d
westos
[[email protected] secret]# kubectl apply -f mysecret.yaml
secret/mysecret created
[[email protected] secret]# kubectl get secrets
NAME TYPE DATA AGE
basic-auth Opaque 1 25h
default-token-xlsq2 kubernetes.io/service-account-token 3 25d
mysecret Opaque 2 10s
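The same object can be created without hand-encoding anything; a sketch of the imperative equivalent (kubectl base64-encodes the literals for you):
kubectl create secret generic mysecret --from-literal=username=admin --from-literal=password=westos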
2) Mount the Secret into a volume
[[email protected] secret]# kubectl delete deployments.apps demo
deployment.apps "demo" deleted
[[email protected] secret]# vim pod1.yaml
[[email protected] secret]# cat pod1.yaml  # myapp:v1 keeps running in the background on its own
apiVersion: v1
kind: Pod
metadata:
name: mysecret
spec:
containers:
- name: demo
image: myapp:v1
volumeMounts:
- name: secrets
mountPath: "/secret"
readOnly: true
volumes:
- name: secrets
secret:
secretName: mysecret
[[email protected] secret]# kubectl apply -f pod1.yaml
[[email protected] secret]# kubectl exec mysecret -- cat /secret/username
[[email protected] secret]# kubectl exec mysecret -- cat /secret/password
3) Project Secret keys to a specific path
[[email protected] secret]# vim pod1.yaml
[[email protected] secret]# cat pod1.yaml  # myapp:v1 keeps running in the background on its own
apiVersion: v1
kind: Pod
metadata:
name: mysecret
spec:
containers:
- name: demo
image: myapp:v1
volumeMounts:
- name: secrets
mountPath: "/secret"
readOnly: true
volumes:
- name: secrets
secret:
secretName: mysecret
items:
- key: username
path: my-group/my-username
[[email protected] secret]# kubectl apply -f pod1.yaml
[[email protected] secret]# kubectl exec mysecret -- ls /secret
my-group
4) Expose the Secret as environment variables
[[email protected] secret]# kubectl delete -f pod1.yaml
[[email protected] secret]# vim pod2.yaml
[[email protected] secret]# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
name: secret-env
spec:
containers:
- name: nginx
image: myapp:v1
env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: SECRET_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
[[email protected] secret]# kubectl apply -f pod2.yaml
pod/secret-env created
[[email protected] secret]# kubectl exec secret-env -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=secret-env
SECRET_USERNAME=admin
SECRET_PASSWORD=westos
# Environment variables are convenient to read, but they do not update live
## Storing docker registry authentication info
[[email protected] secret]# kubectl delete pod secret-env --force
[[email protected] secret]# vim pod3.yaml
[[email protected] secret]# cat pod3.yaml
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: game2048
image: reg.westos.org/westos/game2048
[[email protected] secret]# kubectl apply -f pod3.yaml
[[email protected] secret]# kubectl describe pod mypod  # the image pull fails: westos is a private repository that requires authentication
[[email protected] secret]# vim pod3.yaml
[[email protected] secret]# cat pod3.yaml
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: game2048
image: reg.westos.org/westos/game2048
imagePullSecrets:
- name: myregistrykey
[[email protected] secret]# kubectl create secret docker-registry myregistrykey --docker-server=reg.westos.org --docker-username=admin --docker-password=westos --docker-email=[email protected]
# create the myregistrykey secret
[[email protected] secret]# kubectl get secrets myregistrykey -o yaml
[[email protected] secret]# kubectl apply -f pod3.yaml  # the image pull now succeeds
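To avoid repeating imagePullSecrets in every pod spec, the key can instead be attached to the namespace's default ServiceAccount; a sketch:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'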
III. Volume configuration management
- Files in a container are stored on disk only temporarily, which causes problems for some applications. First, when a container crashes, kubelet restarts it, but the container's files are lost because the container is rebuilt in a clean state. Second, when several containers run in one Pod, they often need to share files. Kubernetes introduces the Volume abstraction to solve both problems.
- A Kubernetes volume has an explicit lifetime, the same as the Pod that encloses it. A volume therefore outlives any individual container running in the Pod, and data is preserved across container restarts. Of course, when the Pod ceases to exist, the volume ceases to exist too. More importantly, Kubernetes supports many volume types, and a Pod can use any number of them simultaneously.
- Volumes cannot be mounted inside other volumes, nor hard-linked to other volumes. Each container in the Pod must independently specify where each volume is mounted.
- Volume types supported by Kubernetes: https://kubernetes.io/docs/concepts/storage/volumes/
1. emptyDir volumes
An emptyDir volume is first created when a Pod is assigned to a node, and it exists as long as the Pod runs on that node. As the name says, the volume starts out empty. The containers in the Pod may mount the emptyDir volume at the same or different paths, but they all read and write the same files in it. When the Pod is removed from the node for any reason, the data in the emptyDir volume is deleted permanently.
kubectl get pod
kubectl delete pod mypod --force
[[email protected] ~]# mkdir volumes
[[email protected] ~]# cd volumes/
[[email protected] volumes]# vim empotydir.yaml
[[email protected] volumes]# cat empotydir.yaml
apiVersion: v1
kind: Pod
metadata:
name: vol1
spec:
containers:
- image: busyboxplus
name: vm1
stdin: true
    tty: true  # no time limit; the container keeps running
volumeMounts:
- mountPath: /cache
name: cache-volume
- name: vm2
image: myapp:v1
volumeMounts:
- mountPath: /usr/share/nginx/html
name: cache-volume
volumes:
- name: cache-volume
emptyDir:
medium: Memory
sizeLimit: 100Mi
[[email protected] volumes]# kubectl apply -f empotydir.yaml
pod/vol1 created
[[email protected] volumes]# kubectl get pod
NAME READY STATUS RESTARTS AGE
vol1 0/2 Pending 0 8s
[[email protected] volumes]# kubectl describe pod vol1
[[email protected] volumes]# kubectl get pod -o wide  # the pod's IP is 10.244.141.218
[[email protected] volumes]# curl 10.244.141.218  # reachable
[[email protected] volumes]# kubectl attach vol1 -c vm1 -it  # -c selects the container
/ # ip addr  # volume data is shared by all containers in the pod, and the IP does not change
/ # echo www.westos.org > index.html
/ # curl localhost
www.westos.org
[[email protected] volumes]# curl 10.244.141.218  # a change made in vm1 shows up through vm2 as well
www.westos.org
## Drawback of emptyDir: exceeding the memory limit gets the pod evicted, which is a risk
[[email protected] volumes]# kubectl attach vol1 -c vm1 -it  # -c selects the container
/ # dd if=/dev/zero of=bigfile bs=1M count=200  # consumes 200M of physical memory, exceeding the volume's own 100M limit
[[email protected] volumes]# kubectl get pod -w  # wait a moment
[[email protected] volumes]# kubectl get pod  # Evicted
[[email protected] volumes]# kubectl delete pod vol1
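Note that medium: Memory makes the volume a tmpfs, so whatever it holds is charged against the containers' memory accounting, which is exactly what triggered the eviction above. A disk-backed variant simply omits the medium (sketch of the volumes section only):
volumes:
- name: cache-volume
  emptyDir:
    sizeLimit: 100Mi   # backed by node disk instead of RAM; overuse still gets the pod evicted, but adds no memory pressure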
2. hostPath volumes
- A hostPath volume gives some programs a powerful escape hatch onto the node's filesystem.
- Create a directory westos on both server3 and server4, with file1 on one node and file2 on the other: the mount path westos is identical, but the content seen depends on which node the pod lands on.
- Only root can write to it.
[[email protected] volumes]# vim hostpath.yaml
[[email protected] volumes]# cat hostpath.yaml
apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: myapp:v1
name: vm1
volumeMounts:
- mountPath: /usr/share/nginx/html
name: test-volume
volumes:
- name: test-volume
hostPath:
path: /webdata
type: DirectoryOrCreate
[[email protected] volumes]# kubectl apply -f hostpath.yaml
[[email protected] ~]# cd /webdata
[[email protected] webdata]# echo www.westos.org > index.html
[[email protected] volumes]# kubectl get pod -o wide  # the pod's IP is 10.244.141.219
[[email protected] volumes]# curl 10.244.141.219  # reachable
www.westos.org
[[email protected] volumes]# kubectl delete -f hostpath.yaml  # deleting the pod does not remove the /webdata path on server3
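The type: DirectoryOrCreate used above creates /webdata on the node if it is missing. Stricter values exist if silent creation is unwanted (sketch of the volumes section only):
volumes:
- name: test-volume
  hostPath:
    path: /webdata
    type: Directory   # must already exist on the node, otherwise the pod fails to start
    # File and FileOrCreate work the same way for single files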
3. NFS volumes
[[email protected] volumes]# vim nfs.yaml
apiVersion: v1
kind: Pod
metadata:
name: nfs-pd
spec:
containers:
- image: myapp:v1
name: vm1
volumeMounts:
- mountPath: /usr/share/nginx/html
name: test-volume
volumes:
- name: test-volume
nfs:
    server: 172.25.0.1  # the node acts as an NFS client and mounts the export from server1
path: /nfsdata
[[email protected] ~]# yum install nfs-utils -y
[[email protected] ~]# vim /etc/exports
[[email protected] etc]# cat /etc/exports
/nfsdata *(rw,no_root_squash)
[[email protected] ~]# systemctl enable --now nfs
[[email protected] ~]# showmount -e
Export list for server1:
/nfsdata *
[[email protected] volumes]# kubectl apply -f nfs.yaml
[[email protected] ~]# yum install nfs-utils -y
[[email protected] volumes]# kubectl get pod
[[email protected] volumes]# kubectl describe pod nfs-pd
[[email protected] volumes]# kubectl get pod -o wide  # nfs-pd's IP is 10.244.141.220
[[email protected] volumes]# curl 10.244.141.220  # reachable
[[email protected] ~]# mkdir /nfsdata
[[email protected] ~]# cd /nfsdata
[[email protected] nfsdata]# echo www.westos.org> index.html
[[email protected] volumes]# curl 10.244.141.220  # works; give it a moment to refresh
www.westos.org
4. PersistentVolume: the relationship between PV, PVC, and Pod
A PV is a piece of cluster storage provisioned by an administrator; a Pod never consumes a PV directly, but claims storage through a PVC, and Kubernetes binds that claim to a matching PV.
1) Create persistent volumes
[[email protected] volumes]# kubectl get pv  # nothing yet
[[email protected] volumes]# kubectl delete -f nfs.yaml
[[email protected] ~]# df -h /  # check the available size
[[email protected] volumes]# vim pv1.yaml
[[email protected] volumes]# cat pv1.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
spec:
capacity:
storage: 5Gi
  volumeMode: Filesystem  # the volume is consumed as a filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: nfs
nfs:
path: /nfsdata
server: 172.25.0.1
[[email protected] volumes]# kubectl apply -f pv1.yaml
[[email protected] volumes]# kubectl get pv  # pv1 now exists
[[email protected] ~]# cd /nfsdata
[[email protected] nfsdata]# mkdir pv1 pv2 pv3
[[email protected] volumes]# vim pv1.yaml
[[email protected] volumes]# cat pv1.yaml  # three PVs, with access modes ReadWriteOnce / ReadWriteMany / ReadOnlyMany and mount points /nfsdata/pv1, /nfsdata/pv2, /nfsdata/pv3
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: nfs
nfs:
path: /nfsdata/pv1
server: 172.25.0.1
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv2
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Recycle
storageClassName: nfs
nfs:
path: /nfsdata/pv2
server: 172.25.0.1
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv3
spec:
capacity:
storage: 20Gi
volumeMode: Filesystem
accessModes:
- ReadOnlyMany
persistentVolumeReclaimPolicy: Recycle
storageClassName: nfs
nfs:
path: /nfsdata/pv3
server: 172.25.0.1
[[email protected] volumes]# kubectl apply -f pv1.yaml
[[email protected] volumes]# kubectl get pv  # three PVs (pv1, pv2, pv3), three access modes, three mount points
2) Create a PVC
[[email protected] volumes]# kubectl get pv  # the STORAGECLASS column shows nfs
[[email protected] volumes]# vim pvc.yaml
[[email protected] volumes]# cat pvc.yaml  # a claim binds only when all three conditions match: requested storage size, storageClassName, and access mode (ReadWriteOnce here)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc1
spec:
storageClassName: nfs
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
[[email protected] volumes]# kubectl apply -f pvc.yaml
[[email protected] volumes]# kubectl get pv  # pv1's STATUS is now Bound
[[email protected] volumes]# kubectl get pvc  # pvc1 is listed
[[email protected] volumes]# vim pvc.yaml
[[email protected] volumes]# cat pvc.yaml  # bind the PVC inside a pod
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc1
spec:
storageClassName: nfs
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: myapp:v1
name: nginx
volumeMounts:
- mountPath: /usr/share/nginx/html
name: nfs-pv
volumes:
- name: nfs-pv
persistentVolumeClaim:
claimName: pvc1
[[email protected] volumes]# kubectl apply -f pvc.yaml
[[email protected] volumes]# kubectl get pv
[[email protected] volumes]# kubectl get pvc
[[email protected] volumes]# kubectl get pod -o wide  # the pod's IP is 10.244.141.222
[[email protected] volumes]# curl 10.244.141.222  # fails: /nfsdata/pv1 has no index.html yet
[[email protected] nfsdata]# cd pv1
[[email protected] pv1]# echo www.westos.reg > index.html
[[email protected] nfsdata]# cd pv2
[[email protected] pv2]# echo www.rehat.reg > index.html
[[email protected] volumes]# vim pvc.yaml  # add a second PVC and a pod bound to pv2
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc1
spec:
storageClassName: nfs
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc2
spec:
storageClassName: nfs
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: myapp:v1
name: nginx
volumeMounts:
- mountPath: /usr/share/nginx/html
name: nfs-pv
volumes:
- name: nfs-pv
persistentVolumeClaim:
claimName: pvc1
---
apiVersion: v1
kind: Pod
metadata:
name: test-pd-2
spec:
containers:
- image: myapp:v1
name: nginx
volumeMounts:
- mountPath: /usr/share/nginx/html
name: nfs-pv-2
volumes:
- name: nfs-pv-2
persistentVolumeClaim:
claimName: pvc2
[[email protected] volumes]# kubectl apply -f pvc.yaml
[[email protected] volumes]# kubectl get pvc  # pvc1 and pvc2 are both bound; different pods bind different PVCs
[[email protected] volumes]# kubectl get pod -o wide  # two IPs: test-pd 10.244.141.226, test-pd-2 10.244.22.6
[[email protected] volumes]# curl 10.244.141.226
www.westos.reg
[[email protected] volumes]# curl 10.244.22.6
www.rehat.reg
5. Dynamic volumes: the NFS client provisioner
kubectl delete -f pvc.yaml
kubectl delete -f pv1.yaml
[[email protected] nfsdata]# rm -fr *
[[email protected] nfsdata]# docker pull heegor/nfs-subdir-external-provisioner:v4.0.0  # use the latest v4.0.0; the download is slow
[[email protected] nfsdata]# docker tag heegor/nfs-subdir-external-provisioner:v4.0.0 reg.westos.org/library/nfs-subdir-external-provisioner:v4.0.0
[[email protected] nfsdata]# docker push reg.westos.org/library/nfs-subdir-external-provisioner:v4.0.0
# Copy the manifest contents directly from: https://github.com/kubernetes-retired/external-storage/tree/master/nfs-client/deploy
# Newer version: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/tree/master/deploy
[[email protected] volumes]# mkdir nfs-client
[[email protected] volumes]# cd nfs-client
[[email protected] nfs-client]# \vi nfs-client-provisioner.yaml
[[email protected] nfs-client]# cat nfs-client-provisioner.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: nfs-client-provisioner
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: nfs-client-provisioner
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: nfs-client-provisioner
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: nfs-client-provisioner
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: nfs-subdir-external-provisioner:v4.0.0
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: k8s-sigs.io/nfs-subdir-external-provisioner
- name: NFS_SERVER
value: 172.25.0.1
- name: NFS_PATH
value: /nfsdata
volumes:
- name: nfs-client-root
nfs:
server: 172.25.0.1
path: /nfsdata
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  # after a claim is deleted, its data is archived under /nfsdata on server1
[[email protected] nfs-client]# kubectl apply -f nfs-client-provisioner.yaml
[[email protected] nfs-client]# kubectl get pod  # the nfs-client-provisioner pod is created
[[email protected] nfs-client]# kubectl describe pod nfs-client-provisioner-...
[[email protected] nfs-client]# kubectl logs nfs-client-provisioner-...
[[email protected] nfs-client]# kubectl get sc
[[email protected] nfs-client]# kubectl describe sc managed-nfs-storage
# Clean environment: right now kubectl get pv and kubectl get pvc show nothing
[[email protected] nfs-client]# vim pvc.yaml
[[email protected] nfs-client]# cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim
spec:
storageClassName: managed-nfs-storage
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2Gi
[[email protected] nfs-client]# kubectl apply -f pvc.yaml
[[email protected] nfs-client]# kubectl get pvc  # test-claim exists and is Bound
[[email protected] nfs-client]# kubectl get pv  # a PV was also created automatically
[[email protected] nfsdata]# ls
default-test-claim-pvc....
[[email protected] nfsdata]# cd default-test-claim-pvc...
[[email protected] default-test-claim-pvc...]# echo www.westos.org > index.html
[[email protected] nfs-client]# vim pvc.yaml
[[email protected] nfs-client]# cat pvc.yaml  # add a pod
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim
spec:
storageClassName: managed-nfs-storage
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2Gi
---
kind: Pod
apiVersion: v1
metadata:
name: test-pod
spec:
containers:
- name: test-pod
image: myapp:v1
volumeMounts:
- name: nfs-pvc
mountPath: "/usr/share/nginx/html"
volumes:
- name: nfs-pvc
persistentVolumeClaim:
claimName: test-claim
[[email protected] nfs-client]# kubectl apply -f pvc.yaml
[[email protected] nfs-client]# kubectl get pod -o wide  # test-pod's IP: 10.244.22.10
[[email protected] nfs-client]# curl 10.244.22.10
www.westos.org
[[email protected] nfs-client]# cp pvc.yaml demo.yaml
[[email protected] nfs-client]# vim demo.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim-2
spec:
storageClassName: managed-nfs-storage
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2Gi
---
kind: Pod
apiVersion: v1
metadata:
name: test-pod-2
spec:
containers:
- name: test-pod
image: myapp:v1
volumeMounts:
- name: nfs-pvc
mountPath: "/usr/share/nginx/html"
volumes:
- name: nfs-pvc
persistentVolumeClaim:
claimName: test-claim-2
[[email protected] nfs-client]# kubectl apply -f demo.yaml
[[email protected] nfs-client]# kubectl get pvc  # two claims: test-claim and test-claim-2
[[email protected] nfs-client]# kubectl get pv  # two PVs, bound to default/test-claim and default/test-claim-2
[[email protected] nfsdata]# ls
default-test-claim-pvc....
[[email protected] nfsdata]# cd default-test-claim-2-pvc...
[[email protected] default-test-claim-2-pvc...]# echo www.rehat.org > index.html
[[email protected] nfs-client]# kubectl get pod -o wide  # test-pod-2's IP: 10.244.141.230
[[email protected] nfs-client]# curl 10.244.141.230
www.rehat.org
[[email protected] nfs-client]# kubectl delete -f pvc.yaml  # after the delete, the archived copy remains under /nfsdata on server1
# Next: consolidate everything into a dedicated namespace
[[email protected] nfs-client]# kubectl delete -f demo.yaml
[[email protected] nfs-client]# kubectl delete -f nfs-client-provisioner.yaml
[[email protected] nfs-client]# kubectl create namespace nfs-client-provisioner
[[email protected] nfs-client]# vim nfs-client-provisioner.yaml
%s/default/nfs-client-provisioner/g  # (run inside vim) switch every namespace reference
[[email protected] nfs-client]# kubectl apply -f nfs-client-provisioner.yaml
[[email protected] nfs-client]# kubectl get ns  # everything now lives in its own namespace
nfs-client-provisioner Active 58s
[[email protected] nfs-client]# kubectl apply -f pvc.yaml
[[email protected] nfs-client]# kubectl get pvc  # test-claim
[[email protected] nfs-client]# kubectl get pv
[[email protected] nfs-client]# kubectl get pod  # everything created, no problems
# A PVC that omits storageClassName cannot find a storage class automatically
## The default StorageClass
[[email protected] nfs-client]# vim demo.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim-2
spec:
#storageClassName: managed-nfs-storage
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2Gi
[[email protected] nfs-client]# kubectl patch storageclass managed-nfs-storage -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'  # patch it to be the default
[[email protected] nfs-client]# kubectl delete -f demo.yaml
[[email protected] nfs-client]# kubectl apply -f demo.yaml
[[email protected] nfs-client]# kubectl get pvc  # now binds normally
[[email protected] nfs-client]# kubectl get sc  # managed-nfs-storage is marked (default)
6. The StatefulSet controller
## Each pod gets its own independent data storage
[[email protected] volumes]# kubectl delete -f demo.yaml
[[email protected] volumes]# kubectl delete -f pvc.yaml
kubectl get pod,pv,pvc  # all empty now
[[email protected] volumes]# mkdir statefulset
[[email protected] volumes]# cd statefulset
[[email protected] statefulset]# vim service.yaml
apiVersion: v1
kind: Service
metadata:
name: nginx-svc
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
serviceName: "nginx-svc"
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: myapp:v1
ports:
- containerPort: 80
name: web
[[email protected] volumes]# kubectl apply -f service.yaml
[[email protected] volumes]# kubectl get svc  # nginx-svc exists
[[email protected] statefulset]# vim service.yaml
replicas: 6
[[email protected] statefulset]# kubectl apply -f service.yaml
[[email protected] statefulset]# kubectl get pod  # pods are created and deleted one by one, in order
[[email protected] statefulset]# kubectl describe svc nginx-svc
[[email protected] statefulset]# dig -t A nginx-svc.default.svc.cluster.local. @10.96.0.10
[[email protected] statefulset]# kubectl run demo --image=busyboxplus -it
/ # nslookup nginx-svc
/ # curl nginx-svc  # all reachable
/ # curl nginx-svc/hostname.html  # each pod has its own IP
/ # curl web-0.nginx-svc/hostname.html  # each pod gets a fixed DNS name of its own
[[email protected] statefulset]# vim service.yaml
replicas: 0  # 0 reclaims all pods; scaling is just changing this number
[[email protected] statefulset]# kubectl apply -f service.yaml
[[email protected] statefulset]# kubectl get pod  # gone
[[email protected] statefulset]# vim service.yaml
replicas: 2  # scale back up by changing the count
[[email protected] statefulset]# kubectl apply -f service.yaml
[[email protected] statefulset]# kubectl get pod  # web-0, web-1
[[email protected] statefulset]# kubectl attach demo -it
/ # nslookup nginx-svc
/ # curl nginx-svc  # all reachable
/ # curl nginx-svc/hostname.html  # each pod has its own IP
/ # curl web-0.nginx-svc/hostname.html  # each pod's fixed DNS name resolves again
Combining a StatefulSet with PV and PVC, every pod gets a volume of its own:
[[email protected] statefulset]# vim service.yaml
apiVersion: v1
kind: Service
metadata:
name: nginx-svc
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
serviceName: "nginx-svc"
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: myapp:v1
ports:
- containerPort: 80
name: web
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: www
spec:
storageClassName: managed-nfs-storage
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
[[email protected] statefulset]# kubectl delete -f service.yaml  # the StatefulSet must be deleted and recreated for volumeClaimTemplates to take effect
[[email protected] statefulset]# kubectl apply -f service.yaml
[[email protected] statefulset]# kubectl get pod
[[email protected] statefulset]# kubectl describe pod web-0
[[email protected] statefulset]# kubectl get pv
[[email protected] statefulset]# kubectl get pvc
[[email protected] nfsdata]# ls
default-test-claim-pvc....
default-test-claim-2-pvc...
default-www-web-0-pvc-....
default-www-web-1-pvc-....
[[email protected] nfsdata]# cd default-www-web-0-pvc-....
[[email protected] default-www-web-0-pvc-....]# echo web-0 > index.html
[[email protected] nfsdata]# cd default-www-web-1-pvc-....
[[email protected] default-www-web-1-pvc-....]# echo web-1 > index.html
[[email protected] statefulset]# kubectl get pod -o wide  # two IPs: web-0 10.244.22.18, web-1 10.244.141.237
[[email protected] statefulset]# kubectl attach demo -it
curl nginx-svc
web-0
curl nginx-svc
web-1
curl nginx-svc
curl web-0.nginx-svc
curl web-1.nginx-svc
[[email protected] statefulset]# vim service.yaml
replicas: 0
[[email protected] statefulset]# kubectl apply -f service.yaml
# All pods are reclaimed; the PVs and PVCs remain
[[email protected] statefulset]# kubectl get pod  # all pods reclaimed
[[email protected] statefulset]# vim service.yaml
replicas: 2
[[email protected] statefulset]# kubectl apply -f service.yaml
[[email protected] statefulset]# kubectl get pod
[[email protected] statefulset]# kubectl attach demo -it
/ # curl web-0.nginx-svc  # still the original content, not rebuilt; volumes are not recreated unless the controller itself is deleted
- StatefulSet numbers all of its Pods; the naming rule is <statefulset name>-<ordinal>, starting from 0.
- When a Pod is deleted and rebuilt, its network identity does not change: the Pod's topology is pinned as "name + ordinal", and every Pod gets a fixed, unique entry point, namely its own DNS record.
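Concretely, with the headless service above every replica is resolvable as <pod>.<service>.<namespace>.svc.cluster.local; a quick check from inside the cluster (sketch, assuming the busyboxplus demo pod is still running):
/ # nslookup web-0.nginx-svc.default.svc.cluster.local   # resolves directly to web-0's pod IP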
7. Deploying a MySQL master-slave cluster with StatefulSet
1) Scaling with kubectl
https://kubernetes.io/zh/docs/tasks/run-application/run-replicated-stateful-application/
[[email protected] ]# mkdir mysql
[[email protected] ]# cd mysql
[[email protected] mysql]# \vi configmap.yaml  # the file contains special characters, so use \vi to bypass the alias
apiVersion: v1
kind: ConfigMap
metadata:
name: mysql
labels:
app: mysql
data:
master.cnf: |
# Apply this config only on the master.
[mysqld]
log-bin
slave.cnf: |
# Apply this config only on slaves.
[mysqld]
super-read-only
[[email protected] mysql]# kubectl apply -f configmap.yaml
[[email protected] mysql]# kubectl describe cm mysql
[[email protected] mysql]# kubectl delete -f service.yaml
[[email protected] mysql]# kubectl delete pvc --all
[[email protected] mysql]# kubectl delete pod demo --force  # no pods remain at this point
[[email protected] mysql]# kubectl get cm
[[email protected] mysql]# kubectl delete cm cm1-config my-config my-config-2 my-config-3
[[email protected] mysql]# kubectl get cm  # only two left: the default one and mysql
[[email protected] mysql]# vim service.yaml
# Create two services: a headless one ("mysql") that gives every pod a stable DNS name for writes, and a normal ClusterIP one ("mysql-read") that load-balances reads
apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
app: mysql
spec:
ports:
- name: mysql
port: 3306
clusterIP: None
selector:
app: mysql
---
apiVersion: v1
kind: Service
metadata:
name: mysql-read
labels:
app: mysql
spec:
ports:
- name: mysql
port: 3306
selector:
app: mysql
[[email protected] mysql]# kubectl apply -f service.yaml
[[email protected] mysql]# kubectl get pod
[[email protected] mysql]# kubectl get svc
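With the two services in place, clients choose an endpoint by intent; a sketch of the convention (names follow from the manifests above):
mysql -h mysql-0.mysql   # writes: always the master, via its stable per-pod DNS name
mysql -h mysql-read      # reads: load-balanced across all ready replicas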
[[email protected]]# docker pull mysql:5.7  # version 5.7 is required
[[email protected]]# docker tag mysql:5.7 reg.westos.org/library/mysql:5.7
[[email protected]]# docker push reg.westos.org/library/mysql:5.7
# Create the first init container
free -m  # check capacity before setting the cpu and memory requests
[[email protected] mysql]# \vi statefulset.yaml
[[email protected] file_recv]# cat statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
serviceName: mysql
replicas: 1
template:
metadata:
labels:
app: mysql
spec:
initContainers:
- name: init-mysql
image: mysql:5.7
command:
- bash
- "-c"
- |
set -ex
# Generate mysql server-id from pod ordinal index.
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
echo [mysqld] > /mnt/conf.d/server-id.cnf
# Add an offset to avoid reserved server-id=0 value.
echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
# Copy appropriate conf.d files from config-map to emptyDir.
if [[ $ordinal -eq 0 ]]; then
cp /mnt/config-map/master.cnf /mnt/conf.d/
else
cp /mnt/config-map/slave.cnf /mnt/conf.d/
fi
volumeMounts:
- name: conf
mountPath: /mnt/conf.d
- name: config-map
mountPath: /mnt/config-map
- name: clone-mysql
image: xtrabackup:1.0
command:
- bash
- "-c"
- |
set -ex
# Skip the clone if data already exists.
[[ -d /var/lib/mysql/mysql ]] && exit 0
# Skip the clone on master (ordinal index 0).
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
[[ $ordinal -eq 0 ]] && exit 0
# Clone data from previous peer.
ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
# Prepare the backup.
xtrabackup --prepare --target-dir=/var/lib/mysql
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
containers:
- name: mysql
image: mysql:5.7
env:
- name: MYSQL_ALLOW_EMPTY_PASSWORD
value: "1"
ports:
- name: mysql
containerPort: 3306
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 500m
memory: 512Mi
livenessProbe:
exec:
command: ["mysqladmin", "ping"]
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
readinessProbe:
exec:
# Check we can execute queries over TCP (skip-networking is off).
command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
initialDelaySeconds: 5
periodSeconds: 2
timeoutSeconds: 1
- name: xtrabackup
image: xtrabackup:1.0
ports:
- name: xtrabackup
containerPort: 3307
command:
- bash
- "-c"
- |
set -ex
cd /var/lib/mysql
# Determine binlog position of cloned data, if any.
if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
# XtraBackup already generated a partial "CHANGE MASTER TO" query
# because we're cloning from an existing slave. (Need to remove the tailing semicolon!)
cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
# Ignore xtrabackup_binlog_info in this case (it's useless).
rm -f xtrabackup_slave_info xtrabackup_binlog_info
elif [[ -f xtrabackup_binlog_info ]]; then
# We're cloning directly from master. Parse binlog position.
[[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
rm -f xtrabackup_binlog_info xtrabackup_slave_info
echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
fi
# Check if we need to complete a clone by starting replication.
if [[ -f change_master_to.sql.in ]]; then
echo "Waiting for mysqld to be ready (accepting connections)"
until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
echo "Initializing replication from clone position"
mysql -h 127.0.0.1 \
-e "$(<change_master_to.sql.in), \
MASTER_HOST='mysql-0.mysql', \
MASTER_USER='root', \
MASTER_PASSWORD='', \
MASTER_CONNECT_RETRY=10; \
START SLAVE;" || exit 1
# In case of container restart, attempt this at-most-once.
mv change_master_to.sql.in change_master_to.sql.orig
fi
# Start a server to send backups when requested by peers.
exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
"xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 100m
memory: 100Mi
volumes:
- name: conf
emptyDir: {}
- name: config-map
configMap:
name: mysql
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 5Gi
[[email protected] mysql]# kubectl apply -f statefulset.yaml
[[email protected] mysql]# kubectl get pod  # mysql-0
[[email protected] mysql]# kubectl describe svc mysql-read
[[email protected] mysql]# kubectl describe pod mysql-0
[[email protected] mysql]# kubectl logs mysql-0
[[email protected] mysql]# kubectl logs mysql-0 -c init-mysql
[[email protected] mysql]# kubectl get pod -o wide  # the pod's IP: 10.244.22.21
[[email protected] mysql]# yum install mariadb -y
[[email protected] mysql]# mysql -h 10.244.22.21
show databases;
create database westos;
[[email protected] ]# docker pull xtrabackup:1.0
[[email protected] ]# docker tag xtrabackup:1.0 reg.westos.org/library/xtrabackup:1.0
[[email protected] ]# docker push reg.westos.org/library/xtrabackup:1.0
# The master keeps port 3307 open for peers to connect and clone
# Create the second init container
[[email protected] mysql]# kubectl delete -f statefulset.yaml
[[email protected] mysql]# vim statefulset.yaml
replicas: 2  # start two pods: mysql-0 and mysql-1
[[email protected] mysql]# kubectl apply -f statefulset.yaml
[[email protected] mysql]# kubectl get pod
mysql-0 2/2 Running 0 33s
mysql-1 2/2 Running 1 24s
[[email protected] mysql]# kubectl logs mysql-0 -c init-mysql  # copied master.cnf
[[email protected] mysql]# kubectl logs mysql-1 -c init-mysql  # copied slave.cnf
[[email protected] mysql]# kubectl logs mysql-0 -c clone-mysql  # the master already has its data, so the clone is skipped
# The master opens port 3307 for slaves to connect and clone
[[email protected] mysql]# kubectl logs mysql-1 -c clone-mysql
[[email protected] mysql]# vim statefulset.yaml
replicas: 3
[[email protected] mysql]# kubectl get pod
mysql-0 2/2 Running 0 11m
mysql-1 2/2 Running 1 10m
mysql-2 2/2 Running 1 24s
[[email protected] mysql]# kubectl logs mysql-2 -c clone-mysql  # mysql-2 clones from mysql-1, so it has slave.cnf
# mysql-1's port 3307 just waits; it becomes active once mysql-2 clones from it
[[email protected] mysql]# kubectl get svc mysql-read
[[email protected] mysql]# kubectl get pod -o wide  # note mysql-2's IP
[[email protected] mysql]# mysql -h <mysql-2's IP>
show databases;  # contains westos and xtrabackup
[[email protected] mysql]# kubectl describe svc mysql-read  # its ClusterIP is 10.107.162.137
[[email protected] mysql]# mysql -h 10.107.162.137
show databases;  # contains westos and xtrabackup
[[email protected] mysql]# kubectl logs mysql-0 -c xtrabackup|less
[[email protected] mysql]# kubectl logs mysql-1 -c xtrabackup|less
[[email protected] mysql]# kubectl logs mysql-2 -c xtrabackup|less
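As promised by this section's heading, the replica count can also be driven imperatively instead of editing the YAML each time; a sketch:
kubectl scale statefulset mysql --replicas=5   # new slaves clone and join one by one, in order
kubectl scale statefulset mysql --replicas=3   # the highest ordinals are removed first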