Appendix 013. Deploying Rook for Kubernetes Persistent Storage
1 Rook Overview
1.1 Ceph Introduction
Ceph is a highly scalable distributed storage solution that provides object, file, and block storage. On each storage node you will find a file system holding Ceph storage objects and a Ceph OSD (Object Storage Daemon) process. A Ceph cluster also runs Ceph MON (monitor) daemons, which keep the cluster highly available.
More on Ceph: https://www.cnblogs.com/itzgr/category/1382602.html
1.2 Rook Introduction
Rook is an open-source cloud-native storage orchestrator: it provides a platform, framework, and support for diverse storage solutions so that they integrate natively with cloud-native environments. It currently focuses on file, block, and object storage services for cloud-native environments, and implements a self-managing, self-scaling, and self-healing distributed storage service.
Rook automates deployment, bootstrapping, configuration, provisioning, scaling, upgrades, migration, disaster recovery, monitoring, and resource management. To do all of this, Rook relies on the underlying container orchestration platform, such as Kubernetes or CoreOS.
Rook currently supports building Ceph, NFS, Minio Object Store, EdgeFS, Cassandra, and CockroachDB storage.
How Rook works:
- Rook provides volume plugins that extend the Kubernetes storage system, so that Pods can mount block devices and file systems managed by Rook through the kubelet agent.
- The Rook operator starts and monitors the entire underlying storage system, for example the Ceph MON and Ceph OSD pods, and also manages the CRDs, object stores, and file systems.
- The Rook agent runs as a pod on every Kubernetes node. Each agent pod configures a FlexVolume driver that plugs into the Kubernetes volume control framework; node-local operations such as attaching storage devices, mounting, formatting, and removing storage are all handled by this agent.
For more details, see the official sites:
https://rook.io
https://ceph.com/
1.3 Rook Architecture
The Rook architecture is shown below:
The architecture of Rook integrated with Kubernetes is shown below:
2 Rook Deployment
2.1 Preliminary Planning
Note: Kubernetes deployment itself is not covered in this lab; see 《附012.Kubeadm部署高可用Kubernetes》 for that.
Raw disk plan: each storage node (k8snode01 to k8snode03) provides an unused raw disk, sdb, for the Ceph OSDs.
2.2 Get the YAML
[root@k8smaster01 ~]# git clone https://github.com/rook/rook.git
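The master branch changes frequently; the manifests used in this article follow the Rook v1.1 documentation, so it may be safer to check out the matching release branch (the branch name release-1.1 is an assumption here; list the remote branches first to confirm it exists):
[root@k8smaster01 ~]# cd rook
[root@k8smaster01 rook]# git branch -r | grep release    # list available release branches
[root@k8smaster01 rook]# git checkout release-1.1        # assumption: use the branch matching the v1.1 docs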
2.3 Deploy the Rook Operator
This lab uses the three nodes k8snode01 to k8snode03 as storage nodes, so the following adjustments are required:
[root@k8smaster01 ceph]# kubectl taint node k8smaster01 node-role.kubernetes.io/master="":NoSchedule
[root@k8smaster01 ceph]# kubectl taint node k8smaster02 node-role.kubernetes.io/master="":NoSchedule
[root@k8smaster01 ceph]# kubectl taint node k8smaster03 node-role.kubernetes.io/master="":NoSchedule
[root@k8smaster01 ceph]# kubectl label nodes {k8snode01,k8snode02,k8snode03} ceph-osd=enabled
[root@k8smaster01 ceph]# kubectl label nodes {k8snode01,k8snode02,k8snode03} ceph-mon=enabled
[root@k8smaster01 ceph]# kubectl label nodes k8snode01 ceph-mgr=enabled
Note: in the current Rook version, the mgr can only run on a single node.
[root@k8smaster01 ~]# cd /root/rook/cluster/examples/kubernetes/ceph/
[root@k8smaster01 ceph]# kubectl create -f common.yaml
[root@k8smaster01 ceph]# kubectl create -f operator.yaml
Explanation: the above creates the required base resources (such as ServiceAccounts), and rook-ceph-operator starts a rook-ceph-agent and a rook-discover pod on every node.
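Before moving on, it may help to confirm that the operator, agent, and discover pods are all Running (the label selectors below are the ones used by the default operator.yaml; adjust them if you customized it):
[root@k8smaster01 ceph]# kubectl -n rook-ceph get pod -o wide
[root@k8smaster01 ceph]# kubectl -n rook-ceph get pod -l app=rook-ceph-agent
[root@k8smaster01 ceph]# kubectl -n rook-ceph get pod -l app=rook-discover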
2.4 Configure the Cluster
[root@k8smaster01 ceph]# vi cluster.yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.4-20190917
    allowUnsupported: false
  dataDirHostPath: /var/lib/rook
  skipUpgradeChecks: false
  mon:
    count: 3
    allowMultiplePerNode: false
  dashboard:
    enabled: true
    ssl: true
  monitoring:
    enabled: false
    rulesNamespace: rook-ceph
  network:
    hostNetwork: false
  rbdMirroring:
    workers: 0
  placement:                          # node affinity/tolerations to pin components to the labelled storage nodes
#    all:
#      nodeAffinity:
#        requiredDuringSchedulingIgnoredDuringExecution:
#          nodeSelectorTerms:
#          - matchExpressions:
#            - key: role
#              operator: In
#              values:
#              - storage-node
#      tolerations:
#      - key: storage-node
#        operator: Exists
    mon:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-mon
              operator: In
              values:
              - enabled
      tolerations:
      - key: ceph-mon
        operator: Exists
    osd:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-osd
              operator: In
              values:
              - enabled
      tolerations:
      - key: ceph-osd
        operator: Exists
    mgr:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-mgr
              operator: In
              values:
              - enabled
      tolerations:
      - key: ceph-mgr
        operator: Exists
  annotations:
  resources:
  removeOSDsIfOutAndSafeToRemove: false
  storage:
    useAllNodes: false                # do not use all nodes
    useAllDevices: false              # do not use all devices
    deviceFilter: sdb
    config:
      metadataDevice:
      databaseSizeMB: "1024"
      journalSizeMB: "1024"
    nodes:
    - name: "k8snode01"               # storage node hostname
      config:
        storeType: bluestore          # consume the raw disk as a BlueStore OSD
      devices:
      - name: "sdb"                   # use disk sdb
    - name: "k8snode02"
      config:
        storeType: bluestore
      devices:
      - name: "sdb"
    - name: "k8snode03"
      config:
        storeType: bluestore
      devices:
      - name: "sdb"
  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api
Note: for more CephCluster CRD options, see https://github.com/rook/rook/blob/master/Documentation/ceph-cluster-crd.md
and https://blog.gmem.cc/rook-based-k8s-storage-solution
2.5 Pull the Images
Because some registries may be unreachable from inside China, it is recommended to pull the following images in advance:
docker pull rook/ceph:master
docker pull quay.azk8s.cn/cephcsi/cephcsi:v1.2.2
docker pull quay.azk8s.cn/k8scsi/csi-node-driver-registrar:v1.1.0
docker pull quay.azk8s.cn/k8scsi/csi-provisioner:v1.4.0
docker pull quay.azk8s.cn/k8scsi/csi-attacher:v1.2.0
docker pull quay.azk8s.cn/k8scsi/csi-snapshotter:v1.2.2
docker tag quay.azk8s.cn/cephcsi/cephcsi:v1.2.2 quay.io/cephcsi/cephcsi:v1.2.2
docker tag quay.azk8s.cn/k8scsi/csi-node-driver-registrar:v1.1.0 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
docker tag quay.azk8s.cn/k8scsi/csi-provisioner:v1.4.0 quay.io/k8scsi/csi-provisioner:v1.4.0
docker tag quay.azk8s.cn/k8scsi/csi-attacher:v1.2.0 quay.io/k8scsi/csi-attacher:v1.2.0
docker tag quay.azk8s.cn/k8scsi/csi-snapshotter:v1.2.2 quay.io/k8scsi/csi-snapshotter:v1.2.2
docker rmi quay.azk8s.cn/cephcsi/cephcsi:v1.2.2
docker rmi quay.azk8s.cn/k8scsi/csi-node-driver-registrar:v1.1.0
docker rmi quay.azk8s.cn/k8scsi/csi-provisioner:v1.4.0
docker rmi quay.azk8s.cn/k8scsi/csi-attacher:v1.2.0
docker rmi quay.azk8s.cn/k8scsi/csi-snapshotter:v1.2.2
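A minimal sketch for running the pulls on every storage node at once, assuming the commands above are saved into a script named pull-images.sh (a hypothetical file name) and that passwordless SSH to the nodes is configured:
[root@k8smaster01 ~]# for node in k8snode01 k8snode02 k8snode03; do
    scp pull-images.sh ${node}:/tmp/pull-images.sh    # copy the pull/tag/rmi commands above
    ssh ${node} "bash /tmp/pull-images.sh"            # execute them on each node
  done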
2.6 Deploy the Cluster
[root@k8smaster01 ceph]# kubectl create -f cluster.yaml
[root@k8smaster01 ceph]# kubectl logs -f -n rook-ceph rook-ceph-operator-cb47c46bc-pszfh    # follow the deployment logs
[root@k8smaster01 ceph]# kubectl get pods -n rook-ceph -o wide    # this takes a while; some intermediate pods may come and go
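Once the pods settle, the overall state can also be checked through the CephCluster resource itself (a quick sanity check; the exact columns shown vary between Rook versions):
[root@k8smaster01 ceph]# kubectl -n rook-ceph get cephcluster rook-ceph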
Note: if the deployment fails, run the following on the master node: [root@k8smaster01 ceph]# kubectl delete -f ./
Then run the following cleanup on every node:
rm -rf /var/lib/rook                            # remove the Rook data directory
ls /dev/mapper/ceph-*                           # list any leftover ceph device-mapper entries
dmsetup ls
dmsetup remove_all                              # remove the device-mapper entries
dd if=/dev/zero of=/dev/sdb bs=512k count=1     # zero the beginning of the OSD disk
wipefs -af /dev/sdb                             # wipe all filesystem signatures
2.7 Deploy the Toolbox
The toolbox is a Rook utility container; the commands inside it are used to debug and test Rook, and ad-hoc Ceph test operations are generally run in this container.
[root@k8smaster01 ceph]# kubectl create -f toolbox.yaml
[root@k8smaster01 ceph]# kubectl -n rook-ceph get pod -l "app=rook-ceph-tools"
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-tools-59b8cccb95-9rl5l   1/1     Running   0          15s
2.8 Test Rook
[root@k8smaster01 ceph]# kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
[root@rook-ceph-tools-59b8cccb95-9rl5l /]# ceph status    # check the Ceph cluster status
[root@rook-ceph-tools-59b8cccb95-9rl5l /]# ceph osd status
[root@rook-ceph-tools-59b8cccb95-9rl5l /]# ceph df
[root@rook-ceph-tools-59b8cccb95-9rl5l /]# rados df
[root@rook-ceph-tools-59b8cccb95-9rl5l /]# ceph auth ls    # list all Ceph keyrings
[root@rook-ceph-tools-59b8cccb95-9rl5l /]# ceph version
ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus (stable)
Note: see 《008.RHCS-管理Ceph存储集群》 for more on Ceph administration. The toolbox also accepts standalone ceph commands such as ceph osd pool create ceph-test 512 to create a pool, but in a Rook-managed Kubernetes cluster it is not recommended to operate on the underlying Ceph directly, to avoid the state becoming inconsistent with what Kubernetes expects.
2.10 Copy the Key and Config
For convenience, copy the Ceph keyring and config to the master node as well, so that the Rook Ceph cluster can be inspected from a host outside Kubernetes.
[root@k8smaster01 ~]# kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') cat /etc/ceph/ceph.conf > /etc/ceph/ceph.conf
[root@k8smaster01 ~]# kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') cat /etc/ceph/keyring > /etc/ceph/keyring
[root@k8smaster01 ceph]# tee /etc/yum.repos.d/ceph.repo <<-'EOF'
[ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
EOF
[root@k8smaster01 ceph]# yum -y install ceph-common ceph-fuse    # install the Ceph client
[root@k8smaster01 ~]# ceph status
Note: the rpm-nautilus repository version should match the Ceph version observed in 2.8. For a Rook-based Ceph cluster on Kubernetes, managing the cluster directly with ceph commands is strongly discouraged, as it can lead to inconsistencies; use the Kubernetes-level workflow described in section 3, and restrict the ceph command to simple cluster inspection.
3 Ceph Block Storage
3.1 Create a StorageClass
Before provisioning block storage, a StorageClass and a storage pool must be created. Kubernetes needs these two resources to interact with Rook and allocate persistent volumes (PVs).
[root@k8smaster01 ceph]# kubectl create -f csi/rbd/storageclass.yaml
Explanation: the manifest below creates a storage pool named replicapool and a StorageClass named rook-ceph-block.
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
[root@k8smaster01 ceph]# kubectl get storageclasses.storage.k8s.io
NAME              PROVISIONER                  AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   8m44s
3.2 Create a PVC
[root@k8smaster01 ceph]# kubectl create -f csi/rbd/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  storageClassName: rook-ceph-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
[root@k8smaster01 ceph]# kubectl get pvc
[root@k8smaster01 ceph]# kubectl get pv
Explanation: this creates the PVC; its storageClassName is rook-ceph-block, the StorageClass backed by the Rook Ceph cluster.
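Optionally, the RBD image backing the newly bound PV can be listed from the toolbox as a quick sanity check (the image name is generated by the CSI driver and differs per cluster):
[root@k8smaster01 ceph]# kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- rbd ls -p replicapool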
3.3 Consume the Block Device
[root@k8smaster01 ceph]# vi rookpod01.yaml
apiVersion: v1
kind: Pod
metadata:
  name: rookpod01
spec:
  restartPolicy: OnFailure
  containers:
  - name: test-container
    image: busybox
    volumeMounts:
    - name: block-pvc
      mountPath: /var/test
    command: ['sh', '-c', 'echo "hello world" > /var/test/data; exit 0']
  volumes:
  - name: block-pvc
    persistentVolumeClaim:
      claimName: block-pvc
[root@k8smaster01 ceph]# kubectl create -f rookpod01.yaml
[root@k8smaster01 ceph]# kubectl get pod
NAME        READY   STATUS      RESTARTS   AGE
rookpod01   0/1     Completed   0          5m35s
Explanation: the pod above mounts the PVC created in 3.2 and writes a test file; wait for it to complete.
3.4 Test Persistence
[root@k8smaster01 ceph]# kubectl delete pods rookpod01    # delete rookpod01
[root@k8smaster01 ceph]# vi rookpod02.yaml
apiVersion: v1
kind: Pod
metadata:
  name: rookpod02
spec:
  restartPolicy: OnFailure
  containers:
  - name: test-container
    image: busybox
    volumeMounts:
    - name: block-pvc
      mountPath: /var/test
    command: ['sh', '-c', 'cat /var/test/data; exit 0']
  volumes:
  - name: block-pvc
    persistentVolumeClaim:
      claimName: block-pvc
[root@k8smaster01 ceph]# kubectl create -f rookpod02.yaml
[root@k8smaster01 ceph]# kubectl logs rookpod02 test-container
hello world
Explanation: rookpod02 mounts the same PVC and reads back the data written by rookpod01, verifying that the data persisted across pods.
Note: see 《003.RHCS-RBD块存储使用》 for more on Ceph block devices.
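If the block-storage test resources are no longer needed, they can be cleaned up before moving on (optional; because reclaimPolicy is Delete, removing the PVC also removes the PV and its backing RBD image):
[root@k8smaster01 ceph]# kubectl delete pod rookpod02
[root@k8smaster01 ceph]# kubectl delete pvc block-pvc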
4 Ceph Object Storage
4.1 Create a CephObjectStore
Before object storage can be provisioned, the corresponding object store must be created. The official default manifest below deploys a CephObjectStore.
[root@k8smaster01 ceph]# kubectl create -f object.yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    replicated:
      size: 3
  preservePoolsOnDelete: false
  gateway:
    type: s3
    sslCertificateRef:
    port: 80
    securePort:
    instances: 1
    placement:
    annotations:
    resources:
[root@k8smaster01 ceph]# kubectl -n rook-ceph get pod -l app=rook-ceph-rgw    # an RGW pod is created once the deployment completes
NAME                                        READY   STATUS    RESTARTS   AGE
rook-ceph-rgw-my-store-a-6bd6c797c4-7dzjr   1/1     Running   0          19s
4.2 Create a StorageClass
The official default manifest below deploys the StorageClass for object storage.
[root@k8smaster01 ceph]# kubectl create -f storageclass-bucket-delete.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-delete-bucket
provisioner: ceph.rook.io/bucket
reclaimPolicy: Delete
parameters:
  objectStoreName: my-store
  objectStoreNamespace: rook-ceph
  region: us-east-1
[root@k8smaster01 ceph]# kubectl get sc
4.3 Create a Bucket
The official default manifest below creates a bucket claim for object storage.
[root@k8smaster01 ceph]# kubectl create -f object-bucket-claim-delete.yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-delete-bucket
spec:
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-delete-bucket
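The bucket provisioner writes the connection details into a ConfigMap and a Secret named after the claim; these are what the next step reads from, and they can be inspected directly (listing the claim assumes the ObjectBucketClaim CRD installed by common.yaml):
[root@k8smaster01 ceph]# kubectl -n default get objectbucketclaim ceph-delete-bucket
[root@k8smaster01 ceph]# kubectl -n default get configmap,secret ceph-delete-bucket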
4.4 Configure Object Store Access
[root@k8smaster01 ceph]# kubectl -n default get cm ceph-delete-bucket -o yaml | grep BUCKET_HOST | awk '{print $2}'
rook-ceph-rgw-my-store.rook-ceph
[root@k8smaster01 ceph]# kubectl -n rook-ceph get svc rook-ceph-rgw-my-store
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
rook-ceph-rgw-my-store   ClusterIP   10.102.165.187   <none>        80/TCP    7m34s
[root@k8smaster01 ceph]# export AWS_HOST=$(kubectl -n default get cm ceph-delete-bucket -o yaml | grep BUCKET_HOST | awk '{print $2}')
[root@k8smaster01 ceph]# export AWS_ACCESS_KEY_ID=$(kubectl -n default get secret ceph-delete-bucket -o yaml | grep AWS_ACCESS_KEY_ID | awk '{print $2}' | base64 --decode)
[root@k8smaster01 ceph]# export AWS_SECRET_ACCESS_KEY=$(kubectl -n default get secret ceph-delete-bucket -o yaml | grep AWS_SECRET_ACCESS_KEY | awk '{print $2}' | base64 --decode)
[root@k8smaster01 ceph]# export AWS_ENDPOINT='10.102.165.187'
[root@k8smaster01 ceph]# echo '10.102.165.187 rook-ceph-rgw-my-store.rook-ceph' >> /etc/hosts
4.5 Test Access
[root@k8smaster01 ceph]# radosgw-admin bucket list    # list buckets
[root@k8smaster01 ceph]# yum --assumeyes install s3cmd    # install an S3 client
[root@k8smaster01 ceph]# echo "hello rook" > /tmp/rookobj    # create a test file
[root@k8smaster01 ceph]# s3cmd put /tmp/rookobj --no-ssl --host=${AWS_HOST} --host-bucket= s3://ceph-bkt-377bf96f-aea8-4838-82bc-2cb2c16cccfb/test.txt    # upload it to the bucket
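To confirm the upload, the object can be listed and downloaded again with the same options (the generated bucket name above differs per cluster; if the installed s3cmd version does not pick up the AWS_* environment variables set earlier, pass the credentials explicitly with --access_key and --secret_key):
[root@k8smaster01 ceph]# s3cmd ls s3://ceph-bkt-377bf96f-aea8-4838-82bc-2cb2c16cccfb --no-ssl --host=${AWS_HOST} --host-bucket=
[root@k8smaster01 ceph]# s3cmd get s3://ceph-bkt-377bf96f-aea8-4838-82bc-2cb2c16cccfb/test.txt /tmp/rookobj.get --no-ssl --host=${AWS_HOST} --host-bucket=
[root@k8smaster01 ceph]# cat /tmp/rookobj.get    # should print "hello rook"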
Note: for more on Rook object storage, such as creating users, see https://rook.io/docs/rook/v1.1/ceph-object.html.
5 Ceph File Storage
5.1 Create a CephFilesystem
CephFS support is not deployed by default. The official default manifest below deploys a shared filesystem.
[root@k8smaster01 ceph]# kubectl create -f filesystem.yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
  - failureDomain: host
    replicated:
      size: 3
  preservePoolsOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
    placement:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - rook-ceph-mds
          topologyKey: kubernetes.io/hostname
    annotations:
    resources:
[root@k8smaster01 ceph]# kubectl get cephfilesystems.ceph.rook.io -n rook-ceph
NAME   ACTIVEMDS   AGE
myfs   1           27s
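The new filesystem and its MDS daemons can also be checked from the toolbox and via the pod label used in the manifest above:
[root@k8smaster01 ceph]# kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph fs ls
[root@k8smaster01 ceph]# kubectl -n rook-ceph get pod -l app=rook-ceph-mds    # one active and one standby MDS pod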
5.2 Create a StorageClass
[root@k8smaster01 ceph]# kubectl create -f csi/cephfs/storageclass.yaml
The official default manifest below deploys the StorageClass for file storage.
[root@k8smaster01 ceph]# vi csi/cephfs/storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs
  pool: myfs-data0
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
mountOptions:
[root@k8smaster01 ceph]# kubectl get sc
NAME         PROVISIONER                     AGE
csi-cephfs   rook-ceph.cephfs.csi.ceph.com   10m
5.3 Create a PVC
[root@k8smaster01 ceph]# vi rookpvc03.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  storageClassName: csi-cephfs
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
[root@k8smaster01 ceph]# kubectl create -f rookpvc03.yaml
[root@k8smaster01 ceph]# kubectl get pv
[root@k8smaster01 ceph]# kubectl get pvc
5.4 Consume the PVC
[root@k8smaster01 ceph]# vi rookpod03.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: csicephfs-demo-pod
spec:
  containers:
  - name: web-server
    image: nginx
    volumeMounts:
    - name: mypvc
      mountPath: /var/lib/www/html
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: cephfs-pvc
      readOnly: false
[root@k8smaster01 ceph]# kubectl create -f rookpod03.yaml
[root@k8smaster01 ceph]# kubectl get pods
NAME                 READY   STATUS    RESTARTS   AGE
csicephfs-demo-pod   1/1     Running   0          24s
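To confirm that the CephFS volume is actually mounted inside the pod, a quick check such as the following can be used:
[root@k8smaster01 ceph]# kubectl exec csicephfs-demo-pod -- df -h /var/lib/www/html    # the mount source should point at the Ceph MONs
[root@k8smaster01 ceph]# kubectl exec csicephfs-demo-pod -- sh -c 'echo "hello cephfs" > /var/lib/www/html/index.html'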
6 Dashboard Setup
6.1 Deploy a NodePort Service
The dashboard was already enabled in step 2.4, but it is only exposed through a ClusterIP service. The official default manifest below exposes the dashboard externally through a NodePort service.
[root@k8smaster01 ceph]# kubectl create -f dashboard-external-https.yaml
[root@k8smaster01 ceph]# vi dashboard-external-https.yaml
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  ports:
  - name: dashboard
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: NodePort
[root@k8smaster01 ceph]# kubectl get svc -n rook-ceph
6.2 Verify Access
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath='{.data.password}' | base64 --decode    # get the initial password
Open in a browser: https://172.24.8.71:31097
Username: admin; password: the one retrieved above.
7 Cluster Administration
7.1 Modify the Configuration
The Ceph configuration parameters are generated when the cluster is created. To change them after deployment, proceed as follows:
[root@k8smaster01 ceph]# kubectl -n rook-ceph get configmap rook-config-override -o yaml    # view the override parameters
[root@k8snode02 ~]# cat /var/lib/rook/rook-ceph/rook-ceph.config    # the rendered config can also be checked on any node
[root@k8smaster01 ceph]# kubectl -n rook-ceph edit configmap rook-config-override -o yaml    # edit the parameters
……
apiVersion: v1
data:
  config: |
    [global]
    osd pool default size = 2
……
Then restart the Ceph components one by one:
[root@k8smaster01 ceph]# kubectl -n rook-ceph delete pod rook-ceph-mgr-a-5699bb7984-kpxgp
[root@k8smaster01 ceph]# kubectl -n rook-ceph delete pod rook-ceph-mon-a-85698dfff9-w5l8c
[root@k8smaster01 ceph]# kubectl -n rook-ceph delete pod rook-ceph-mgr-a-d58847d5-dj62p
[root@k8smaster01 ceph]# kubectl -n rook-ceph delete pod rook-ceph-mon-b-76559bf966-652nl
[root@k8smaster01 ceph]# kubectl -n rook-ceph delete pod rook-ceph-mon-c-74dd86589d-s84cz
Note: delete the ceph-mon and ceph-osd pods one by one, waiting until the Ceph cluster reports HEALTH_OK before deleting the next one.
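The health can be checked between restarts from the toolbox (same exec pattern as in 2.8):
[root@k8smaster01 ceph]# kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph health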
Note: see https://rook.io/docs/rook/v1.1/ for other Rook configuration parameters.
7.2 Create a Pool
To create pools in a Rook Ceph cluster, the Kubernetes way is recommended rather than running ceph commands in the toolbox.
The official default manifest below creates a pool.
[root@k8smaster01 ceph]# kubectl create -f pool.yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool2
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
  annotations:
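The new pool can be verified through the CRD, and read-only from the toolbox:
[root@k8smaster01 ceph]# kubectl -n rook-ceph get cephblockpools.ceph.rook.io
[root@k8smaster01 ceph]# kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph osd pool ls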
7.3 Delete a Pool
[root@k8smaster01 ceph]# kubectl delete -f pool.yaml
Note: for more on pool management, such as erasure-coded pools, see https://rook.io/docs/rook/v1.1/ceph-pool-crd.html.
7.4 Add an OSD Node
This step simulates adding the sdb disk of k8smaster01 as an OSD.
[root@k8smaster01 ceph]# kubectl taint node k8smaster01 node-role.kubernetes.io/master-    # allow pods to be scheduled on the node
[root@k8smaster01 ceph]# kubectl label nodes k8smaster01 ceph-osd=enabled    # set the label
[root@k8smaster01 ceph]# vi cluster.yaml    # append the k8smaster01 entry
……
    - name: "k8smaster01"
      config:
        storeType: bluestore
      devices:
      - name: "sdb"
……
[root@k8smaster01 ceph]# kubectl apply -f cluster.yaml
[root@k8smaster01 ceph]# kubectl -n rook-ceph get pod -o wide -w
[root@rook-ceph-tools-59b8cccb95-9rl5l /]# ceph osd tree    # run in the toolbox to confirm the new OSD appears
7.5 Remove an OSD Node
[root@k8smaster01 ceph]# kubectl label nodes k8smaster01 ceph-osd-    # remove the label
[root@k8smaster01 ceph]# vi cluster.yaml    # delete the following k8smaster01 entry
……
    - name: "k8smaster01"
      config:
        storeType: bluestore
      devices:
      - name: "sdb"
……
[root@k8smaster01 ceph]# kubectl apply -f cluster.yaml
[root@k8smaster01 ceph]# kubectl -n rook-ceph get pod -o wide -w
[root@k8smaster01 ceph]# rm -rf /var/lib/rook    # clean the Rook data directory on the removed node
7.6 Delete the Cluster
For a complete and graceful teardown of a Rook cluster, see https://github.com/rook/rook/blob/master/Documentation/ceph-teardown.md
7.7 Upgrade Rook
See: http://www.yangguanjun.com/2018/12/28/rook-ceph-practice-part2/
More official documentation: https://rook.github.io/docs/rook/v1.1/
Recommended posts: http://www.yangguanjun.com/archives/
https://sealyun.com/post/rook/