
Deploying MinIO on Kubernetes in distributed mode


Environment preparation

A fully deployed Kubernetes cluster, version 1.18.1
OS version: CentOS 7.2
Docker version: 1.13.1

172.22.21.77 dev-learn-77 master
172.22.21.78 dev-learn-78 slave
172.22.21.79 dev-learn-79 slave

[root@dev-learn-77 ~]# kubectl get node -o wide
NAME           STATUS   ROLES    AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
dev-learn-77   Ready    master   5d5h   v1.18.1   172.22.21.77   <none>        CentOS Linux 7 (Core)   3.10.0-1062.18.1.el7.x86_64   docker://1.13.1
dev-learn-78   Ready    <none>   5d     v1.18.1   172.22.21.78   <none>        CentOS Linux 7 (Core)   3.10.0-1062.18.1.el7.x86_64   docker://1.13.1
dev-learn-79   Ready    <none>   5d1h   v1.18.1   172.22.21.79   <none>        CentOS Linux 7 (Core)   3.10.0-1062.18.1.el7.x86_64   docker://1.13.1
[root@dev-learn-77 ~]# 

The host network (hostNetwork) is used.
Storage uses the local host filesystem.

Prepare the YAML files

  • minio-distributed-daemonset.yaml
    Note the MINIO_ACCESS_KEY and MINIO_SECRET_KEY environment variables: they become the login username and password and can be changed to any values you like (a Secret-based alternative is sketched after the service manifest below).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: minio
  labels:
    app: minio
spec:
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      # We only deploy minio to the specified nodes. Select your nodes by using `kubectl label node <hostname> minio-server=true`
      nodeSelector:
        minio-server: "true"
      # This is to maximize network performance, the headless service can be used to connect to a random host.
      hostNetwork: true
      # We're just using a hostpath. This path must be the same on all servers, and should be the largest, fastest block device you can fit.
      volumes:
      - name: storage
        hostPath:
          path: /mounts/minio1
      containers:
      - name: minio
        env:
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        image: minio/minio:RELEASE.2020-04-04T05-39-31Z
        # Unfortunately you must manually define each server. Perhaps autodiscovery via DNS can be implemented in the future.
        args:
        - server
        - http://dev-learn-7{7...9}/mnt/disk{1...2}/minio/minio1/data
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: storage
          mountPath: /mounts/minio1/
  • minio-distributed-headless-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: minio
  labels:
    app: minio
spec:
  publishNotReadyAddresses: true
  clusterIP: None
  ports:
    - port: 9000
      name: minio
  selector:
    app: minio
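
Optionally, instead of hard-coding the credentials in the DaemonSet, they can be read from a Kubernetes Secret. This is only a minimal sketch; the Secret name minio-credentials and its keys are my own choices, not part of the original manifests:

kubectl create secret generic minio-credentials \
  --from-literal=MINIO_ACCESS_KEY=minio \
  --from-literal=MINIO_SECRET_KEY=minio123

Then, in minio-distributed-daemonset.yaml, the two env entries would become:

        env:
        # Pull the credentials from the Secret instead of plain-text values.
        - name: MINIO_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: minio-credentials
              key: MINIO_ACCESS_KEY
        - name: MINIO_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: minio-credentials
              key: MINIO_SECRET_KEY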

Note: MinIO in distributed mode needs at least four drives in total, otherwise it will not start. So for
- http://dev-learn-7{7...9}/mnt/disk{1...2}/minio/minio1/data
the total drive count must be at least four; here I have 3 nodes × 2 disks = 6 drives.
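
For reference, MinIO expands the ellipsis notation itself, so the single args line above stands for these six endpoints (two drives on each of the three hosts):

http://dev-learn-77/mnt/disk1/minio/minio1/data
http://dev-learn-77/mnt/disk2/minio/minio1/data
http://dev-learn-78/mnt/disk1/minio/minio1/data
http://dev-learn-78/mnt/disk2/minio/minio1/data
http://dev-learn-79/mnt/disk1/minio/minio1/data
http://dev-learn-79/mnt/disk2/minio/minio1/data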

For disk mounting, I chose to bind-mount /mnt/disk1-2/minio/minio1/data/ onto /mounts/minio1/mnt/disk1-2/minio/minio1/data/, which makes it easy to map the host filesystem into the container. Run the following on every node:

[root@dev-learn-77 ~]# mkdir -p  /mounts/minio1/mnt/disk1/minio/minio1/data
[root@dev-learn-77 ~]# mkdir -p  /mounts/minio1/mnt/disk2/minio/minio1/data
[root@dev-learn-77 ~]# 
[root@dev-learn-77 ~]# mkdir -p  /mnt/disk1/minio/minio1/data/
[root@dev-learn-77 ~]# mkdir -p  /mnt/disk2/minio/minio1/data/
[root@dev-learn-77 ~]# 
[root@dev-learn-77 ~]# mount --bind /mnt/disk1/minio/minio1/data/ /mounts/minio1/mnt/disk1/minio/minio1/data/
[root@dev-learn-77 ~]# mount --bind /mnt/disk2/minio/minio1/data/ /mounts/minio1/mnt/disk2/minio/minio1/data/

Then persist the mounts across reboots by writing them to /etc/fstab:

echo "/mnt/disk1/minio/minio1/data    /mounts/minio1/mnt/disk1/minio/minio1/data    none    bind    0    0">>/etc/fstab
echo "/mnt/disk2/minio/minio1/data    /mounts/minio1/mnt/disk2/minio/minio1/data    none    bind    0    0">>/etc/fstab

Create the MinIO cluster

[root@dev-learn-77 minio]# kubectl label node dev-learn-77 minio-server=true
node/dev-learn-77 labeled
[root@dev-learn-77 minio]# kubectl label node dev-learn-78 minio-server=true
node/dev-learn-78 labeled
[root@dev-learn-77 minio]# kubectl label node dev-learn-79 minio-server=true
node/dev-learn-79 labeled
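
To confirm the label was applied before creating the workloads, the nodes can be listed by selector; this should return all three nodes:

[root@dev-learn-77 minio]# kubectl get node -l minio-server=true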

[root@dev-learn-77 minio]# kubectl create -f minio-distributed-headless-service.yaml 
service/minio created
[root@dev-learn-77 minio]# kubectl create -f minio-distributed-daemonset.yaml 
daemonset.apps/minio created
[root@dev-learn-77 minio]# kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP    5d5h
minio        ClusterIP   None         <none>        9000/TCP   43s
[root@dev-learn-77 minio]# kubectl get daemonset
NAME    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR       AGE
minio   2         2         1       2            1           minio-server=true   44s
[root@dev-learn-77 minio]# 
[root@dev-learn-77 minio]# kubectl get pod -o wide
NAME          READY   STATUS    RESTARTS   AGE    IP             NODE           NOMINATED NODE   READINESS GATES
minio-2cgbg   1/1     Running   0          109s   172.22.21.79   dev-learn-79   <none>           <none>
minio-cxdzl   1/1     Running   0          109s   172.22.21.78   dev-learn-78   <none>           <none>
[root@dev-learn-77 minio]# 

However, only nodes 78 and 79 were scheduled; no pod started on the master node 77.
By default, for safety, Kubernetes taints the master so that ordinary pods are not scheduled on it.
Remove that restriction with the following command:

[root@dev-learn-77 minio]# kubectl taint node dev-learn-77 node-role.kubernetes.io/master-
node/dev-learn-77 untainted
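
Removing the taint allows any ordinary pod to be scheduled on the master. A narrower alternative, not used in this article, is to keep the taint and instead add a toleration under spec.template.spec in minio-distributed-daemonset.yaml, roughly like this:

      # Allow the minio pods (and only them) to run on the tainted master node.
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule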

After that, all three pods are scheduled normally and run in distributed (cluster) mode:

[root@dev-learn-77 minio]# kubectl get pod -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP             NODE           NOMINATED NODE   READINESS GATES
minio-2cgbg   1/1     Running   0          11m     172.22.21.79   dev-learn-79   <none>           <none>
minio-5jzql   1/1     Running   0          6m45s   172.22.21.77   dev-learn-77   <none>           <none>
minio-cxdzl   1/1     Running   0          11m     172.22.21.78   dev-learn-78   <none>           <none>
[root@dev-learn-77 minio]# 
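
Because the pods use hostNetwork, port 9000 is bound directly on each node; you can verify this on any of the three hosts, for example with:

[root@dev-learn-77 minio]# ss -lntp | grep 9000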

Now point a browser at port 9000 on any of the three node IPs to reach the MinIO console.
The username and password are the minio/minio123 values set in the DaemonSet.
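Besides the browser, the deployment can also be checked from the command line with the MinIO client mc. A minimal sketch, assuming a reasonably recent mc release and an alias name myminio of my own choosing:

mc alias set myminio http://172.22.21.77:9000 minio minio123
mc admin info myminio    # should show all three servers and six drives online
mc mb myminio/test       # create a test bucket
mc ls myminio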

The reason I did not expose MinIO through a regular Kubernetes Service is that this environment is fairly bare-bones and has no external DNS, so even with a Service it would only be usable inside the cluster and could not be reached from outside; hence this workaround.

As for storage, the host filesystem is used here; using a PV/PVC instead would of course be even better.

Since no Service is used, load balancing and high availability have to be handled separately; that will be covered in the next article.
