MinIO Distributed Object Storage


A distributed MinIO deployment needs at least 4 drives. These can be 4 servers with one drive each, or 2 servers with two drives each. See: https://docs.min.io/cn/deploy-minio-on-docker-swarm.html

Distributed MinIO can be deployed with Docker Compose, Docker Swarm, Kubernetes and other orchestrators. Docker Compose runs multiple instances on a single host (pseudo-distributed), which is only suitable for testing. Since no Kubernetes environment is available here, Docker Swarm is used.

There are 3 servers: one Docker Swarm manager and two workers.

Host IP      Role
172.16.0.5   manager
172.16.0.2   worker
172.16.0.8   worker
  • 1. On the 172.16.0.5 machine, run: docker swarm init --advertise-addr 172.16.0.5
  • 2. Once the swarm is initialized, the command prints a join command like the one below. Copy it and run it on both worker nodes (if the output is lost, it can be re-printed as sketched after this item):
docker swarm join \
  --token  SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
  172.16.0.5:2377
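
If the join command was not saved, the worker join token can be printed again at any time; a minimal sketch (run on the manager node):

# Re-print the full "docker swarm join ..." command for worker nodes
docker swarm join-token worker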
  • 3. On the manager node:
    View node information: docker node ls
    Add node labels: docker node update --label-add minio1=true (ID), where ID is a worker node ID returned by docker node ls (a label-verification sketch follows this item)
 docker node update --label-add minio1=true lzqc0e1lojlq64mtzhurkqojc
 docker node update --label-add minio2=true lzqc0e1lojlq64mtzhurkqojc
 docker node update --label-add minio3=true 9kyb2ztxmrk4uwyor3xxjysb6
 docker node update --label-add minio4=true 9kyb2ztxmrk4uwyor3xxjysb6
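
To confirm the labels were applied, each node's label map can be inspected; a minimal sketch using the example node IDs above:

 # Show the labels attached to each worker node
 docker node inspect --format '{{ .Spec.Labels }}' lzqc0e1lojlq64mtzhurkqojc
 docker node inspect --format '{{ .Spec.Labels }}' 9kyb2ztxmrk4uwyor3xxjysb6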


  • 4. Create Docker secrets for MinIO (a verification sketch follows this item):
echo "AKIAIOSFODNN7EXAMPLE" | docker secret create access_key -
echo "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" | docker secret create secret_key -
  • 5. Download the docker-compose-secrets.yaml deployment file:
wget https://raw.githubusercontent.com/minio/minio/master/docs/orchestration/docker-swarm/docker-compose-secrets.yaml
  • 6. Adjust the contents of the deployment file (very little needs changing; optionally modify volumes to bind-mount specific host disk directories):
version: '3.7'

services:
  minio1:
    image: minio/minio:RELEASE.2020-10-28T08-16-50Z
    hostname: minio1
    volumes:
      - minio1-data:/export
    ports:
      - "9001:9000"
    networks:
      - minio_distributed
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio1==true
    command: server http://minio{1...4}/export
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio2:
    image: minio/minio:RELEASE.2020-10-28T08-16-50Z
    hostname: minio2
    volumes:
      - minio2-data:/export
    ports:
      - "9002:9000"
    networks:
      - minio_distributed
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio2==true
    command: server http://minio{1...4}/export
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio3:
    image: minio/minio:RELEASE.2020-10-28T08-16-50Z
    hostname: minio3
    volumes:
      - minio3-data:/export
    ports:
      - "9003:9000"
    networks:
      - minio_distributed
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio3==true
    command: server http://minio{1...4}/export
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio4:
    image: minio/minio:RELEASE.2020-10-28T08-16-50Z
    hostname: minio4
    volumes:
      - minio4-data:/export
    ports:
      - "9004:9000"
    networks:
      - minio_distributed
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio4==true
    command: server http://minio{1...4}/export
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

volumes:
  minio1-data:
  minio2-data:
  minio3-data:
  minio4-data:

networks:
  minio_distributed:
    driver: overlay

secrets:
  secret_key:
    external: true
  access_key:
    external: true
  • 7. Start the cluster: docker stack deploy --compose-file=./docker-compose-secrets.yaml minio_stack
    Use docker service ls to see the 4 minio services (a liveness-check sketch follows this item).
    Run docker ps on both worker nodes; each worker server starts 2 minio instances.
    Run docker volume ls on both worker nodes to see where the minio containers store their files, e.g.:
    docker volume inspect minio_stack_minio1-data
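
Liveness can also be checked from the shell against the same endpoint the compose healthcheck calls; a minimal sketch, assuming the published ports 9001-9004 defined above:

# Each instance should return HTTP 200 when healthy
curl -f http://172.16.0.2:9001/minio/health/live
curl -f http://172.16.0.2:9002/minio/health/live
curl -f http://172.16.0.8:9003/minio/health/live
curl -f http://172.16.0.8:9004/minio/health/live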
  • 8. Use Nginx to proxy the 4 MinIO service instances.
    For installing Nginx, see: https://zhuyu.blog.csdn.net/article/details/103696133
    The nginx.conf configuration is as follows (a syntax-check/reload sketch follows the config):
    upstream minio-cluster {
       server 172.16.0.2:9001;
       server 172.16.0.2:9002;
       server 172.16.0.8:9003;
       server 172.16.0.8:9004;  
    }
    server {
        listen       9000;
        server_name  172.16.0.5;
        location / {
            proxy_pass http://minio-cluster;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            client_max_body_size 10m;
            client_body_buffer_size 256k;
            #Note: make sure to add the following 2 parameters
            proxy_buffering off;
            proxy_redirect off;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
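
After updating nginx.conf, the configuration can be validated and reloaded without restarting Nginx; a minimal sketch:

# Check the syntax, then reload the running Nginx
nginx -t
nginx -s reload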

Open 172.16.0.5:9000 to reach the MinIO service and work with files. As an example, a bucket named test was created, a directory was added, and a file was uploaded; then the data directories on the MinIO worker servers were checked for the file (the same operations can be done from the command line, as sketched below).
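The same bucket/upload flow can also be driven through the Nginx endpoint with the MinIO client (mc); a minimal sketch, assuming mc is installed and using the example keys created in step 4 (the alias name miniocluster and the file hello.txt are placeholders; older mc releases use mc config host add instead of mc alias set):

# Point mc at the Nginx front end
mc alias set miniocluster http://172.16.0.5:9000 AKIAIOSFODNN7EXAMPLE wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# Create the test bucket, upload a file and list it
mc mb miniocluster/test
mc cp ./hello.txt miniocluster/test/
mc ls miniocluster/test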

In the volume directory of each of the 4 minio instances the same structure is visible: every instance (every disk) appears to store the full data set. This achieves high availability, but it does not look like distributed storage of large data volumes (my configuration may be wrong; if any reader knows, please leave a comment).
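For reference, in distributed mode MinIO erasure-codes each object across the drives (with 4 drives the default is 2 data plus 2 parity shards), so every drive shows the same bucket/object directory layout even though it holds only shards. Actual per-server capacity and usage can be checked through the admin API; a minimal sketch, assuming the mc alias configured above:

# Show servers, drives and used capacity for the cluster
mc admin info miniocluster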

  • 9. Remove the distributed MinIO services (a one-pass volume cleanup sketch follows this item):
#Remove the stack
docker stack rm minio_stack

#Remove the volumes
docker volume ls
docker volume rm volume_name 
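
Since every volume created by the stack carries the minio_stack_ prefix, they can also be removed in one pass once the containers are gone; a minimal sketch (run it on each worker node, because volumes are local to the node that created them):

# Remove every volume whose name starts with minio_stack
docker volume ls -q -f name=minio_stack | xargs docker volume rm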
  • 10. Expanding the cluster is also straightforward; see the official docs:
To start distributed MinIO with 4 nodes and 4 drives per node, run the following command on all 4 nodes:
minio server http://192.168.1.11/export1 http://192.168.1.11/export2 \
               http://192.168.1.11/export3 http://192.168.1.11/export4 \
               http://192.168.1.12/export1 http://192.168.1.12/export2 \
               http://192.168.1.12/export3 http://192.168.1.12/export4 \
               http://192.168.1.13/export1 http://192.168.1.13/export2 \
               http://192.168.1.13/export3 http://192.168.1.13/export4 \
               http://192.168.1.14/export1 http://192.168.1.14/export2 \
               http://192.168.1.14/export3 http://192.168.1.14/export4
MinIO supports expanding an existing cluster (erasure-coded mode) by specifying a new set of servers on the command line, like this:
minio server http://host{1...32}/export{1...32} http://host{33...64}/export{1...32}
  • 11. Mount specific host disk directories for MinIO. Create the directories on both worker nodes; if a new disk is being mounted, format it first and then mount it under / (a disk-preparation sketch follows the directory list):
    worker1 node: mkdir /minio1_data ; mkdir /minio2_data
    worker2 node: mkdir /minio3_data ; mkdir /minio4_data
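
A minimal sketch of preparing a brand-new disk on worker1, assuming the device is /dev/vdb (substitute the real device name; formatting erases the disk):

# Format the new disk and mount it at the directory referenced in the compose file
mkfs.ext4 /dev/vdb
mkdir -p /minio1_data
mount /dev/vdb /minio1_data
# Keep the mount across reboots
echo '/dev/vdb /minio1_data ext4 defaults 0 0' >> /etc/fstab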
version: '3.7'

services:
  minio1:
    image: minio/minio:RELEASE.2020-10-28T08-16-50Z
    hostname: minio1
    volumes:
      # Bind-mount the host directory /minio1_data to /export inside the container
      - /minio1_data:/export
    ports:
      - "9001:9000"
    networks:
      - minio_distributed
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio1==true
    command: server http://minio{1...4}/export
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio2:
    image: minio/minio:RELEASE.2020-10-28T08-16-50Z
    hostname: minio2
    volumes:
      - /minio2_data:/export
    ports:
      - "9002:9000"
    networks:
      - minio_distributed
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio2==true
    command: server http://minio{1...4}/export
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio3:
    image: minio/minio:RELEASE.2020-10-28T08-16-50Z
    hostname: minio3
    volumes:
      - /minio3_data:/export
    ports:
      - "9003:9000"
    networks:
      - minio_distributed
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio3==true
    command: server http://minio{1...4}/export
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio4:
    image: minio/minio:RELEASE.2020-10-28T08-16-50Z
    hostname: minio4
    volumes:
      - /minio4_data:/export
    ports:
      - "9004:9000"
    networks:
      - minio_distributed
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio4==true
    command: server http://minio{1...4}/export
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

networks:
  minio_distributed:
    driver: overlay

secrets:
  secret_key:
    external: true
  access_key:
    external: true
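
After switching to bind mounts, the stack can be removed and redeployed with the modified file (the old named volumes from step 7 can be cleaned up as in step 9); a minimal sketch:

# Redeploy the stack using the bind-mount version of the compose file
docker stack rm minio_stack
docker stack deploy --compose-file=./docker-compose-secrets.yaml minio_stack
docker service ls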