
Setting up a MongoDB 4.4 sharded cluster on CentOS 8

Contents: I. Introduction (1. Sharding; 2. Why use sharding; 3. Overview of how sharding works); II. Preparing the environment; III. Cluster configuration and deployment; IV. Testing the sharding

I. Introduction

1. Sharding

MongoDB supports another kind of cluster besides replication: sharding, which is designed to cope with large growth in data volume.
When MongoDB has to store massive amounts of data, a single machine may not be able to hold it all, or may not deliver acceptable read/write throughput. By splitting the data across several machines, the database system can store and process more data.

2. Why use sharding

  • All write operations have to be replicated to the primary node
  • Latency-sensitive queries are still served by the primary
  • A single replica set is limited in size (12 members in older versions, 50 since MongoDB 3.0)
  • Memory can run out when the request volume is very large
  • Local disk space becomes insufficient
  • Vertical scaling (buying bigger machines) is expensive

3. Overview of how sharding works

Sharding splits the data into chunks and stores those chunks on different servers. MongoDB shards data automatically: when a client sends a read or write request, it first goes through the mongos routing layer; mongos fetches the sharding metadata from the config servers and then decides which shard the request should be sent to.

(Figure: MongoDB sharded cluster architecture — mongos routing layer, config servers, shards)
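As a concrete illustration of that routing (a sketch, assuming the tydb.tyuser collection sharded on {"id":1} that is built later in this article), a query that includes the shard key can be sent to a single shard, while one that does not is scattered to all shards and merged by mongos:

mongos> use tydb
#query by the shard key: the winning plan stage is typically SINGLE_SHARD
mongos> db.tyuser.find({"id":100}).explain().queryPlanner.winningPlan.stage
#query without the shard key: mongos broadcasts it to every shard and merges the results (typically SHARD_MERGE)
mongos> db.tyuser.find({"name":"ty100"}).explain().queryPlanner.winningPlan.stage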

II. Preparing the environment

  • Operating system: CentOS Linux release 8.2.2004 (Core)
  • MongoDB version: v4.4.10
  • IP: 10.0.0.56  Instances: mongos (30000), config (27017), shard1 primary (40001), shard2 arbiter (40002), shard3 secondary (40003)
  • IP: 10.0.0.57  Instances: mongos (30000), config (27017), shard1 secondary (40001), shard2 primary (40002), shard3 arbiter (40003)
  • IP: 10.0.0.58  Instances: mongos (30000), config (27017), shard1 arbiter (40001), shard2 secondary (40002), shard3 primary (40003)
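If firewalld is enabled on these hosts, the ports listed above must be reachable between the three servers. A minimal sketch using firewall-cmd (assuming the default firewalld zone; adapt to your own security policy):

#open the config, mongos and shard ports between the cluster hosts
firewall-cmd --permanent --add-port=27017/tcp --add-port=30000/tcp --add-port=40001-40003/tcp
firewall-cmd --reload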

III. Cluster configuration and deployment

1. Create the required directories (run the same commands on all three servers)

mkdir -p /mongo/{data,logs,apps,run}
mkdir -p /mongo/data/shard{1,2,3}
mkdir -p /mongo/data/config
mkdir -p /mongo/apps/conf

2. Install MongoDB and create the configuration files (same on all three servers)

MongoDB can be installed by downloading the tarball and setting up environment variables. Here the yum repository is configured instead and MongoDB is installed through yum; afterwards each process is started by running mongod (or mongos) with the appropriate configuration file.
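A minimal sketch of that yum setup, assuming the official MongoDB 4.4 repository for RHEL/CentOS 8 (adjust the baseurl if you use an internal mirror):

#add the MongoDB 4.4 yum repository
cat > /etc/yum.repos.d/mongodb-org-4.4.repo <<'EOF'
[mongodb-org-4.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/8/mongodb-org/4.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.4.asc
EOF

#install the server, shell, mongos and tools
yum install -y mongodb-org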

(1) The mongo-config configuration file

vim /mongo/apps/conf/mongo-config.yml

systemLog:
  destination: file
  #log file path
  path: "/mongo/logs/mongo-config.log"
  logAppend: true
storage:
  journal:
    enabled: true
  #data directory
  dbPath: "/mongo/data/config"
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 12
processManagement:
  fork: true
  pidFilePath: "/mongo/run/mongo-config.pid"
net:
  #this can also be set to the host's own IP
  bindIp: 0.0.0.0
  #port
  port: 27017
setParameter:
  enableLocalhostAuthBypass: true
replication:
  #replica set name
  replSetName: "mgconfig"
sharding:
  #run as a config server
  clusterRole: configsvr

(2) The mongo-shard1 configuration file

vim /mongo/apps/conf/mongo-shard1.yml

systemLog:
  destination: file
  path: "/mongo/logs/mongo-shard1.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/mongo/data/shard1"
processManagement:
  fork: true
  pidFilePath: "/mongo/run/mongo-shard1.pid"
net:
  bindIp: 0.0.0.0
  #note: set the correct port
  port: 40001
setParameter:
  enableLocalhostAuthBypass: true
replication:
  #replica set name
  replSetName: "shard1"
sharding:
  #run as a shard server
  clusterRole: shardsvr

(3) The mongo-shard2 configuration file

vim /mongo/apps/conf/mongo-shard2.yml

systemLog:
  destination: file
  path: "/mongo/logs/mongo-shard2.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/mongo/data/shard2"
processManagement:
  fork: true
  pidFilePath: "/mongo/run/mongo-shard2.pid"
net:
  bindIp: 0.0.0.0
  #note: set the correct port
  port: 40002
setParameter:
  enableLocalhostAuthBypass: true
replication:
  #replica set name
  replSetName: "shard2"
sharding:
  #run as a shard server
  clusterRole: shardsvr

(4) The mongo-shard3 configuration file

vim /mongo/apps/conf/mongo-shard3.yml

systemLog:
  destination: file
  path: "/mongo/logs/mongo-shard3.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/mongo/data/shard3"
processManagement:
  fork: true
  pidFilePath: "/mongo/run/mongo-shard3.pid"
net:
  bindIp: 0.0.0.0
  #note: set the correct port
  port: 40003
setParameter:
  enableLocalhostAuthBypass: true
replication:
  #replica set name
  replSetName: "shard3"
sharding:
  #run as a shard server
  clusterRole: shardsvr

(5) The mongo-route configuration file

vim /mongo/apps/conf/mongo-route.yml

systemLog:
  destination: file
  #note: set the correct log path
  path: "/mongo/logs/mongo-route.log"
  logAppend: true
processManagement:
  fork: true
  pidFilePath: "/mongo/run/mongo-route.pid"
net:
  bindIp: 0.0.0.0
  #note: set the correct port
  port: 30000
setParameter:
  enableLocalhostAuthBypass: true
replication:
  localPingThresholdMs: 15
sharding:
  #point at the config server replica set
  configDB: mgconfig/10.0.0.56:27017,10.0.0.57:27017,10.0.0.58:27017

3. Start the mongo-config service (run on all three servers)

#stop the mongod service installed earlier via yum
systemctl stop mongod

cd /mongo/apps/conf/
mongod --config mongo-config.yml

#check that port 27017 is listening
netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1129/sshd
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1131/cupsd
tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      2514/sshd: root@pts
tcp        0      0 127.0.0.1:6011          0.0.0.0:*               LISTEN      4384/sshd: root@pts
tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      4905/mongod
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd
tcp6       0      0 :::22                   :::*                    LISTEN      1129/sshd
tcp6       0      0 ::1:631                 :::*                    LISTEN      1131/cupsd
tcp6       0      0 ::1:6010                :::*                    LISTEN      2514/sshd: root@pts
tcp6       0      0 ::1:6011                :::*                    LISTEN      4384/sshd: root@pts
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd

4. Connect to one instance and initialize the replica set

#connect with the mongo shell
mongo 10.0.0.56:27017

#build the initial replica set config; the name "mgconfig" must match the replSetName in the configuration file
config={_id:"mgconfig",members:[ 
  {_id:0,host:"10.0.0.56:27017"},
  {_id:1,host:"10.0.0.57:27017"},
  {_id:2,host:"10.0.0.58:27017"}, 
]}

rs.initiate(config)
#an "ok" value of 1 means the initialization succeeded
{
	"ok" : 1,
	"$gleStats" : {
		"lastOpTime" : Timestamp(1634710950, 1),
		"electionId" : ObjectId("000000000000000000000000")
	},
	"lastCommittedOpTime" : Timestamp(0, 0)
}

#check the status
rs.status()

{
	"set" : "mgconfig",
	"date" : ISODate("2021-10-20T06:24:24.277Z"),
	"myState" : 1,
	"term" : NumberLong(1),
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"configsvr" : true,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"majorityVoteCount" : 2,
	"writeMajorityCount" : 2,
	"votingMembersCount" : 3,
	"writableVotingMembersCount" : 3,
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1634711063, 1),
			"t" : NumberLong(1)
		},
		"lastCommittedWallTime" : ISODate("2021-10-20T06:24:23.811Z"),
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1634711063, 1),
			"t" : NumberLong(1)
		},
		"readConcernMajorityWallTime" : ISODate("2021-10-20T06:24:23.811Z"),
		"appliedOpTime" : {
			"ts" : Timestamp(1634711063, 1),
			"t" : NumberLong(1)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1634711063, 1),
			"t" : NumberLong(1)
		},
		"lastAppliedWallTime" : ISODate("2021-10-20T06:24:23.811Z"),
		"lastDurableWallTime" : ISODate("2021-10-20T06:24:23.811Z")
	},
	"lastStableRecoveryTimestamp" : Timestamp(1634711021, 1),
	"electionCandidateMetrics" : {
		"lastElectionReason" : "electionTimeout",
		"lastElectionDate" : ISODate("2021-10-20T06:22:41.335Z"),
		"electionTerm" : NumberLong(1),
		"lastCommittedOpTimeAtElection" : {
			"ts" : Timestamp(0, 0),
			"t" : NumberLong(-1)
		},
		"lastSeenOpTimeAtElection" : {
			"ts" : Timestamp(1634710950, 1),
			"t" : NumberLong(-1)
		},
		"numVotesNeeded" : 2,
		"priorityAtElection" : 1,
		"electionTimeoutMillis" : NumberLong(10000),
		"numCatchUpOps" : NumberLong(0),
		"newTermStartDate" : ISODate("2021-10-20T06:22:41.509Z"),
		"wMajorityWriteAvailabilityDate" : ISODate("2021-10-20T06:22:42.322Z")
	},
	"members" : [
		{
			"_id" : 0,
			"name" : "10.0.0.56:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 530,
			"optime" : {
				"ts" : Timestamp(1634711063, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2021-10-20T06:24:23Z"),
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "",
			"electionTime" : Timestamp(1634710961, 1),
			"electionDate" : ISODate("2021-10-20T06:22:41Z"),
			"configVersion" : 1,
			"configTerm" : 1,
			"self" : true,
			"lastHeartbeatMessage" : ""
		},
		{
			"_id" : 1,
			"name" : "10.0.0.57:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 113,
			"optime" : {
				"ts" : Timestamp(1634711061, 1),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1634711061, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2021-10-20T06:24:21Z"),
			"optimeDurableDate" : ISODate("2021-10-20T06:24:21Z"),
			"lastHeartbeat" : ISODate("2021-10-20T06:24:22.487Z"),
			"lastHeartbeatRecv" : ISODate("2021-10-20T06:24:22.906Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncSourceHost" : "10.0.0.56:27017",
			"syncSourceId" : 0,
			"infoMessage" : "",
			"configVersion" : 1,
			"configTerm" : 1
		},
		{
			"_id" : 2,
			"name" : "10.0.0.58:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 113,
			"optime" : {
				"ts" : Timestamp(1634711062, 1),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1634711062, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2021-10-20T06:24:22Z"),
			"optimeDurableDate" : ISODate("2021-10-20T06:24:22Z"),
			"lastHeartbeat" : ISODate("2021-10-20T06:24:23.495Z"),
			"lastHeartbeatRecv" : ISODate("2021-10-20T06:24:22.514Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "",
			"syncSourceHost" : "10.0.0.56:27017",
			"syncSourceId" : 0,
			"infoMessage" : "",
			"configVersion" : 1,
			"configTerm" : 1
		}
	],
	"ok" : 1,
	"$gleStats" : {
		"lastOpTime" : Timestamp(1634710950, 1),
		"electionId" : ObjectId("7fffffff0000000000000001")
	},
	"lastCommittedOpTime" : Timestamp(1634711063, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1634711063, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1634711063, 1)
}

5. Deploy the shard1 replica set: start the shard1 instances (run on all three servers)

cd /mongo/apps/conf
mongod --config mongo-shard1.yml

#check that port 40001 is listening
netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:40001           0.0.0.0:*               LISTEN      5742/mongod
tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      5443/mongod
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1139/sshd
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1133/cupsd
tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      2490/sshd: root@pts
tcp        0      0 127.0.0.1:6011          0.0.0.0:*               LISTEN      5189/sshd: root@pts
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd
tcp6       0      0 :::22                   :::*                    LISTEN      1139/sshd
tcp6       0      0 ::1:631                 :::*                    LISTEN      1133/cupsd
tcp6       0      0 ::1:6010                :::*                    LISTEN      2490/sshd: root@pts
tcp6       0      0 ::1:6011                :::*                    LISTEN      5189/sshd: root@pts

6. Connect to one instance and create the replica set

#connect with the mongo shell
mongo 10.0.0.56:40001

#build the initial replica set config
config={_id:"shard1",members:[
  {_id:0,host:"10.0.0.56:40001",priority:2},
  {_id:1,host:"10.0.0.57:40001",priority:1},
  {_id:2,host:"10.0.0.58:40001",arbiterOnly:true},
]}

rs.initiate(config)
#check the status
rs.status()
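
For a quicker view of which member ended up as primary, secondary, or arbiter than reading the full rs.status() document, a small one-liner can be run in the same shell (a sketch that only uses fields rs.status() always returns):

#print each member's role (PRIMARY / SECONDARY / ARBITER)
rs.status().members.forEach(function(m) { print(m.name + " -> " + m.stateStr) })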

7. Deploy the shard2 replica set: start the shard2 instances (run on all three servers)

cd /mongo/apps/conf
mongod --config mongo-shard2.yml

#check that port 40002 is listening
netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:40001           0.0.0.0:*               LISTEN      5742/mongod
tcp        0      0 0.0.0.0:40002           0.0.0.0:*               LISTEN      5982/mongod
tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      5443/mongod
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1139/sshd
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1133/cupsd
tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      2490/sshd: root@pts
tcp        0      0 127.0.0.1:6011          0.0.0.0:*               LISTEN      5189/sshd: root@pts
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd
tcp6       0      0 :::22                   :::*                    LISTEN      1139/sshd
tcp6       0      0 ::1:631                 :::*                    LISTEN      1133/cupsd
tcp6       0      0 ::1:6010                :::*                    LISTEN      2490/sshd: root@pts
tcp6       0      0 ::1:6011                :::*                    LISTEN      5189/sshd: root@pts

8. Connect to the second node and create the replica set
Because the planned primary of shard2 is 10.0.0.57:40002 and an arbiter cannot hold data, connect to the 10.0.0.57 host.

#connect with the mongo shell
mongo 10.0.0.57:40002

#build the initial replica set config
config={_id:"shard2",members:[
  {_id:0,host:"10.0.0.56:40002",arbiterOnly:true},
  {_id:1,host:"10.0.0.57:40002",priority:2},
  {_id:2,host:"10.0.0.58:40002",priority:1},
]}

rs.initiate(config)
#check the status
rs.status()

9. Deploy the shard3 replica set: start the shard3 instances (run on all three servers)

cd /mongo/apps/conf/
mongod --config mongo-shard3.yml

#check that port 40003 is listening
netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:40001           0.0.0.0:*               LISTEN      5742/mongod
tcp        0      0 0.0.0.0:40002           0.0.0.0:*               LISTEN      5982/mongod
tcp        0      0 0.0.0.0:40003           0.0.0.0:*               LISTEN      6454/mongod
tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      5443/mongod
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1139/sshd
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1133/cupsd
tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      2490/sshd: root@pts
tcp        0      0 127.0.0.1:6011          0.0.0.0:*               LISTEN      5189/sshd: root@pts
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd
tcp6       0      0 :::22                   :::*                    LISTEN      1139/sshd
tcp6       0      0 ::1:631                 :::*                    LISTEN      1133/cupsd
tcp6       0      0 ::1:6010                :::*                    LISTEN      2490/sshd: root@pts
tcp6       0      0 ::1:6011                :::*                    LISTEN      5189/sshd: root@pts

10. Connect to the third node (10.0.0.58:40003) and create the replica set

#connect with the mongo shell
mongo 10.0.0.58:40003

#build the initial replica set config
config={_id:"shard3",members:[
  {_id:0,host:"10.0.0.56:40003",priority:1},
  {_id:1,host:"10.0.0.57:40003",arbiterOnly:true},
  {_id:2,host:"10.0.0.58:40003",priority:2},
]}

rs.initiate(config)
#check the status
rs.status()

11. Deploy the mongos router

#the router is started with the mongos binary, not mongod
mongos --config mongo-route.yml

#connect to mongos and add the shards to the cluster
mongo 10.0.0.56:30000

sh.addshard("shard1/10.0.0.56:40001,10.0.0.57:40001,10.0.0.58:40001")
sh.addshard("shard2/10.0.0.56:40002,10.0.0.57:40002,10.0.0.58:40002")
sh.addshard("shard3/10.0.0.56:40003,10.0.0.57:40003,10.0.0.58:40003")

#check the sharding status
sh.status()
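
Besides sh.status(), the shards registered with the cluster can also be listed with the listShards command against mongos (a quick check; output omitted here):

#list the shards registered in the cluster
mongos> db.adminCommand({ listShards: 1 })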

IV. Testing the sharding

#list all databases
mongos> show dbs
admin   0.000GB
config  0.003GB
#switch to the config database
use config
#the default chunk size is 64MB (db.settings.find() shows the current value); to make splitting easier to observe in this test, lower it to 1MB
db.settings.save({"_id":"chunksize","value":1})
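
To confirm the new value was saved, the same settings collection can be queried (the document should now show value 1):

#verify the chunk size setting
mongos> db.settings.find({"_id":"chunksize"})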

Simulate writing data

#loop-insert 60,000 documents into the tyuser collection of the tydb database
mongos> use tydb
mongos> show tables
mongos> for(i=1;i<=60000;i++){db.tyuser.insert({"id":i,"name":"ty"+i})}

Enable sharding for the database

mongos> sh.enableSharding("tydb")
#"ok" returns 1
{
	"ok" : 1,
	"operationTime" : Timestamp(1634716737, 2),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1634716737, 2),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
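
Note that tyuser already contains the 60,000 test documents, and MongoDB only allows sharding a non-empty collection if an index on the intended shard key already exists. Create it before running shardCollection (a minimal sketch):

mongos> use tydb
#the shard key index must exist before a non-empty collection can be sharded
mongos> db.tyuser.createIndex({"id":1})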

Enable sharding for the collection

mongos> sh.shardCollection("tydb.tyuser",{"id":1})

Check how the data is sharded

mongos> sh.status()
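
To see how the 60,000 documents were actually spread across shard1/shard2/shard3, the shell's getShardDistribution() helper can be called on the collection (a sketch; it prints per-shard document and chunk counts):

mongos> use tydb
mongos> db.tyuser.getShardDistribution()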

Start, stop, and check the balancer

#start the balancer
mongos> sh.startBalancer() #or sh.setBalancerState(true)
#stop the balancer
mongos> sh.stopBalancer() #or sh.setBalancerState(false)
#check whether the balancer is enabled
mongos> sh.getBalancerState() #returns false when the balancer is disabled

This concludes the walkthrough of setting up a MongoDB 4.4 sharded cluster on CentOS 8.