MongoDB Cluster Mode: Sharding
Sharding stores data across multiple machines. MongoDB uses sharding to support deployments with very large data sets and high-throughput operations.
Database systems with large data sets or high-throughput applications can strain the capacity of a single server. For example, high query rates can exhaust a server's CPU capacity, and working sets larger than the system's RAM stress the I/O capacity of its disk drives.
There are two approaches to accommodating system growth: vertical and horizontal scaling.
Vertical scaling increases the capacity of a single server, for example by using a more powerful CPU, adding more RAM, or adding storage. The limits of available technology may mean a single machine cannot be made powerful enough for a given workload, and cloud providers impose hard ceilings based on the hardware configurations they offer. Vertical scaling therefore has a practical maximum.
Horizontal scaling divides the system's data set and load over multiple servers, adding servers as needed to increase capacity. Even though each individual machine may not be especially fast or capacious, each one handles only a subset of the overall workload, which can be more efficient than a single high-speed, high-capacity server. Expanding the deployment's capacity only requires adding servers as needed, which can cost less than high-end hardware for a single machine. The trade-off is greater complexity in infrastructure and deployment maintenance.
MongoDB supports horizontal scaling through sharding.
I. Components
- shard: Each shard holds a subset of the sharded data, and each shard can be deployed as a replica set. Collections do not have to be sharded; unsharded data is stored on the database's primary shard. In this deployment, each shard is a 3-member replica set.
- mongos: mongos acts as the query router, providing the interface between client applications and the sharded cluster. Multiple mongos routers can be deployed; this deployment uses one (or more) mongos.
- config servers: Config servers store the cluster's metadata and configuration settings. Since MongoDB 3.4, config servers must be deployed as a 3-member replica set.
Note: applications and clients must connect to mongos to interact with the cluster's data; never connect to an individual shard to perform reads or writes.
Shard replica set architecture diagram:
Config server replica set architecture diagram:
Sharding Strategies
1. Hashed sharding
- Data is partitioned across the sharded cluster using a hashed index: the hash of a single field's value serves as the shard key value.
- When resolving queries that use the hashed index, MongoDB computes the hash values automatically; applications do not need to compute them.
- Distribution based on hashed values spreads data more evenly, especially for data sets whose shard key changes monotonically.
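For example, a collection can be sharded with a hashed key from mongos like this (the "test.events" namespace and the "device_id" field are hypothetical placeholders, not part of the deployment below):

sh.enableSharding("test")                                       // enable sharding for the database first
sh.shardCollection("test.events", { "device_id" : "hashed" })   // the hash of device_id becomes the shard key value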
2. Ranged sharding
- Data is divided into ranges based on the shard key values, and each chunk is assigned a range of shard key values.
- mongos can route operations only to the shards that hold the required data.
- Shard key planning matters here: a poorly chosen key can leave the data unevenly distributed.
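A ranged shard key is declared the same way, but with an ascending (1) index specification instead of "hashed" (again, the namespace and field are hypothetical):

sh.shardCollection("test.orders", { "order_date" : 1 })   // documents with adjacent order_date values land in the same chunk

Keeping adjacent key values in the same chunk is what lets mongos target queries to specific shards, but it is also why a monotonically increasing key can funnel all inserts into a single chunk on one shard.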
II. Deployment
1. Environment
| Server name | IP address | OS version | MongoDB version | Config server port | Shard server 1 port | Shard server 2 port | Shard server 3 port | Role |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mongo1.example.net | 10.10.18.10 | CentOS 7.5 | 4.0 | 27027 (primary) | 27017 (primary) | 27018 (arbiter) | 27019 (secondary) | config server and shard servers |
| mongo2.example.net | 10.10.18.11 | CentOS 7.5 | 4.0 | 27027 (secondary) | 27017 (secondary) | 27018 (primary) | 27019 (arbiter) | config server and shard servers |
| mongo3.example.net | 10.10.18.12 | CentOS 7.5 | 4.0 | 27027 (secondary) | 27017 (arbiter) | 27018 (secondary) | 27019 (primary) | config server and shard servers |
| mongos.example.net | 192.168.11.10 | CentOS 7.5 | 4.0 | mongos port: 27017 | - | - | - | mongos |
The official recommendation is to use logical DNS names, so in this document the hostname-to-IP mappings are written into /etc/hosts on every server.
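Based on the table above, each server's /etc/hosts would contain entries like:

10.10.18.10     mongo1.example.net
10.10.18.11     mongo2.example.net
10.10.18.12     mongo3.example.net
192.168.11.10   mongos.example.net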
2. Install MongoDB
Install MongoDB on the four servers in this environment; for the installation steps, see: MongoDB installation.
Create the directories the deployment needs:
mkdir -p /data/mongodb/data/{configserver,shard1,shard2,shard3}
mkdir -p /data/mongodb/{log,pid}
3. Create the config server replica set
Configuration file on all three servers: /data/mongodb/configserver.conf
On mongo1.example.net:
systemLog:
  destination: file
  path: "/data/mongodb/log/configserver.log"
  logAppend: true
storage:
  dbPath: "/data/mongodb/data/configserver"
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2
processManagement:
  fork: true
  pidFilePath: "/data/mongodb/pid/configserver.pid"
net:
  bindIp: mongo1.example.net
  port: 27027
replication:
  replSetName: cs0
sharding:
  clusterRole: configsvr
On mongo2.example.net and mongo3.example.net the file is identical, except that bindIp is set to the server's own hostname (mongo2.example.net and mongo3.example.net respectively).
Start the config server on all three servers:
mongod -f /data/mongodb/configserver.conf
Connect to one of the config servers:
mongo --host mongo1.example.net --port 27027
Result:
MongoDB shell version v4.0.10
connecting to: mongodb://mongo1.example.net:27027/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("1a4d4252-11d0-40bb-90da-f144692be88d") }
MongoDB server version: 4.0.10
Server has startup warnings:
2019-06-14T14:28:56.013+0800 I CONTROL  [initandlisten]
2019-06-14T14:28:56.013+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-06-14T14:28:56.013+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-06-14T14:28:56.013+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-06-14T14:28:56.013+0800 I CONTROL  [initandlisten]
2019-06-14T14:28:56.013+0800 I CONTROL  [initandlisten]
2019-06-14T14:28:56.013+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-06-14T14:28:56.013+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-06-14T14:28:56.014+0800 I CONTROL  [initandlisten]
2019-06-14T14:28:56.014+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-06-14T14:28:56.014+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-06-14T14:28:56.014+0800 I CONTROL  [initandlisten]
>
Configure the replica set:
rs.initiate( { _id: "cs0", configsvr: true, members: [ { _id : 0, host : "mongo1.example.net:27027" }, { _id : 1, host : "mongo2.example.net:27027" }, { _id : 2, host : "mongo3.example.net:27027" } ] } )
Result:
{
    "ok" : 1,
    "operationTime" : Timestamp(1560493908, 1),
    "$gleStats" : {
        "lastOpTime" : Timestamp(1560493908, 1),
        "electionId" : ObjectId("000000000000000000000000")
    },
    "lastCommittedOpTime" : Timestamp(0, 0),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1560493908, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
Check the replica set status:
cs0:PRIMARY> rs.status()
Result: the output shows the three servers as 1 primary and 2 secondaries.
{
    "set" : "cs0",
    "date" : ISODate("2019-06-14T06:33:31.348Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "configsvr" : true,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1560494006, 1), "t" : NumberLong(1) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1560494006, 1), "t" : NumberLong(1) },
        "appliedOpTime" : { "ts" : Timestamp(1560494006, 1), "t" : NumberLong(1) },
        "durableOpTime" : { "ts" : Timestamp(1560494006, 1), "t" : NumberLong(1) }
    },
    "lastStableCheckpointTimestamp" : Timestamp(1560493976, 1),
    "members" : [
        { "_id" : 0, "name" : "mongo1.example.net:27027", "health" : 1, "state" : 1, "stateStr" : "PRIMARY",
          "uptime" : 277, "optime" : { "ts" : Timestamp(1560494006, 1), "t" : NumberLong(1) },
          "optimeDate" : ISODate("2019-06-14T06:33:26Z"), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1,
          "infoMessage" : "could not find member to sync from", "electionTime" : Timestamp(1560493919, 1),
          "electionDate" : ISODate("2019-06-14T06:31:59Z"), "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" },
        { "_id" : 1, "name" : "mongo2.example.net:27027", "health" : 1, "state" : 2, "stateStr" : "SECONDARY",
          "uptime" : 102, "optime" : { "ts" : Timestamp(1560494006, 1), "t" : NumberLong(1) },
          "optimeDurable" : { "ts" : Timestamp(1560494006, 1), "t" : NumberLong(1) },
          "optimeDate" : ISODate("2019-06-14T06:33:26Z"), "optimeDurableDate" : ISODate("2019-06-14T06:33:26Z"),
          "lastHeartbeat" : ISODate("2019-06-14T06:33:29.385Z"), "lastHeartbeatRecv" : ISODate("2019-06-14T06:33:29.988Z"),
          "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "mongo1.example.net:27027",
          "syncSourceHost" : "mongo1.example.net:27027", "syncSourceId" : 0, "infoMessage" : "", "configVersion" : 1 },
        { "_id" : 2, "name" : "mongo3.example.net:27027", "health" : 1, "state" : 2, "stateStr" : "SECONDARY",
          "uptime" : 102, "optime" : { "ts" : Timestamp(1560494006, 1), "t" : NumberLong(1) },
          "optimeDurable" : { "ts" : Timestamp(1560494006, 1), "t" : NumberLong(1) },
          "optimeDate" : ISODate("2019-06-14T06:33:26Z"), "optimeDurableDate" : ISODate("2019-06-14T06:33:26Z"),
          "lastHeartbeat" : ISODate("2019-06-14T06:33:29.384Z"), "lastHeartbeatRecv" : ISODate("2019-06-14T06:33:29.868Z"),
          "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "mongo1.example.net:27027",
          "syncSourceHost" : "mongo1.example.net:27027", "syncSourceId" : 0, "infoMessage" : "", "configVersion" : 1 }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1560494006, 1),
    "$gleStats" : { "lastOpTime" : Timestamp(1560493908, 1), "electionId" : ObjectId("7fffffff0000000000000001") },
    "lastCommittedOpTime" : Timestamp(1560494006, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1560494006, 1),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}
Create an administrative user:
use admin
db.createUser(
  {
    user: "myuseradmin",
    pwd: "abc123",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase" ]
  }
)
Enable login authentication and internal authentication for the config servers
Internal authentication uses a keyfile. Create the keyfile on one of the servers:
openssl rand -base64 756 > /data/mongodb/keyfile
chmod 400 /data/mongodb/keyfile
Distribute this keyfile to the other three servers, keeping its permissions at 400.
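One way to distribute it, assuming root SSH access between the hosts:

scp /data/mongodb/keyfile mongo2.example.net:/data/mongodb/
scp /data/mongodb/keyfile mongo3.example.net:/data/mongodb/
scp /data/mongodb/keyfile mongos.example.net:/data/mongodb/
# then run on each server: chmod 400 /data/mongodb/keyfile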
Enable authentication in the /data/mongodb/configserver.conf configuration file:
security:
  keyFile: "/data/mongodb/keyfile"
  clusterAuthMode: "keyfile"
  authorization: "enabled"
Then shut down the two secondaries first, and then the primary:
mongod -f /data/mongodb/configserver.conf --shutdown
Start the primary first, then the two secondaries:
mongod -f /data/mongodb/configserver.conf
Log in to mongo with the username and password:
mongo --host mongo1.example.net --port 27027 -u myuseradmin --authenticationDatabase "admin" -p 'abc123'
Note: the user was created without cluster-management privileges, so at this point it can log in and list all databases, but it cannot view the cluster's status.
cs0:PRIMARY> rs.status()
{
    "operationTime" : Timestamp(1560495861, 1),
    "ok" : 0,
    "errmsg" : "not authorized on admin to execute command { replSetGetStatus: 1.0, lsid: { id: UUID(\"59dd4dc0-b34f-43b9-a341-a2f43ec1dcfa\") }, $clusterTime: { clusterTime: Timestamp(1560495849, 1), signature: { hash: BinData(0, a51371ec5aa54bb1b05ed9342bfbf03cbd87f2d9), keyId: 6702270356301807629 } }, $db: \"admin\" }",
    "code" : 13,
    "codeName" : "Unauthorized",
    "$gleStats" : { "lastOpTime" : Timestamp(0, 0), "electionId" : ObjectId("7fffffff0000000000000002") },
    "lastCommittedOpTime" : Timestamp(1560495861, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1560495861, 1),
        "signature" : { "hash" : BinData(0,"3uktpxxyu8wi1tys+u5vgewuega="), "keyId" : NumberLong("6702270356301807629") }
    }
}
cs0:PRIMARY> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
Grant the user cluster administration privileges:
use admin
db.system.users.find()                               // view the current user information
db.grantRolesToUser("myuseradmin", ["clusterAdmin"])
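To confirm the grant took effect, the user document can also be inspected (a quick check; output omitted):

use admin
db.getUser("myuseradmin")    // the "roles" array should now include "clusterAdmin"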
View the cluster information:
cs0:PRIMARY> rs.status()
{
    "set" : "cs0",
    "date" : ISODate("2019-06-14T07:18:20.223Z"),
    "myState" : 1,
    "term" : NumberLong(2),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "configsvr" : true,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1560496690, 1), "t" : NumberLong(2) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1560496690, 1), "t" : NumberLong(2) },
        "appliedOpTime" : { "ts" : Timestamp(1560496690, 1), "t" : NumberLong(2) },
        "durableOpTime" : { "ts" : Timestamp(1560496690, 1), "t" : NumberLong(2) }
    },
    "lastStableCheckpointTimestamp" : Timestamp(1560496652, 1),
    "members" : [
        { "_id" : 0, "name" : "mongo1.example.net:27027", "health" : 1, "state" : 1, "stateStr" : "PRIMARY",
          "uptime" : 1123, "optime" : { "ts" : Timestamp(1560496690, 1), "t" : NumberLong(2) },
          "optimeDate" : ISODate("2019-06-14T07:18:10Z"), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1,
          "infoMessage" : "", "electionTime" : Timestamp(1560495590, 1), "electionDate" : ISODate("2019-06-14T06:59:50Z"),
          "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" },
        { "_id" : 1, "name" : "mongo2.example.net:27027", "health" : 1, "state" : 2, "stateStr" : "SECONDARY",
          "uptime" : 1113, "optime" : { "ts" : Timestamp(1560496690, 1), "t" : NumberLong(2) },
          "optimeDurable" : { "ts" : Timestamp(1560496690, 1), "t" : NumberLong(2) },
          "optimeDate" : ISODate("2019-06-14T07:18:10Z"), "optimeDurableDate" : ISODate("2019-06-14T07:18:10Z"),
          "lastHeartbeat" : ISODate("2019-06-14T07:18:18.974Z"), "lastHeartbeatRecv" : ISODate("2019-06-14T07:18:19.142Z"),
          "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "mongo1.example.net:27027",
          "syncSourceHost" : "mongo1.example.net:27027", "syncSourceId" : 0, "infoMessage" : "", "configVersion" : 1 },
        { "_id" : 2, "name" : "mongo3.example.net:27027", "health" : 1, "state" : 2, "stateStr" : "SECONDARY",
          "uptime" : 1107, "optime" : { "ts" : Timestamp(1560496690, 1), "t" : NumberLong(2) },
          "optimeDurable" : { "ts" : Timestamp(1560496690, 1), "t" : NumberLong(2) },
          "optimeDate" : ISODate("2019-06-14T07:18:10Z"), "optimeDurableDate" : ISODate("2019-06-14T07:18:10Z"),
          "lastHeartbeat" : ISODate("2019-06-14T07:18:18.999Z"), "lastHeartbeatRecv" : ISODate("2019-06-14T07:18:18.998Z"),
          "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "mongo2.example.net:27027",
          "syncSourceHost" : "mongo2.example.net:27027", "syncSourceId" : 1, "infoMessage" : "", "configVersion" : 1 }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1560496690, 1),
    "$gleStats" : {
        "lastOpTime" : { "ts" : Timestamp(1560496631, 1), "t" : NumberLong(2) },
        "electionId" : ObjectId("7fffffff0000000000000002")
    },
    "lastCommittedOpTime" : Timestamp(1560496690, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1560496690, 1),
        "signature" : { "hash" : BinData(0,"lhivw7weo81npti2imw16rean84="), "keyId" : NumberLong("6702270356301807629") }
    }
}
4. Deploy shard server 1 (shard1) and its replica set
Configuration file on all three servers: /data/mongodb/shard1.conf
On mongo1.example.net:
systemLog:
  destination: file
  path: "/data/mongodb/log/shard1.log"
  logAppend: true
storage:
  dbPath: "/data/mongodb/data/shard1"
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2
processManagement:
  fork: true
  pidFilePath: "/data/mongodb/pid/shard1.pid"
net:
  bindIp: mongo1.example.net
  port: 27017
replication:
  replSetName: "shard1"
sharding:
  clusterRole: shardsvr
On mongo2.example.net and mongo3.example.net the file is identical, except that bindIp is set to the server's own hostname.
Start the shard on all three servers:
mongod -f /data/mongodb/shard1.conf
Connect to the shard's replica set on the primary server:
mongo --host mongo1.example.net --port 27017
Result:
MongoDB shell version v4.0.10
connecting to: mongodb://mongo1.example.net:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("91e76384-cdae-411f-ab88-b7a8bd4555d1") }
MongoDB server version: 4.0.10
Server has startup warnings:
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten]
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten]
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten]
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten]
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-06-14T15:32:39.243+0800 I CONTROL  [initandlisten]
>
Configure the replica set:
rs.initiate( { _id : "shard1", members: [ { _id : 0, host : "mongo1.example.net:27017",priority:2 }, { _id : 1, host : "mongo2.example.net:27017",priority:1 }, { _id : 2, host : "mongo3.example.net:27017",arbiteronly:true } ] } )
Note: the higher a member's priority value, the more likely it is to be elected primary.
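To double-check how the members ended up configured, rs.conf() can be inspected from the shell, for example:

rs.conf().members.forEach(function (m) {
    // print host, priority, and arbiterOnly for each member
    print(m.host, "priority:", m.priority, "arbiterOnly:", m.arbiterOnly)
})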
Check the replica set status:
shard1:PRIMARY> rs.status()
{
    "set" : "shard1",
    "date" : ISODate("2019-06-20T01:33:21.809Z"),
    "myState" : 1,
    "term" : NumberLong(2),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1560994393, 1), "t" : NumberLong(2) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1560994393, 1), "t" : NumberLong(2) },
        "appliedOpTime" : { "ts" : Timestamp(1560994393, 1), "t" : NumberLong(2) },
        "durableOpTime" : { "ts" : Timestamp(1560994393, 1), "t" : NumberLong(2) }
    },
    "lastStableCheckpointTimestamp" : Timestamp(1560994373, 1),
    "members" : [
        { "_id" : 0, "name" : "mongo1.example.net:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY",
          "uptime" : 43, "optime" : { "ts" : Timestamp(1560994393, 1), "t" : NumberLong(2) },
          "optimeDate" : ISODate("2019-06-20T01:33:13Z"), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1,
          "infoMessage" : "could not find member to sync from", "electionTime" : Timestamp(1560994371, 1),
          "electionDate" : ISODate("2019-06-20T01:32:51Z"), "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" },
        { "_id" : 1, "name" : "mongo2.example.net:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY",
          "uptime" : 36, "optime" : { "ts" : Timestamp(1560994393, 1), "t" : NumberLong(2) },
          "optimeDurable" : { "ts" : Timestamp(1560994393, 1), "t" : NumberLong(2) },
          "optimeDate" : ISODate("2019-06-20T01:33:13Z"), "optimeDurableDate" : ISODate("2019-06-20T01:33:13Z"),
          "lastHeartbeat" : ISODate("2019-06-20T01:33:19.841Z"), "lastHeartbeatRecv" : ISODate("2019-06-20T01:33:21.164Z"),
          "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "mongo1.example.net:27017",
          "syncSourceHost" : "mongo1.example.net:27017", "syncSourceId" : 0, "infoMessage" : "", "configVersion" : 1 },
        { "_id" : 2, "name" : "mongo3.example.net:27017", "health" : 1, "state" : 7, "stateStr" : "ARBITER",
          "uptime" : 32, "lastHeartbeat" : ISODate("2019-06-20T01:33:19.838Z"), "lastHeartbeatRecv" : ISODate("2019-06-20T01:33:20.694Z"),
          "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "", "syncSourceHost" : "",
          "syncSourceId" : -1, "infoMessage" : "", "configVersion" : 1 }
    ],
    "ok" : 1
}
Result: the output shows the three servers as 1 primary, 1 secondary, and 1 arbiter.
Create an administrative user:
use admin
db.createUser(
  {
    user: "myuseradmin",
    pwd: "abc123",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase", "clusterAdmin" ]
  }
)
Enable login authentication and internal authentication for shard1 (in /data/mongodb/shard1.conf):
security:
  keyFile: "/data/mongodb/keyfile"
  clusterAuthMode: "keyfile"
  authorization: "enabled"
Then shut down the arbiter, the secondary, and the primary, in that order:
mongod -f /data/mongodb/shard1.conf --shutdown
Start them again in order: the primary first, then the secondary and the arbiter:
mongod -f /data/mongodb/shard1.conf
Log in to mongo with the username and password:
mongo --host mongo1.example.net --port 27017 -u myuseradmin --authenticationDatabase "admin" -p 'abc123'
5. Deploy shard server 2 (shard2) and its replica set
Configuration file on all three servers: /data/mongodb/shard2.conf
On mongo1.example.net:
systemLog:
  destination: file
  path: "/data/mongodb/log/shard2.log"
  logAppend: true
storage:
  dbPath: "/data/mongodb/data/shard2"
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2
processManagement:
  fork: true
  pidFilePath: "/data/mongodb/pid/shard2.pid"
net:
  bindIp: mongo1.example.net
  port: 27018
replication:
  replSetName: "shard2"
sharding:
  clusterRole: shardsvr
On mongo2.example.net and mongo3.example.net the file is identical, except that bindIp is set to the server's own hostname.
Start the shard on all three servers:
mongod -f /data/mongodb/shard2.conf
Connect to the shard replica set on its primary server:
mongo --host mongo2.example.net --port 27018
Configure the replica set (note: the roles of the three servers have changed):
rs.initiate( { _id : "shard2", members: [ { _id : 0, host : "mongo1.example.net:27018",arbiteronly:true }, { _id : 1, host : "mongo2.example.net:27018",priority:2 }, { _id : 2, host : "mongo3.example.net:27018",priority:1 } ] } )
Check the replica set status:
shard2:PRIMARY> rs.status()
{
    "set" : "shard2",
    "date" : ISODate("2019-06-20T01:59:08.996Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1560995943, 1), "t" : NumberLong(1) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1560995943, 1), "t" : NumberLong(1) },
        "appliedOpTime" : { "ts" : Timestamp(1560995943, 1), "t" : NumberLong(1) },
        "durableOpTime" : { "ts" : Timestamp(1560995943, 1), "t" : NumberLong(1) }
    },
    "lastStableCheckpointTimestamp" : Timestamp(1560995913, 1),
    "members" : [
        { "_id" : 0, "name" : "mongo1.example.net:27018", "health" : 1, "state" : 7, "stateStr" : "ARBITER",
          "uptime" : 107, "lastHeartbeat" : ISODate("2019-06-20T01:59:08.221Z"), "lastHeartbeatRecv" : ISODate("2019-06-20T01:59:07.496Z"),
          "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "", "syncSourceHost" : "",
          "syncSourceId" : -1, "infoMessage" : "", "configVersion" : 1 },
        { "_id" : 1, "name" : "mongo2.example.net:27018", "health" : 1, "state" : 1, "stateStr" : "PRIMARY",
          "uptime" : 412, "optime" : { "ts" : Timestamp(1560995943, 1), "t" : NumberLong(1) },
          "optimeDate" : ISODate("2019-06-20T01:59:03Z"), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1,
          "infoMessage" : "could not find member to sync from", "electionTime" : Timestamp(1560995852, 1),
          "electionDate" : ISODate("2019-06-20T01:57:32Z"), "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" },
        { "_id" : 2, "name" : "mongo3.example.net:27018", "health" : 1, "state" : 2, "stateStr" : "SECONDARY",
          "uptime" : 107, "optime" : { "ts" : Timestamp(1560995943, 1), "t" : NumberLong(1) },
          "optimeDurable" : { "ts" : Timestamp(1560995943, 1), "t" : NumberLong(1) },
          "optimeDate" : ISODate("2019-06-20T01:59:03Z"), "optimeDurableDate" : ISODate("2019-06-20T01:59:03Z"),
          "lastHeartbeat" : ISODate("2019-06-20T01:59:08.220Z"), "lastHeartbeatRecv" : ISODate("2019-06-20T01:59:08.716Z"),
          "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "mongo2.example.net:27018",
          "syncSourceHost" : "mongo2.example.net:27018", "syncSourceId" : 1, "infoMessage" : "", "configVersion" : 1 }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1560995943, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1560995943, 1),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}
Result: the output shows the three servers as 1 primary, 1 secondary, and 1 arbiter.
To configure the authentication user, follow the same steps as for shard1.
6. Deploy shard server 3 (shard3) and its replica set
Configuration file on all three servers: /data/mongodb/shard3.conf
On mongo1.example.net:
systemLog:
  destination: file
  path: "/data/mongodb/log/shard3.log"
  logAppend: true
storage:
  dbPath: "/data/mongodb/data/shard3"
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2
processManagement:
  fork: true
  pidFilePath: "/data/mongodb/pid/shard3.pid"
net:
  bindIp: mongo1.example.net
  port: 27019
replication:
  replSetName: "shard3"
sharding:
  clusterRole: shardsvr
On mongo2.example.net and mongo3.example.net the file is identical, except that bindIp is set to the server's own hostname.
Start the shard on all three servers:
mongod -f /data/mongodb/shard3.conf
Connect to the shard replica set on its primary server:
mongo --host mongo3.example.net --port 27019
Configure the replica set (note: the roles of the three servers change again):
rs.initiate( { _id : "shard3", members: [ { _id : 0, host : "mongo1.example.net:27019",priority:1 }, { _id : 1, host : "mongo2.example.net:27019",arbiteronly:true }, { _id : 2, host : "mongo3.example.net:27019",priority:2 } ] } )
Check the replica set status:
shard3:PRIMARY> rs.status()
{
    "set" : "shard3",
    "date" : ISODate("2019-06-20T02:21:56.990Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1560997312, 2), "t" : NumberLong(1) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1560997312, 2), "t" : NumberLong(1) },
        "appliedOpTime" : { "ts" : Timestamp(1560997312, 2), "t" : NumberLong(1) },
        "durableOpTime" : { "ts" : Timestamp(1560997312, 2), "t" : NumberLong(1) }
    },
    "lastStableCheckpointTimestamp" : Timestamp(1560997312, 1),
    "members" : [
        { "_id" : 0, "name" : "mongo1.example.net:27019", "health" : 1, "state" : 2, "stateStr" : "SECONDARY",
          "uptime" : 17, "optime" : { "ts" : Timestamp(1560997312, 2), "t" : NumberLong(1) },
          "optimeDurable" : { "ts" : Timestamp(1560997312, 2), "t" : NumberLong(1) },
          "optimeDate" : ISODate("2019-06-20T02:21:52Z"), "optimeDurableDate" : ISODate("2019-06-20T02:21:52Z"),
          "lastHeartbeat" : ISODate("2019-06-20T02:21:56.160Z"), "lastHeartbeatRecv" : ISODate("2019-06-20T02:21:55.155Z"),
          "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "mongo3.example.net:27019",
          "syncSourceHost" : "mongo3.example.net:27019", "syncSourceId" : 2, "infoMessage" : "", "configVersion" : 1 },
        { "_id" : 1, "name" : "mongo2.example.net:27019", "health" : 1, "state" : 7, "stateStr" : "ARBITER",
          "uptime" : 17, "lastHeartbeat" : ISODate("2019-06-20T02:21:56.159Z"), "lastHeartbeatRecv" : ISODate("2019-06-20T02:21:55.021Z"),
          "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "", "syncSourceHost" : "",
          "syncSourceId" : -1, "infoMessage" : "", "configVersion" : 1 },
        { "_id" : 2, "name" : "mongo3.example.net:27019", "health" : 1, "state" : 1, "stateStr" : "PRIMARY",
          "uptime" : 45, "optime" : { "ts" : Timestamp(1560997312, 2), "t" : NumberLong(1) },
          "optimeDate" : ISODate("2019-06-20T02:21:52Z"), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1,
          "infoMessage" : "could not find member to sync from", "electionTime" : Timestamp(1560997310, 1),
          "electionDate" : ISODate("2019-06-20T02:21:50Z"), "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1560997312, 2),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1560997312, 2),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}
Result: the output shows the three servers as 1 primary, 1 secondary, and 1 arbiter.
To configure the authentication user, follow the same steps as for shard1.
7. Configure the mongos server to connect to the sharded cluster
The mongos configuration file on mongos.example.net: /data/mongodb/mongos.conf
systemLog:
  destination: file
  path: "/data/mongodb/log/mongos.log"
  logAppend: true
processManagement:
  fork: true
net:
  port: 27017
  bindIp: mongos.example.net
sharding:
  configDB: "cs0/mongo1.example.net:27027,mongo2.example.net:27027,mongo3.example.net:27027"
security:
  keyFile: "/data/mongodb/keyfile"
  clusterAuthMode: "keyfile"
Start the mongos service:
mongos -f /data/mongodb/mongos.conf
Connect to mongos:
mongo --host mongos.example.net --port 27017 -u myuseradmin --authenticationDatabase "admin" -p 'abc123'
View the current state of the cluster:
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5d0af6ed4fa51757cd032108")
  }
  shards:
  active mongoses:
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
First add shard1 and shard2 to the cluster; shard3 will be added later, after data has been inserted, to simulate scaling the cluster out.
sh.addshard("shard1/mongo1.example.net:27017,mongo2.example.net:27017,mongo3.example.net:27017") sh.addshard("shard2/mongo1.example.net:27018,mongo2.example.net:27018,mongo3.example.net:27018")
Result:
mongos> sh.addShard("shard1/mongo1.example.net:27017,mongo2.example.net:27017,mongo3.example.net:27017")
{
    "shardAdded" : "shard1",
    "ok" : 1,
    "operationTime" : Timestamp(1561009140, 7),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1561009140, 7),
        "signature" : { "hash" : BinData(0,"2je9fsnfmfbmhp+x/6d98b5tlh8="), "keyId" : NumberLong("6704442493062086684") }
    }
}
mongos> sh.addShard("shard2/mongo1.example.net:27018,mongo2.example.net:27018,mongo3.example.net:27018")
{
    "shardAdded" : "shard2",
    "ok" : 1,
    "operationTime" : Timestamp(1561009148, 5),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1561009148, 6),
        "signature" : { "hash" : BinData(0,"8fvjucy8kcrmu5nb9pyilj0bzlk="), "keyId" : NumberLong("6704442493062086684") }
    }
}
Check the cluster status:
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5d0af6ed4fa51757cd032108")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/mongo1.example.net:27017,mongo2.example.net:27017",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/mongo2.example.net:27018,mongo3.example.net:27018",  "state" : 1 }
  active mongoses:
        "4.0.10" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
8. Testing
To make the test easier to observe, set the shard chunk size to 1 MB:
use config
db.settings.save({ "_id" : "chunksize", "value" : 1 })
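The chunk size value is in megabytes (the default is 64 MB). The setting can be verified afterwards:

use config
db.settings.find()    // should show { "_id" : "chunksize", "value" : 1 }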
While connected to mongos, create the database and enable sharding for it:
sh.enablesharding("user_center")
创建 "user_center"数据库,并启用分片,查看结果:
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5d0af6ed4fa51757cd032108")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/mongo1.example.net:27017,mongo2.example.net:27017",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/mongo2.example.net:27018,mongo3.example.net:27018",  "state" : 1 }
  active mongoses:
        "4.0.10" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
        {  "_id" : "user_center",  "primary" : "shard1",  "partitioned" : true,  "version" : {  "uuid" : UUID("3b05ccb5-796a-4e9e-a36e-99b860b6bee0"),  "lastMod" : 1 } }
创建 "users" 集合
sh.shardcollection("user_center.users",{"name":1}) #数据库user_center中users集合使用了片键{"name":1},这个片键通过字段name的值进行数据分配
Now check the cluster status:
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5d0af6ed4fa51757cd032108")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/mongo1.example.net:27017,mongo2.example.net:27017",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/mongo2.example.net:27018,mongo3.example.net:27018",  "state" : 1 }
  active mongoses:
        "4.0.10" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
        {  "_id" : "user_center",  "primary" : "shard2",  "partitioned" : true,  "version" : {  "uuid" : UUID("33c79b3f-aa18-4755-a5e8-b8f7f3d05893"),  "lastMod" : 1 } }
                user_center.users
                        shard key: { "name" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard2  1
                        { "name" : { "$minKey" : 1 } } -->> { "name" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0)
Write a Python script to insert data:
# encoding: utf-8
import pymongo, string, random

def random_name():
    str_args = string.ascii_letters
    name_list = random.sample(str_args, 5)
    random.shuffle(name_list)
    return ''.join(name_list)

def random_age():
    age_args = string.digits
    age_list = random.sample(age_args, 2)
    random.shuffle(age_list)
    return int(''.join(age_list))

def insert_data_to_mongo(url, dbname, collections_name):
    print(url)
    client = pymongo.MongoClient(url)
    db = client[dbname]
    collections = db[collections_name]
    for i in range(1, 100000):
        name = random_name()
        collections.insert_one({"name": name, "age": random_age(), "status": "pending"})
        print("insert ", name)

if __name__ == "__main__":
    mongo_url = "mongodb://myuseradmin:abc123@192.168.11.10:27017/?maxPoolSize=100&minPoolSize=10&maxIdleTimeMS=600000"
    mongo_db = "user_center"
    mongo_collections = "users"
    insert_data_to_mongo(mongo_url, mongo_db, mongo_collections)
After the data is inserted, check the cluster status:
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5d0af6ed4fa51757cd032108")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/mongo1.example.net:27017,mongo2.example.net:27017",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/mongo2.example.net:27018,mongo3.example.net:27018",  "state" : 1 }
  active mongoses:
        "4.0.10" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                3 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
        {  "_id" : "user_center",  "primary" : "shard2",  "partitioned" : true,  "version" : {  "uuid" : UUID("33c79b3f-aa18-4755-a5e8-b8f7f3d05893"),  "lastMod" : 1 } }
                user_center.users
                        shard key: { "name" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  9
                                shard2  8
                        { "name" : { "$minKey" : 1 } } -->> { "name" : "abxew" } on : shard1 Timestamp(2, 0)
                        { "name" : "abxew" } -->> { "name" : "ekdct" } on : shard1 Timestamp(3, 11)
                        { "name" : "ekdct" } -->> { "name" : "itgcx" } on : shard1 Timestamp(3, 12)
                        { "name" : "itgcx" } -->> { "name" : "jkooz" } on : shard1 Timestamp(3, 13)
                        { "name" : "jkooz" } -->> { "name" : "nslcy" } on : shard1 Timestamp(4, 2)
                        { "name" : "nslcy" } -->> { "name" : "rbray" } on : shard1 Timestamp(4, 3)
                        { "name" : "rbray" } -->> { "name" : "sqvzq" } on : shard1 Timestamp(4, 4)
                        { "name" : "sqvzq" } -->> { "name" : "txppm" } on : shard1 Timestamp(3, 4)
                        { "name" : "txppm" } -->> { "name" : "yeujn" } on : shard1 Timestamp(4, 0)
                        { "name" : "yeujn" } -->> { "name" : "colra" } on : shard2 Timestamp(3, 9)
                        { "name" : "colra" } -->> { "name" : "dftns" } on : shard2 Timestamp(3, 10)
                        { "name" : "dftns" } -->> { "name" : "hlwfz" } on : shard2 Timestamp(3, 14)
                        { "name" : "hlwfz" } -->> { "name" : "lvqzu" } on : shard2 Timestamp(3, 15)
                        { "name" : "lvqzu" } -->> { "name" : "mnlgp" } on : shard2 Timestamp(3, 16)
                        { "name" : "mnlgp" } -->> { "name" : "oilav" } on : shard2 Timestamp(3, 7)
                        { "name" : "oilav" } -->> { "name" : "wjwqi" } on : shard2 Timestamp(4, 1)
                        { "name" : "wjwqi" } -->> { "name" : { "$maxKey" : 1 } } on : shard2 Timestamp(3, 1)
As the output shows, the data is distributed across the shard1 and shard2 shards.
Now add the shard3 shard to the cluster as well:
mongos> sh.addshard("shard3/mongo1.example.net:27019,mongo2.example.net:27019,mongo3.example.net:27019")
Check the cluster status again:
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5d0af6ed4fa51757cd032108")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/mongo1.example.net:27017,mongo2.example.net:27017",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/mongo2.example.net:27018,mongo3.example.net:27018",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/mongo1.example.net:27019,mongo3.example.net:27019",  "state" : 1 }
  active mongoses:
        "4.0.10" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                8 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
        {  "_id" : "user_center",  "primary" : "shard2",  "partitioned" : true,  "version" : {  "uuid" : UUID("33c79b3f-aa18-4755-a5e8-b8f7f3d05893"),  "lastMod" : 1 } }
                user_center.users
                        shard key: { "name" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  6
                                shard2  6
                                shard3  5
                        { "name" : { "$minKey" : 1 } } -->> { "name" : "abxew" } on : shard3 Timestamp(5, 0)
                        { "name" : "abxew" } -->> { "name" : "ekdct" } on : shard3 Timestamp(7, 0)
                        { "name" : "ekdct" } -->> { "name" : "itgcx" } on : shard3 Timestamp(9, 0)
                        { "name" : "itgcx" } -->> { "name" : "jkooz" } on : shard1 Timestamp(9, 1)
                        { "name" : "jkooz" } -->> { "name" : "nslcy" } on : shard1 Timestamp(4, 2)
                        { "name" : "nslcy" } -->> { "name" : "rbray" } on : shard1 Timestamp(4, 3)
                        { "name" : "rbray" } -->> { "name" : "sqvzq" } on : shard1 Timestamp(4, 4)
                        { "name" : "sqvzq" } -->> { "name" : "txppm" } on : shard1 Timestamp(5, 1)
                        { "name" : "txppm" } -->> { "name" : "yeujn" } on : shard1 Timestamp(4, 0)
                        { "name" : "yeujn" } -->> { "name" : "colra" } on : shard3 Timestamp(6, 0)
                        { "name" : "colra" } -->> { "name" : "dftns" } on : shard3 Timestamp(8, 0)
                        { "name" : "dftns" } -->> { "name" : "hlwfz" } on : shard2 Timestamp(3, 14)
                        { "name" : "hlwfz" } -->> { "name" : "lvqzu" } on : shard2 Timestamp(3, 15)
                        { "name" : "lvqzu" } -->> { "name" : "mnlgp" } on : shard2 Timestamp(3, 16)
                        { "name" : "mnlgp" } -->> { "name" : "oilav" } on : shard2 Timestamp(8, 1)
                        { "name" : "oilav" } -->> { "name" : "wjwqi" } on : shard2 Timestamp(4, 1)
                        { "name" : "wjwqi" } -->> { "name" : { "$maxKey" : 1 } } on : shard2 Timestamp(6, 1)
After shard3 joins, the balancer rebalances the cluster's sharded data, and part of it migrates onto shard3.
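Besides sh.status(), the distribution of a single collection can be inspected from mongos with getShardDistribution(), which prints per-shard document counts and data sizes (output omitted here):

use user_center
db.users.getShardDistribution()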
9. Backup and Restore
Backup
During a backup, the config servers and the shard servers need to be locked.
Before backing up, check the current total number of documents:
mongos> db.users.find().count()
99999
Then start the Python script from earlier; a time.sleep call can be added to the script to control the insert rate.
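For example, the script's insert loop can be throttled like this (the 0.05-second pause is an arbitrary illustrative value):

# add at the top of the script
import time

# and inside insert_data_to_mongo(), pace the loop:
for i in range(1, 100000):
    name = random_name()
    collections.insert_one({"name": name, "age": random_age(), "status": "pending"})
    time.sleep(0.05)  # pause so writes keep arriving throughout the backup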
Stop the balancer on the mongos server:
mongos> sh.stopBalancer()
Lock the config servers and each shard server: log in to a secondary of the config server replica set and of each shard replica set, and run:
db.fsyncLock()
Start backing up the databases:
mongodump -h mongo2.example.net --port 27027 --authenticationDatabase admin -u myuseradmin -p abc123 -o /data/backup/config
mongodump -h mongo2.example.net --port 27017 --authenticationDatabase admin -u myuseradmin -p abc123 -o /data/backup/shard1
mongodump -h mongo3.example.net --port 27018 --authenticationDatabase admin -u myuseradmin -p abc123 -o /data/backup/shard2
mongodump -h mongo1.example.net --port 27019 --authenticationDatabase admin -u myuseradmin -p abc123 -o /data/backup/shard3
Unlock the config servers and the shard servers:
db.fsyncUnlock()
Re-enable the balancer on mongos:
sh.setBalancerState(true);
Because only the secondaries were locked, writes were not blocked during the backup. After the backup, check the data again:
mongos> db.users.find().count()
107874
Restore
Drop the user_center database on shard server 1 (shard1):
shard1:PRIMARY> use user_center
switched to db user_center
shard1:PRIMARY> db.dropDatabase()
{
    "dropped" : "user_center",
    "ok" : 1,
    "operationTime" : Timestamp(1561022404, 2),
    "$gleStats" : {
        "lastOpTime" : { "ts" : Timestamp(1561022404, 2), "t" : NumberLong(2) },
        "electionId" : ObjectId("7fffffff0000000000000002")
    },
    "lastCommittedOpTime" : Timestamp(1561022404, 1),
    "$configServerState" : {
        "opTime" : { "ts" : Timestamp(1561022395, 1), "t" : NumberLong(2) }
    },
    "$clusterTime" : {
        "clusterTime" : Timestamp(1561022404, 2),
        "signature" : { "hash" : BinData(0,"go1yqdvdz6ojbxdvm94nopnnjtm="), "keyId" : NumberLong("6704442493062086684") }
    }
}
Then restore it from the backup just taken:
mongorestore -h mongo1.example.net --port 27017 --authenticationDatabase admin -u myuseradmin -p abc123 -d user_center /data/backup/shard1/user_center
2019-06-20T17:20:34.325+0800    the --db and --collection args should only be used when restoring from a BSON file. Other uses are deprecated and will not exist in the future; use --nsInclude instead
2019-06-20T17:20:34.326+0800    building a list of collections to restore from /data/backup/shard1/user_center dir
2019-06-20T17:20:34.356+0800    reading metadata for user_center.users from /data/backup/shard1/user_center/users.metadata.json
2019-06-20T17:20:34.410+0800    restoring user_center.users from /data/backup/shard1/user_center/users.bson
2019-06-20T17:20:36.836+0800    restoring indexes for collection user_center.users from metadata
2019-06-20T17:20:37.093+0800    finished restoring user_center.users (30273 documents)
2019-06-20T17:20:37.093+0800    done
Restore the shard2 and shard3 data following the same steps.
The final result after the restore:
mongos> db.users.find().count()
100013
The extra documents beyond the original 99999 should be the ones inserted while the servers were locked.