Setting Up a MongoDB 4.4 Sharded Cluster on CentOS 8
I. Introduction
1. Sharding
Besides replication, MongoDB offers another kind of cluster: sharding, which is designed for workloads where the data volume grows heavily.
When MongoDB stores massive amounts of data, a single machine may not be able to hold it all, or may not deliver acceptable read/write throughput. By splitting the data across multiple machines, the database system can store and process more data.
2. Why use sharding
- All write operations must go through the primary node
- Latency-sensitive queries are also served by the primary
- A single replica set is limited to 50 members (of which only 7 can vote)
- Memory can run out when the request volume is huge
- Local disk space can be insufficient
- Vertical scaling is expensive
3. How sharding works
Sharding splits the data into chunks and stores those chunks on different servers. MongoDB shards automatically: when a client sends a read or write request, the request first passes through the mongos routing layer; mongos asks the config servers for the sharding metadata, then decides which shard server should handle the read or write.
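The routing step described above can be sketched as a lookup over chunk ranges. This is a minimal illustration, not MongoDB internals; the chunk table and its boundaries are invented for the example:

```python
# Minimal sketch of range-based chunk routing, as mongos does conceptually:
# each chunk owns a half-open shard-key range [min, max) and maps to a shard.
# The chunk table below is made-up example data, not real cluster metadata.
CHUNKS = [
    # (min_key, max_key, shard)
    (float("-inf"), 20000, "shard1"),
    (20000, 40000, "shard2"),
    (40000, float("inf"), "shard3"),
]

def route(shard_key_value):
    """Return the shard whose chunk range contains the key."""
    for lo, hi, shard in CHUNKS:
        if lo <= shard_key_value < hi:
            return shard
    raise ValueError("no chunk covers this key")

print(route(5), route(25000), route(99999))
```

In the real cluster, mongos caches this chunk map from the config servers and refreshes it when chunks split or move.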
II. Environment
- OS: CentOS Linux release 8.2.2004 (Core)
- MongoDB version: v4.4.10
- IP 10.0.0.56 — instances: mongos (30000), config (27017), shard1 primary (40001), shard2 arbiter (40002), shard3 secondary (40003)
- IP 10.0.0.57 — instances: mongos (30000), config (27017), shard1 secondary (40001), shard2 primary (40002), shard3 arbiter (40003)
- IP 10.0.0.58 — instances: mongos (30000), config (27017), shard1 arbiter (40001), shard2 secondary (40002), shard3 primary (40003)
III. Cluster configuration and deployment
1. Create the required directories (run the same commands on all three servers)

mkdir -p /mongo/{data,logs,apps,run}
mkdir -p /mongo/data/shard{1,2,3}
mkdir -p /mongo/data/config
mkdir -p /mongo/apps/conf
2. Install MongoDB and create the configuration files (run the same steps on all three servers)
Installation guide
You can install MongoDB by downloading the tarball and configuring the environment variables yourself. Here, the yum repository was configured instead and MongoDB was installed through yum; afterwards you just run mongod with the path to the required configuration file.
(1) mongo-config configuration file

vim /mongo/apps/conf/mongo-config.yml

systemLog:
  destination: file
  # log path
  path: "/mongo/logs/mongo-config.log"
  logAppend: true
storage:
  journal:
    enabled: true
  # data storage path
  dbPath: "/mongo/data/config"
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 12
processManagement:
  fork: true
  pidFilePath: "/mongo/run/mongo-config.pid"
net:
  # bindIp can also be set to the host's own IP
  bindIp: 0.0.0.0
  # port
  port: 27017
setParameter:
  enableLocalhostAuthBypass: true
replication:
  # replica set name
  replSetName: "mgconfig"
sharding:
  # run as a config server
  clusterRole: configsvr
(2) mongo-shard1 configuration file

vim /mongo/apps/conf/mongo-shard1.yml

systemLog:
  destination: file
  path: "/mongo/logs/mongo-shard1.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/mongo/data/shard1"
processManagement:
  fork: true
  pidFilePath: "/mongo/run/mongo-shard1.pid"
net:
  bindIp: 0.0.0.0
  # note: each shard uses a different port
  port: 40001
setParameter:
  enableLocalhostAuthBypass: true
replication:
  # replica set name
  replSetName: "shard1"
sharding:
  # run as a shard server
  clusterRole: shardsvr
(3) mongo-shard2 configuration file

vim /mongo/apps/conf/mongo-shard2.yml

systemLog:
  destination: file
  path: "/mongo/logs/mongo-shard2.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/mongo/data/shard2"
processManagement:
  fork: true
  pidFilePath: "/mongo/run/mongo-shard2.pid"
net:
  bindIp: 0.0.0.0
  # note: each shard uses a different port
  port: 40002
setParameter:
  enableLocalhostAuthBypass: true
replication:
  # replica set name
  replSetName: "shard2"
sharding:
  # run as a shard server
  clusterRole: shardsvr
(4) mongo-shard3 configuration file

vim /mongo/apps/conf/mongo-shard3.yml

systemLog:
  destination: file
  path: "/mongo/logs/mongo-shard3.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/mongo/data/shard3"
processManagement:
  fork: true
  pidFilePath: "/mongo/run/mongo-shard3.pid"
net:
  bindIp: 0.0.0.0
  # note: each shard uses a different port
  port: 40003
setParameter:
  enableLocalhostAuthBypass: true
replication:
  # replica set name
  replSetName: "shard3"
sharding:
  # run as a shard server
  clusterRole: shardsvr
(5) mongo-route configuration file

vim /mongo/apps/conf/mongo-route.yml

systemLog:
  destination: file
  # note: change the path
  path: "/mongo/logs/mongo-route.log"
  logAppend: true
processManagement:
  fork: true
  pidFilePath: "/mongo/run/mongo-route.pid"
net:
  bindIp: 0.0.0.0
  # note: change the port
  port: 30000
setParameter:
  enableLocalhostAuthBypass: true
replication:
  localPingThresholdMs: 15
sharding:
  # point at the config server replica set
  configDB: mgconfig/10.0.0.56:27017,10.0.0.57:27017,10.0.0.58:27017
3. Start the mongo-config service (run the same steps on all three servers)

# stop the mongod that was installed via yum earlier
systemctl stop mongod
cd /mongo/apps/conf/
mongod --config mongo-config.yml
# verify that port 27017 is listening
netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   1129/sshd
tcp        0      0 127.0.0.1:631      0.0.0.0:*          LISTEN   1131/cupsd
tcp        0      0 127.0.0.1:6010     0.0.0.0:*          LISTEN   2514/sshd: root@pts
tcp        0      0 127.0.0.1:6011     0.0.0.0:*          LISTEN   4384/sshd: root@pts
tcp        0      0 0.0.0.0:27017      0.0.0.0:*          LISTEN   4905/mongod
tcp        0      0 0.0.0.0:111        0.0.0.0:*          LISTEN   1/systemd
tcp6       0      0 :::22              :::*               LISTEN   1129/sshd
tcp6       0      0 ::1:631            :::*               LISTEN   1131/cupsd
tcp6       0      0 ::1:6010           :::*               LISTEN   2514/sshd: root@pts
tcp6       0      0 ::1:6011           :::*               LISTEN   4384/sshd: root@pts
tcp6       0      0 :::111             :::*               LISTEN   1/systemd
4. Connect to one instance and initialize the config server replica set

# connect with the mongo shell
mongo 10.0.0.56:27017
# initialize the replica set; "mgconfig" must match replSetName in the configuration file
config = {_id: "mgconfig", members: [
  {_id: 0, host: "10.0.0.56:27017"},
  {_id: 1, host: "10.0.0.57:27017"},
  {_id: 2, host: "10.0.0.58:27017"}
]}
rs.initiate(config)
# "ok" : 1 means initialization succeeded
{
  "ok" : 1,
  "$gleStats" : {
    "lastOpTime" : Timestamp(1634710950, 1),
    "electionId" : ObjectId("000000000000000000000000")
  },
  "lastCommittedOpTime" : Timestamp(0, 0)
}
# check the status
rs.status()
{
  "set" : "mgconfig",
  "date" : ISODate("2021-10-20T06:24:24.277Z"),
  "myState" : 1,
  "term" : NumberLong(1),
  "syncSourceHost" : "",
  "syncSourceId" : -1,
  "configsvr" : true,
  "heartbeatIntervalMillis" : NumberLong(2000),
  ...(output truncated)
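When reading rs.status() output, the part that matters most is each member's role. The sketch below extracts those roles from a status-shaped document; the sample document is hand-made and abbreviated, not real output:

```python
# rs.status() returns a "members" array in which each member reports a
# "stateStr" such as PRIMARY, SECONDARY, or ARBITER.
# The sample below is an invented, abbreviated stand-in for real output.
sample_status = {
    "set": "mgconfig",
    "members": [
        {"name": "10.0.0.56:27017", "stateStr": "PRIMARY"},
        {"name": "10.0.0.57:27017", "stateStr": "SECONDARY"},
        {"name": "10.0.0.58:27017", "stateStr": "SECONDARY"},
    ],
}

def summarize(status):
    """Map each member host to its replica-set role."""
    return {m["name"]: m["stateStr"] for m in status["members"]}

print(summarize(sample_status))
```

A healthy three-member config replica set should show exactly one PRIMARY and the rest SECONDARY.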
5. Deploy the shard1 replica set; start the shard1 instance (run the same steps on all three servers)

cd /mongo/apps/conf
mongod --config mongo-shard1.yml
# verify that port 40001 is listening
netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:40001      0.0.0.0:*          LISTEN   5742/mongod
tcp        0      0 0.0.0.0:27017      0.0.0.0:*          LISTEN   5443/mongod
tcp        0      0 0.0.0.0:111        0.0.0.0:*          LISTEN   1/systemd
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   1139/sshd
tcp        0      0 127.0.0.1:631      0.0.0.0:*          LISTEN   1133/cupsd
tcp        0      0 127.0.0.1:6010     0.0.0.0:*          LISTEN   2490/sshd: root@pts
tcp        0      0 127.0.0.1:6011     0.0.0.0:*          LISTEN   5189/sshd: root@pts
tcp6       0      0 :::111             :::*               LISTEN   1/systemd
tcp6       0      0 :::22              :::*               LISTEN   1139/sshd
tcp6       0      0 ::1:631            :::*               LISTEN   1133/cupsd
tcp6       0      0 ::1:6010           :::*               LISTEN   2490/sshd: root@pts
tcp6       0      0 ::1:6011           :::*               LISTEN   5189/sshd: root@pts
6. Connect to one instance and create the replica set

# connect with the mongo shell
mongo 10.0.0.56:40001
# initialize the replica set
config = {_id: "shard1", members: [
  {_id: 0, host: "10.0.0.56:40001", priority: 2},
  {_id: 1, host: "10.0.0.57:40001", priority: 1},
  {_id: 2, host: "10.0.0.58:40001", arbiterOnly: true}
]}
rs.initiate(config)
# check the status
rs.status()
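Given a member configuration like the one above, the highest-priority data-bearing member is the one expected to win the election. This is only the priority rule; real elections also weigh oplog freshness and connectivity. A small sketch under that simplification:

```python
def expected_primary(members):
    """Pick the data-bearing member with the highest priority.

    Arbiters hold no data and can never become primary, so they are skipped.
    This models only the priority rule, not the full election protocol.
    """
    candidates = [m for m in members if not m.get("arbiterOnly")]
    return max(candidates, key=lambda m: m.get("priority", 1))["host"]

# the shard1 member config from this tutorial
shard1 = [
    {"_id": 0, "host": "10.0.0.56:40001", "priority": 2},
    {"_id": 1, "host": "10.0.0.57:40001", "priority": 1},
    {"_id": 2, "host": "10.0.0.58:40001", "arbiterOnly": True},
]
print(expected_primary(shard1))
```

This matches the planned topology: 10.0.0.56 is shard1's primary.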
7. Deploy the shard2 replica set; start the shard2 instance (run the same steps on all three servers)

cd /mongo/apps/conf
mongod --config mongo-shard2.yml
# verify that port 40002 is listening
netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:40001      0.0.0.0:*          LISTEN   5742/mongod
tcp        0      0 0.0.0.0:40002      0.0.0.0:*          LISTEN   5982/mongod
tcp        0      0 0.0.0.0:27017      0.0.0.0:*          LISTEN   5443/mongod
tcp        0      0 0.0.0.0:111        0.0.0.0:*          LISTEN   1/systemd
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   1139/sshd
tcp        0      0 127.0.0.1:631      0.0.0.0:*          LISTEN   1133/cupsd
tcp        0      0 127.0.0.1:6010     0.0.0.0:*          LISTEN   2490/sshd: root@pts
tcp        0      0 127.0.0.1:6011     0.0.0.0:*          LISTEN   5189/sshd: root@pts
tcp6       0      0 :::111             :::*               LISTEN   1/systemd
tcp6       0      0 :::22              :::*               LISTEN   1139/sshd
tcp6       0      0 ::1:631            :::*               LISTEN   1133/cupsd
tcp6       0      0 ::1:6010           :::*               LISTEN   2490/sshd: root@pts
tcp6       0      0 ::1:6011           :::*               LISTEN   5189/sshd: root@pts
8. Connect to the second node and create the replica set
Because 10.0.0.57:40002 is planned as shard2's primary and an arbiter cannot accept writes, connect to the 10.0.0.57 host.

# connect with the mongo shell
mongo 10.0.0.57:40002
# initialize the replica set
config = {_id: "shard2", members: [
  {_id: 0, host: "10.0.0.56:40002", arbiterOnly: true},
  {_id: 1, host: "10.0.0.57:40002", priority: 2},
  {_id: 2, host: "10.0.0.58:40002", priority: 1}
]}
rs.initiate(config)
# check the status
rs.status()
9. Deploy the shard3 replica set; start the shard3 instance (run the same steps on all three servers)

cd /mongo/apps/conf/
mongod --config mongo-shard3.yml
# verify that port 40003 is listening
netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:40001      0.0.0.0:*          LISTEN   5742/mongod
tcp        0      0 0.0.0.0:40002      0.0.0.0:*          LISTEN   5982/mongod
tcp        0      0 0.0.0.0:40003      0.0.0.0:*          LISTEN   6454/mongod
tcp        0      0 0.0.0.0:27017      0.0.0.0:*          LISTEN   5443/mongod
tcp        0      0 0.0.0.0:111        0.0.0.0:*          LISTEN   1/systemd
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   1139/sshd
tcp        0      0 127.0.0.1:631      0.0.0.0:*          LISTEN   1133/cupsd
tcp        0      0 127.0.0.1:6010     0.0.0.0:*          LISTEN   2490/sshd: root@pts
tcp        0      0 127.0.0.1:6011     0.0.0.0:*          LISTEN   5189/sshd: root@pts
tcp6       0      0 :::111             :::*               LISTEN   1/systemd
tcp6       0      0 :::22              :::*               LISTEN   1139/sshd
tcp6       0      0 ::1:631            :::*               LISTEN   1133/cupsd
tcp6       0      0 ::1:6010           :::*               LISTEN   2490/sshd: root@pts
tcp6       0      0 ::1:6011           :::*               LISTEN   5189/sshd: root@pts
10. Connect to the third node (10.0.0.58:40003) and create the replica set

# connect with the mongo shell
mongo 10.0.0.58:40003
# initialize the replica set
config = {_id: "shard3", members: [
  {_id: 0, host: "10.0.0.56:40003", priority: 1},
  {_id: 1, host: "10.0.0.57:40003", arbiterOnly: true},
  {_id: 2, host: "10.0.0.58:40003", priority: 2}
]}
rs.initiate(config)
# check the status
rs.status()
11. Deploy the router node

# the router is started with mongos, not mongod
mongos --config mongo-route.yml
# connect and add the shards to the cluster
mongo 10.0.0.56:30000
sh.addShard("shard1/10.0.0.56:40001,10.0.0.57:40001,10.0.0.58:40001")
sh.addShard("shard2/10.0.0.56:40002,10.0.0.57:40002,10.0.0.58:40002")
sh.addShard("shard3/10.0.0.56:40003,10.0.0.57:40003,10.0.0.58:40003")
# check the sharding status
sh.status()
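The string passed to sh.addShard() encodes the replica set name and its members as "rsName/host:port,host:port,...". A small sketch of pulling that format apart, useful when scripting cluster setup:

```python
def parse_shard_spec(spec):
    """Split an addShard() spec of the form 'rsName/host:port,...'
    into the replica set name and the list of member addresses."""
    rs_name, hosts = spec.split("/", 1)
    return rs_name, hosts.split(",")

name, hosts = parse_shard_spec(
    "shard1/10.0.0.56:40001,10.0.0.57:40001,10.0.0.58:40001"
)
print(name, hosts)
```

Listing every member is a convenience; mongos discovers the full membership from the replica set itself once it can reach one member.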
IV. Testing the sharding

# list all databases
mongos> show dbs
admin   0.000GB
config  0.003GB
# switch to the config database
use config
# the default chunk size is 64 MB (visible via db.settings.find()); set it
# to 1 MB here so the test splits are easy to observe
db.settings.save({"_id": "chunksize", "value": 1})
Write test data

# insert 60,000 documents into the tyuser collection of the tydb database
mongos> use tydb
mongos> show tables
mongos> for (i = 1; i <= 60000; i++) { db.tyuser.insert({"id": i, "name": "ty" + i}) }
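A rough sanity check of why the chunk size was lowered to 1 MB for this test. Assuming an average document size of about 60 bytes for {"id": i, "name": "ty" + i} (an assumption, not a measurement), 60,000 documents never fill a 64 MB chunk, but do split across several 1 MB chunks:

```python
import math

DOCS = 60_000
AVG_DOC_BYTES = 60  # assumed average size of one test document; not measured
MB = 1024 * 1024

def min_chunks(total_bytes, chunk_size_mb):
    """Lower bound on the number of chunks if data splits evenly at the chunk size."""
    return max(1, math.ceil(total_bytes / (chunk_size_mb * MB)))

total = DOCS * AVG_DOC_BYTES
# compare the default 64 MB chunk size with the 1 MB test setting
print(min_chunks(total, 64), min_chunks(total, 1))
```

With the default 64 MB chunks the whole data set would stay in a single chunk and sh.status() would show nothing interesting; at 1 MB it splits and the balancer has work to do.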
Enable sharding on the database

mongos> sh.enableSharding("tydb")
# "ok" : 1 on success
{
  "ok" : 1,
  "operationTime" : Timestamp(1634716737, 2),
  "$clusterTime" : {
    "clusterTime" : Timestamp(1634716737, 2),
    "signature" : {
      "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
      "keyId" : NumberLong(0)
    }
  }
}
Enable sharding on the collection

mongos> sh.shardCollection("tydb.tyuser", {"id": 1})
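One caveat about the {"id": 1} range shard key used here: because id increases monotonically, every new insert falls into the current top (max-key) chunk, so writes concentrate on one shard until chunks split and the balancer redistributes them. A tiny sketch of that effect, with invented split points:

```python
def owning_chunk(key, split_points):
    """Return the index of the chunk owning key, given sorted split points.

    Chunk i covers keys below split_points[i]; the last chunk covers the rest.
    """
    for i, split in enumerate(split_points):
        if key < split:
            return i
    return len(split_points)  # top (max-key) chunk

splits = [20000, 40000]          # invented split points for tydb.tyuser
new_ids = range(40001, 40006)    # freshly inserted, monotonically increasing ids
print({owning_chunk(i, splits) for i in new_ids})
```

All the new ids land in the same top chunk, which is why hashed shard keys are often preferred for monotonically increasing fields; the range key is kept here to make the test output easy to read.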
Check the shard distribution

mongos> sh.status()
Start and stop the balancer

# start
mongos> sh.startBalancer()      # or sh.startBalancer(true)
# stop
mongos> sh.stopBalancer()       # or sh.stopBalancer(false)
# check whether the balancer is running
mongos> sh.getBalancerState()   # returns false when it is stopped
This concludes the walkthrough of setting up a MongoDB 4.4 sharded cluster on CentOS 8. For more on MongoDB sharded clusters, see the related articles on this site.