MongoDB Sharding Summary
2022-07-07 13:12:00 【cui_yonghua】
Basics (enough to solve roughly 80% of common problems):
MongoDB data types, key concepts, and common shell commands
MongoDB document insert, update, and delete operations summarized
Advanced:
Other:
1. Sharding Overview
Sharding
Sharding refers to the process of splitting data up and spreading it across different machines; the term partitioning is sometimes used for the same idea. By distributing data over multiple machines, you do not need a powerful mainframe to store more data, and MongoDB can keep up with rapidly growing data volumes.
When MongoDB stores massive amounts of data, a single machine may not be able to hold it all, or may not be able to provide acceptable read and write throughput. In that case we can split the data across multiple machines so the database system can store and process more data.
Note:
Replica sets provide automatic failover, master-slave replication, and clustering. They solve data redundancy (backup) and high availability of the architecture, but they cannot relieve the pressure on a single node (hardware limits, concurrent access pressure).
Why use sharding:
1. The local disk is not large enough.
2. Memory becomes insufficient when the request volume is large.
3. Vertical scaling (memory, disk, CPU) is expensive.
2. Sharded Cluster Architecture
MongoDB distributes data using the following sharded cluster structure:
(Figure: MongoDB sharded cluster architecture; the original image is unavailable)
The figure above contains three main components:
Shard: stores the actual data chunks. In a real production environment the shard server role is usually taken by a replica set of several machines, to prevent a single point of failure on one host.
Config Server: a mongod instance that stores the metadata of the whole cluster, including chunk information.
Query Routers: the front-end routers (mongos) that clients connect to. They make the whole cluster look like a single database, so front-end applications can use it transparently.
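As a small illustration of the replica-set point above, a shard backed by a replica set would be registered from mongos roughly like this (the replica set name rs_shard1 and the second host are assumptions, not part of the example that follows):
mongos> sh.addShard("rs_shard1/192.168.17.129:27031,192.168.17.130:27031")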
3. Sharding Example
The ports used in this sharding setup are distributed as follows:
Shard Server 1:27031
Shard Server 2:27032
Config Server: 27100
Route Process: 27777
Step 1: Start the Shard Servers
sudo rm -rf /MongoDB/shard/s1 /MongoDB/shard/s2 /MongoDB/shard/log
sudo mkdir -p /MongoDB/shard/s1 /MongoDB/shard/s2 /MongoDB/shard/log
sudo mongod --port 27031 --dbpath=/MongoDB/shard/s1
sudo mongod --port 27032 --dbpath=/MongoDB/shard/s2
Step 2: Start the Config Server
sudo rm -rf /MongoDB/shard/config
sudo mkdir -p /MongoDB/shard/config
sudo mongod --port 27100 --dbpath=/MongoDB/shard/config
Note: here we can start it just like an ordinary mongod service, without adding the --shardsvr and --configsvr parameters, because the effect of those two parameters is to change the default startup port, and we specify the ports ourselves.
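For reference, a sketch of the same startup using the role flags instead (same paths and ports as above). On the old MongoDB version used in this example the flags are optional and mainly switch the default ports (27018 for --shardsvr, 27019 for --configsvr); on MongoDB 3.4+ they are required, and the config server must additionally be a replica set:
sudo mongod --shardsvr --port 27031 --dbpath=/MongoDB/shard/s1
sudo mongod --configsvr --port 27100 --dbpath=/MongoDB/shard/config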
Step 3: Start the Route Process (mongos)
mongos --port 27777 --configdb 192.168.17.129:27100
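A single config server is enough for a test setup; in production you would normally list several. A sketch under that assumption (the two extra host names are hypothetical; on MongoDB 3.4+ the config servers must instead be given as a replica set, e.g. configRS/host1:port,host2:port,host3:port):
mongos --port 27777 --configdb 192.168.17.129:27100,192.168.17.130:27100,192.168.17.131:27100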
Step 4: Configure Sharding
Next, we log in to mongos with the MongoDB shell and add the shard nodes:
mongo admin --port 27777
MongoDB shell version: 2.0.7
connecting to: 127.0.0.1:27777/admin
mongos> db.runCommand({ addshard: "192.168.17.129:27031" })
{ "shardAdded" : "shard0000", "ok" : 1 }
......
mongos> db.runCommand({ addshard: "192.168.17.129:27032" })
{ "shardAdded" : "shard0001", "ok" : 1 }
Step 5: Enable sharding for the test database
# Enable sharding on the database whose data should be distributed
mongos> db.runCommand({ enablesharding: "test" })
{ "ok" : 1 }
Step 6: Shard the collection
mongos> db.runCommand({ shardcollection: "test.mycol", key: { _id: 1 } })
{ "collectionsharded" : "test.mycol", "ok" : 1 }
Step 7: Test
mongo test --port 27777
Insert 10000 documents:
use test
var num = 10000
for (var i = 0; i < num; i++) {
    db.mycol.save({ '_id': i })
}
No major changes are needed in application code: connect to port 27777 in exactly the same way you would connect to an ordinary mongod database.
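To see how the 10000 documents were spread over the two shards, the getShardDistribution() helper prints per-shard document and data-size counts (run it from mongos against the test database):
mongos> use test
mongos> db.mycol.getShardDistribution()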
Step 8: Check the sharding status
When checking the sharding status, it must be executed on the config server and under the admin database (e.g. mongo 127.0.0.1:27100/admin):
mongo admin --port 27100   # executed on the config server
sh.status()
# The output is as follows:
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("57cfcdfef06b33543fdeb52e")
  }
  shards:
    {  "_id" : "shard0000",  "host" : "localhost:27031" }
    {  "_id" : "shard0001",  "host" : "localhost:27032" }
  active mongoses:
    "3.2.7" : 1
  balancer:
    Currently enabled:  yes
    Currently running:  no
    Failed balancer rounds in last 5 attempts:  0
    Migration Results for the last 24 hours:
      1 : Success
  databases:
    {  "_id" : "test",  "primary" : "shard0000",  "partitioned" : true }
      test.mycol
        shard key: { "_id" : 1 }
        unique: false
        balancing: true
        chunks:
          shard0000  2
          shard0001  1
        { "_id" : { "$minKey" : 1 } } -->> { "_id" : 1 } on : shard0001 Timestamp(2, 0)
        { "_id" : 1 } -->> { "_id" : 57 } on : shard0000 Timestamp(2, 1)
        { "_id" : 57 } -->> { "_id" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 3)
4. Hashed Sharding
The biggest advantage of choosing a hashed shard key is that the data ends up distributed roughly evenly across the nodes. A simple test using _id as the hashed key:
mongo admin --port 27777
mongos> db.runCommand({ shardcollection: "test.myhash", key: { _id: "hashed" } })
{ "collectionsharded" : "test.myhash", "ok" : 1 }
use test
var num = 10000
for (var i = 0; i < num; i++) {
    db.myhash.save({ '_id': i })
}
Summary: hashed sharding hashes the supplied shard key value into a very large long integer, and that hash is used as the actual shard key.
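To verify the even spread that the hashed key promises, the same distribution helper can be run on the hashed collection; with a hashed _id the document counts on shard0000 and shard0001 should come out roughly equal, unlike the ranged example above:
mongos> use test
mongos> db.myhash.getShardDistribution()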