
MongoDB Sharding Summary

2022-07-07 13:12:00 cui_yonghua

Basics (covers roughly 80% of everyday problems):

  1. MongoDB overview, application scenarios, download, connection methods, development history, etc.

  2. MongoDB data types, key concepts, and common shell commands

  3. MongoDB summary of document insert, update, and delete operations

  4. MongoDB summary of query operations

  5. MongoDB summary of collection operations

  6. MongoDB summary of index operations

Advanced:

  1. MongoDB summary of aggregation operations

  2. MongoDB import/export and backup/restore summary

  3. MongoDB summary of user management

  4. MongoDB replication (replica set) summary

  5. MongoDB sharding summary

  6. MongoDB with Spark (integration)

  7. MongoDB internal storage principles

Other:

  1. Operating MongoDB from Python 3: various examples

  2. MongoDB command summary

1. Sharding Overview

Sharding means splitting data up and distributing it across different machines; the term partitioning is sometimes used for the same idea. By spreading data over multiple machines you can store more data without a single powerful high-end server, which lets MongoDB keep up with rapidly growing data volumes.

When MongoDB stores massive amounts of data, one machine may not be able to hold it all, or may not be able to provide acceptable read and write throughput. In that case the data can be split across multiple machines so the database system can store and process more data.

Note
Replica set: provides automatic failover, primary/secondary replication, and clustering. It solves data redundancy (backup) and high availability of the architecture, but it cannot relieve the pressure on a single node (hardware limits, concurrent access pressure).

Why use sharding:
1. The local disk is not large enough.
2. Memory becomes insufficient under heavy request volume.
3. Vertical scaling is expensive (memory, disk, CPU).

2. Sharded Cluster Architecture

MongoDB distributes data using the sharded cluster architecture shown below:
(Figure: MongoDB sharded cluster architecture; original image unavailable)

The figure above contains three main components:
Shard: stores the actual data chunks. In a real production environment each shard server role is usually taken by a replica set made up of several machines, to avoid a single point of failure on one host.

Config Server: a mongod instance that stores the metadata of the whole cluster, including chunk information.

Query Routers (mongos): the front-end routers that clients connect to. They make the whole cluster look like a single database, so front-end applications can use it transparently.

3. Sharding Example

The ports used in this sharded setup are as follows:

Shard Server 1: 27031
Shard Server 2: 27032
Config Server: 27100
Route Process (mongos): 27777

Step 1: Start the Shard Servers

sudo rm -rf /MongoDB/shard/s1 /MongoDB/shard/s2 /MongoDB/shard/log
sudo mkdir -p /MongoDB/shard/s1 /MongoDB/shard/s2 /MongoDB/shard/log
sudo mongod --port 27031 --dbpath=/MongoDB/shard/s1
sudo mongod --port 27032 --dbpath=/MongoDB/shard/s2

Step 2: Start the Config Server

sudo rm -rf /MongoDB/shard/config
sudo mkdir -p /MongoDB/shard/config
sudo mongod --port 27100 --dbpath=/MongoDB/shard/config

Note: here we can start these processes like ordinary mongod services, without adding the --shardsvr and --configsvr parameters. In this setup those parameters mainly change the default startup port, and since we specify the ports ourselves they can be omitted.
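
For reference, this shortcut no longer works on current MongoDB releases: config servers must run as a replica set started with --configsvr (since 3.4), and shards must be replica sets started with --shardsvr (since 3.6). A minimal sketch of the equivalent modern startup for the same topology, reusing the ports above (the replica set names cfg, shardA and shardB are placeholders of mine, and each set still needs rs.initiate() before use):

sudo mongod --configsvr --replSet cfg    --port 27100 --dbpath /MongoDB/shard/config --fork --logpath /MongoDB/shard/log/cfg.log
sudo mongod --shardsvr  --replSet shardA --port 27031 --dbpath /MongoDB/shard/s1 --fork --logpath /MongoDB/shard/log/s1.log
sudo mongod --shardsvr  --replSet shardB --port 27032 --dbpath /MongoDB/shard/s2 --fork --logpath /MongoDB/shard/log/s2.log
# mongos then refers to the config server replica set by name
mongos --port 27777 --configdb cfg/192.168.17.129:27100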

Step 3: Start the Route Process (mongos)

mongos --port 27777 --configdb 192.168.17.129:27100

Step 4: Configure Sharding

Next, log in to the mongos with the MongoDB shell and add the shard nodes:

mongo admin --port 27777
MongoDB shell version: 2.0.7
connecting to: 127.0.0.1:27777/admin
mongos> db.runCommand({ addshard: "192.168.17.129:27031" })
{ "shardAdded" : "shard0000", "ok" : 1 }
......
mongos> db.runCommand({ addshard: "192.168.17.129:27032" })
{ "shardAdded" : "shard0001", "ok" : 1 }

Step 5: Enable Sharding for the test Database

# enable sharding on the "test" database
mongos> db.runCommand({ enablesharding: "test" })
{ "ok" : 1 }

Step 6: Shard a Collection

mongos> db.runCommand({ shardcollection: "test.mycol", key: { _id: 1 } })
{ "collectionsharded" : "test.mycol", "ok" : 1 }

Step 7: Test

mongo test --port 27777

Insert 10000 documents:

use test

var num = 10000
for (var i = 0; i < num; i++) {
    db.mycol.save({ '_id': i })
}

No major changes are needed in application code: connect to the mongos on port 27777 exactly as you would connect to an ordinary mongod database.
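
As a small sketch, from the legacy mongo shell the only change is the connection target (the host and port are the ones used in this walkthrough):

// connect through the mongos just like a single mongod
var conn = new Mongo("127.0.0.1:27777");
var sdb = conn.getDB("test");
print(sdb.mycol.count());   // reads and writes are routed transparently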

Step 8: Check the Sharding Status

In this example the sharding status is checked on the config server, and it must be run in the admin database (e.g. mongo 127.0.0.1:27100/admin); it can also be run from the mongos.

mongo admin --port 27100   # run on the config server
sh.status()

#  Output:
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("57cfcdfef06b33543fdeb52e")
}
  shards:
    {  "_id" : "shard0000",  "host" : "localhost:27031" }
    {  "_id" : "shard0001",  "host" : "localhost:27032" }
  active mongoses:
    "3.2.7" : 1
  balancer:
    Currently enabled:  yes
    Currently running:  no
    Failed balancer rounds in last 5 attempts:  0
    Migration Results for the last 24 hours: 
        1 : Success
  databases:
    {  "_id" : "test",  "primary" : "shard0000",  "partitioned" : true }
        test.mycol
            shard key: { "_id" : 1 }
            unique: false
            balancing: true
            chunks:
                shard0000    2
                shard0001    1
            { "_id" : { "$minKey" : 1 } } -->> { "_id" : 1 } on : shard0001 Timestamp(2, 0)
            { "_id" : 1 } -->> { "_id" : 57 } on : shard0000 Timestamp(2, 1)
            { "_id" : 57 } -->> { "_id" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 3)

4. Hashed Sharding

The biggest advantage of a hashed shard key is that the data ends up distributed roughly evenly across the nodes. A simple test using _id as the hashed key:

mongo admin --port 27777
mongos> db.runCommand({ shardcollection: "test.myhash", key: { _id: "hashed" } })
{ "collectionsharded" : "test.myhash", "ok" : 1 }
use test
var num = 10000
for (var i = 0; i < num; i++) {
    db.myhash.save({ '_id': i })
}

Summary: hashed sharding hashes the provided shard key value into a very large long integer, which is used as the actual shard key value.
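
For reference, newer mongo shells (roughly version 4.0 and later, so newer than the shell used above) expose this hash function directly, which makes it easy to inspect the long integer a given key maps to; a small sketch:

// inspect the 64-bit hash MongoDB derives from a shard key value
convertShardKeyToHashed(1)       // returns a NumberLong (the hashed key)
convertShardKeyToHashed("abc")   // works for any BSON value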

Copyright notice: this article was written by [cui_yonghua]; please include a link to the original when reposting. Thanks.
https://yzsam.com/2022/188/202207071117314398.html