MongoDB Sharding Summary
2022-07-07 13:12:00 【cui_ yonghua】
Basics (enough to solve about 80% of problems):
MongoDB data types, key concepts, and common shell commands
MongoDB document insert, update, and delete operations: a summary
Advanced:
Other:
1. Sharding Overview
Sharding
refers to splitting data and spreading it across different machines, sometimes also described as partitioning. By distributing the data over multiple machines, you do not need a single powerful mainframe to store large volumes of data, which lets MongoDB keep up with rapidly growing data sets.
When MongoDB stores massive amounts of data, a single machine may not have enough capacity, or may not deliver acceptable read and write throughput. In that case the data can be split across multiple machines, so the database system can store and process more data.
Note:
A replica set provides automatic failover, master-slave replication, and clustering. It solves data redundancy (backup) and high availability of the architecture, but it cannot relieve the pressure on a single node (hardware limits, concurrent access pressure).
Why use sharding:
1. The local disk is not large enough.
2. Memory becomes insufficient when the request volume is high.
3. Vertical scaling (memory, disk, CPU) is expensive.
2. Sharded Cluster Architecture
MongoDB distributes data using the following sharded cluster structure:
[Figure: MongoDB sharded cluster architecture]
The figure above shows three main components:
Shard: stores the actual data chunks. In a real production environment, each shard server role is usually carried by a replica set of several machines, to prevent a single point of failure on one host.
Config Server: a mongod instance that stores the metadata of the whole cluster, including the chunk information.
Query Routers: the front-end routers that clients connect through. They make the whole cluster look like a single database, so front-end applications can use it transparently.
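As a quick illustration of the router's role, here is a minimal sketch (assuming the cluster built in the walkthrough below, including the test.mycol collection used there): applications talk only to mongos and never address the shards directly.
// Connect to the mongos router, e.g.: mongo 192.168.17.129:27777/test
// mongos looks up the chunk metadata held by the config server and forwards
// the query to whichever shard owns the matching _id range.
db.mycol.find({ _id: 42 })
// The same statement works unchanged against a standalone mongod -- the
// sharded cluster is transparent to the application.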
3. Sharding Example
The example uses the following port layout:
Shard Server 1: 27031
Shard Server 2: 27032
Config Server: 27100
Route Process: 27777
Step 1: Start the Shard Servers
sudo rm -rf /MongoDB/shard/s1 /MongoDB/shard/s2 /MongoDB/shard/log
sudo mkdir -p /MongoDB/shard/s1 /MongoDB/shard/s2 /MongoDB/shard/log
sudo mongod --port 27031 --dbpath=/MongoDB/shard/s1
sudo mongod --port 27032 --dbpath=/MongoDB/shard/s2
Step 2: Start the Config Server
sudo rm -rf /MongoDB/shard/config
sudo mkdir -p /MongoDB/shard/config
sudo mongod --port 27100 --dbpath=/MongoDB/shard/config
Note: these processes can be started just like ordinary mongod services, without adding the --shardsvr and --configsvr parameters. What those two parameters do here is change the default startup port, and since we specify the ports ourselves they can be omitted.
Step 3: Start the Route Process (mongos)
mongos --port 27777 --configdb 192.168.17.129:27100
Step 4: Configure Sharding
Next, log in to mongos with the MongoDB shell and add the shard nodes:
mongo admin --port 27777
MongoDB shell version: 2.0.7
connecting to: 127.0.0.1:27777/admin
mongos> db.runCommand({ addshard: "192.168.17.129:27031" })
{ "shardAdded" : "shard0000", "ok" : 1 }
......
mongos> db.runCommand({ addshard: "192.168.17.129:27032" })
{ "shardAdded" : "shard0001", "ok" : 1 }
Step 5: Enable sharding for the test database
# Enable sharding on the target database
mongos> db.runCommand({ enablesharding: "test" })
{ "ok" : 1 }
Step 6: Shard the collection
mongos> db.runCommand({ shardcollection: "test.mycol", key: { _id: 1 } })
{ "collectionsharded" : "test.mycol", "ok" : 1 }
Step 7: Test
mongo test --port 27777
Insert 10,000 documents:
use test
var num = 10000
for (var i = 0; i < num; i++) {
    db.mycol.save({ '_id': i })
}
No major changes are needed in application code: connect to port 27777 exactly as you would connect to an ordinary MongoDB database, and use it through that interface.
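A quick sanity check through mongos (a sketch, assuming all 10,000 inserts above succeeded):
mongos> db.mycol.count()              // 10000 -- mongos aggregates the count across both shards
mongos> db.mycol.find({ _id: 9999 })  // routed to whichever shard owns that _id range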
Step 8: Check the sharding status
When checking the sharding status this way, it must be executed on the config server, and in the admin database (e.g. mongo 127.0.0.1:27100/admin):
mongo admin --port 27100   # run on the config server
sh.status()
# The output looks like this:
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("57cfcdfef06b33543fdeb52e")
  }
  shards:
    { "_id" : "shard0000", "host" : "localhost:27031" }
    { "_id" : "shard0001", "host" : "localhost:27032" }
  active mongoses:
    "3.2.7" : 1
  balancer:
    Currently enabled: yes
    Currently running: no
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
      1 : Success
  databases:
    { "_id" : "test", "primary" : "shard0000", "partitioned" : true }
      test.mycol
        shard key: { "_id" : 1 }
        unique: false
        balancing: true
        chunks:
          shard0000  2
          shard0001  1
        { "_id" : { "$minKey" : 1 } } -->> { "_id" : 1 } on : shard0001 Timestamp(2, 0)
        { "_id" : 1 } -->> { "_id" : 57 } on : shard0000 Timestamp(2, 1)
        { "_id" : 57 } -->> { "_id" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 3)
4. Hashed Sharding
The biggest advantage of a hashed shard key is that the data is spread roughly evenly across the nodes. A simple test using _id as the hashed key:
mongo admin --port 27777
mongos> db.runCommand({ shardcollection: "test.myhash", key: { _id: "hashed" } })
{ "collectionsharded" : "test.myhash", "ok" : 1 }
use test
var num = 10000
for (var i = 0; i < num; i++) {
    db.myhash.save({ '_id': i })
}
Summary: hashed sharding hashes the provided shard key into a very large long integer, which is then used as the actual shard key.
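To see how evenly the hashed key spreads the documents compared with the ranged key above, the same distribution helper can be used (a sketch, assuming the 10,000 inserts above):
mongos> use test
mongos> db.myhash.getShardDistribution()
// with a hashed _id, the document counts on shard0000 and shard0001 should be
// close to 50/50, unlike the ranged-key collection test.mycol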