2022-01-27 Research on the minimum number of Redis Cluster shards
2022-07-03 12:54:00 【a tracer】
Abstract:
In order to surpass Alibaba Cloud and become the strongest distributed Redis cloud database in China, we discuss the minimum number of shards a Redis Cluster can run with, and analyze the problems caused by reducing the shard count.
Cluster: modify only the master-count restriction enforced at cluster creation time
The code hard-codes a minimum of 3 master nodes; modify the source directly to support fewer shards.
```c
int node_len = cluster_manager.nodes->len;
int replicas = config.cluster_manager_command.replicas;
int masters_count = CLUSTER_MANAGER_MASTERS_COUNT(node_len, replicas);
if (masters_count < 3) {
    clusterManagerLogErr(
        "*** ERROR: Invalid configuration for cluster creation.\n"
        "*** Redis Cluster requires at least 3 master nodes.\n"
        "*** This is not possible with %d nodes and %d replicas per node.",
        node_len, replicas);
    clusterManagerLogErr("\n*** At least %d nodes are required.\n",
                         3 * (replicas + 1));
    return 0;
}
```
Cluster with 2 shards
Modify the source directly so that the minimum number of master nodes is 2:
```c
int node_len = cluster_manager.nodes->len;
int replicas = config.cluster_manager_command.replicas;
int masters_count = CLUSTER_MANAGER_MASTERS_COUNT(node_len, replicas);
if (masters_count < 2) {
    clusterManagerLogErr(
        "*** ERROR: Invalid configuration for cluster creation.\n"
        "*** Redis Cluster requires at least 2 master nodes.\n"
        "*** This is not possible with %d nodes and %d replicas per node.",
        node_len, replicas);
    clusterManagerLogErr("\n*** At least %d nodes are required.\n",
                         2 * (replicas + 1));
    return 0;
}
```
The 2-shard cluster is created successfully:
```
>>> Performing hash slots allocation on 4 nodes...
Master[0] -> Slots 0 - 8191
Master[1] -> Slots 8192 - 16383
Adding replica 192.168.209.128:7003 to 192.168.209.128:7000
Adding replica 192.168.209.128:7002 to 192.168.209.128:7001
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 61b6f4d7d863b89033e3d1b2b58c791d45508da9 192.168.209.128:7000
   slots:[0-8191] (8192 slots) master
M: 064ed2aadaab08337948dc5741044e3b6bdfae7e 192.168.209.128:7001
   slots:[8192-16383] (8192 slots) master
S: 1e8c9092b4f5554c5f85d0c3e34693891eaf2287 192.168.209.128:7002
   replicates 064ed2aadaab08337948dc5741044e3b6bdfae7e
S: dd8423705b7c3a71f34b78ec0c4c625c5cf7876e 192.168.209.128:7003
   replicates 61b6f4d7d863b89033e3d1b2b58c791d45508da9
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
>>> Performing Cluster Check (using node 192.168.209.128:7000)
M: 61b6f4d7d863b89033e3d1b2b58c791d45508da9 192.168.209.128:7000
   slots:[0-8191] (8192 slots) master
   1 additional replica(s)
M: 064ed2aadaab08337948dc5741044e3b6bdfae7e 192.168.209.128:7001
   slots:[8192-16383] (8192 slots) master
   1 additional replica(s)
S: 1e8c9092b4f5554c5f85d0c3e34693891eaf2287 192.168.209.128:7002
   slots: (0 slots) slave
   replicates 064ed2aadaab08337948dc5741044e3b6bdfae7e
S: dd8423705b7c3a71f34b78ec0c4c625c5cf7876e 192.168.209.128:7003
   slots: (0 slots) slave
   replicates 61b6f4d7d863b89033e3d1b2b58c791d45508da9
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
```
```
[email protected]:~/work/redis/cluster# redis-cli -p 7001 cluster nodes
1e8c9092b4f5554c5f85d0c3e34693891eaf2287 192.168.209.128:[email protected] slave 064ed2aadaab08337948dc5741044e3b6bdfae7e 0 1625474320603 2 connected
064ed2aadaab08337948dc5741044e3b6bdfae7e 192.168.209.128:[email protected] myself,master - 0 1625474320000 2 connected 8192-16383
dd8423705b7c3a71f34b78ec0c4c625c5cf7876e 192.168.209.128:[email protected] slave 61b6f4d7d863b89033e3d1b2b58c791d45508da9 0 1625474321616 4 connected
61b6f4d7d863b89033e3d1b2b58c791d45508da9 192.168.209.128:[email protected] master - 1625474312956 1625474312036 1 disconnected 0-8191
```
However, when a failover is needed, the masters still able to vote are no longer more than half of the cluster: with 2 masters the required quorum is (2 / 2) + 1 = 2 votes, but only the surviving master can vote, so the election can never be won:
```
1e8c9092b4f5554c5f85d0c3e34693891eaf2287 192.168.209.128:[email protected] slave 064ed2aadaab08337948dc5741044e3b6bdfae7e 0 1625474654684 2 connected
064ed2aadaab08337948dc5741044e3b6bdfae7e 192.168.209.128:[email protected] myself,master - 0 1625474653000 2 connected 8192-16383
dd8423705b7c3a71f34b78ec0c4c625c5cf7876e 192.168.209.128:[email protected] slave 61b6f4d7d863b89033e3d1b2b58c791d45508da9 0 1625474654576 4 connected
61b6f4d7d863b89033e3d1b2b58c791d45508da9 192.168.209.128:[email protected] master,fail? - 1625474312956 1625474312036 1 disconnected 0-8191
```
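A minimal standalone sketch of the arithmetic (illustrative only, not the actual cluster.c code): Redis Cluster requires agreement from a majority of the masters, roughly (size / 2) + 1, both to promote PFAIL to FAIL and to grant a failover vote, and a 1- or 2-master cluster cannot reach that majority once a master is down:

```c
#include <stdio.h>

/* Majority quorum as Redis Cluster computes it: more than half of the
 * known masters must agree (grant a vote, or report a node as failing). */
static int needed_quorum(int masters) {
    return masters / 2 + 1;
}

static void check(int masters, int alive_voters) {
    printf("masters=%d needed=%d available=%d -> %s\n",
           masters, needed_quorum(masters), alive_voters,
           alive_voters >= needed_quorum(masters) ? "failover possible"
                                                  : "failover stuck");
}

int main(void) {
    check(2, 1); /* 2-shard cluster, one master just died: 1 voter left, 2 needed */
    check(1, 0); /* 1-shard cluster: the only possible voter is the dead master   */
    check(3, 2); /* stock 3-shard minimum: 2 voters left, 2 needed -> election OK */
    return 0;
}
```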
Cluster with 1 shard
A cluster can still be assembled:
```
>>> Performing hash slots allocation on 2 nodes...
Master[0] -> Slots 0 - 16383
Adding replica 192.168.209.128:7001 to 192.168.209.128:7000
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 744e19b475557d1f17ca2fa65f68845a4fd55da9 192.168.209.128:7000
   slots:[0-16383] (16384 slots) master
S: f0b6975f53c285174f71bf0ccfa72b8f48ceaba3 192.168.209.128:7001
   replicates 744e19b475557d1f17ca2fa65f68845a4fd55da9
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
>>> Performing Cluster Check (using node 192.168.209.128:7000)
M: 744e19b475557d1f17ca2fa65f68845a4fd55da9 192.168.209.128:7000
   slots:[0-16383] (16384 slots) master
   1 additional replica(s)
S: f0b6975f53c285174f71bf0ccfa72b8f48ceaba3 192.168.209.128:7001
   slots: (0 slots) slave
   replicates 744e19b475557d1f17ca2fa65f68845a4fd55da9
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
```
```
[email protected]:~/work/redis/cluster# redis-cli -p 7001 cluster nodes
744e19b475557d1f17ca2fa65f68845a4fd55da9 192.168.209.128:[email protected] master - 1625474846789 1625474846585 1 disconnected 0-16383
f0b6975f53c285174f71bf0ccfa72b8f48ceaba3 192.168.209.128:[email protected] myself,slave 744e19b475557d1f17ca2fa65f68845a4fd55da9 0 0 2 connected
```
But failover cannot happen: with a single master there is no other master left to grant a vote to the replica:
```
[email protected]:~/work/redis/cluster# redis-cli -p 7001 cluster nodes
744e19b475557d1f17ca2fa65f68845a4fd55da9 192.168.209.128:[email protected] master,fail? - 1625474846789 1625474846585 1 disconnected 0-16383
f0b6975f53c285174f71bf0ccfa72b8f48ceaba3 192.168.209.128:[email protected] myself,slave 744e19b475557d1f17ca2fa65f68845a4fd55da9 0 0 2 connected
```
Summary of modifying only the cluster-creation restriction:
Modifying the cluster-creation code alone can reduce the number of shards, but because the failover module is not adapted as well, the cluster cannot provide its functions normally.
Scheme 1: a Redis Cluster that supports as few as 1 master — full code modification
Based on the analysis above, a cluster that supports as few as 1 master needs the following changes:
- The master-count restriction when creating the cluster
- The vote-count requirement when deciding that a node has failed:
  - A cluster with 2 masters can muster only 1 vote
    - As long as at least 1 master is healthy, the cluster is considered healthy
    - If both masters are down, every master in the cluster is down and the cluster cannot serve requests
  - A cluster with 1 master can muster 0 votes
    - Once a node is judged PFAIL, set it to FAIL immediately
    - If both the master and its slave are down, the cluster is down and cannot serve requests
- The vote-count requirement when a slave starts a failover:
  - A cluster with 2 masters can muster only 1 vote
    - As long as at least 1 master is healthy, the cluster is considered healthy
    - If both masters are down, every master in the cluster is down and the cluster cannot serve requests
  - A cluster with 1 master can muster 0 votes
    - Once a node is judged PFAIL, set it to FAIL immediately
    - If both the master and its slave are down, the cluster is down and cannot serve requests
Code changes:
clusterManagerCommandCreate: relax the minimum-master limit for cluster creation from 3 masters to 1:
```c
int node_len = cluster_manager.nodes->len;
int replicas = config.cluster_manager_command.replicas;
int masters_count = CLUSTER_MANAGER_MASTERS_COUNT(node_len, replicas);
if (masters_count < 1) {
    clusterManagerLogErr(
        "*** ERROR: Invalid configuration for cluster creation.\n"
        "*** Redis Cluster requires at least 1 master nodes.\n"
        "*** This is not possible with %d nodes and %d replicas per node.",
        node_len, replicas);
    clusterManagerLogErr("\n*** At least %d nodes are required.\n",
                         1 * (replicas + 1));
    return 0;
}
```
clusterHandleSlaveFailover: when a slave decides whether it can fail over and promote itself to master, relax the number of votes it needs:
```c
if (server.cluster->size < 3) {
    needed_quorum -= 1;
}
```
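For context, the fragment below is an abridged sketch (not a verbatim excerpt) of where this lands in clusterHandleSlaveFailover: the stock code derives the quorum from the number of masters and promotes the replica only after enough votes have arrived, so the patch lowers the bar to 1 vote for a 2-master cluster and to 0 votes for a 1-master cluster.

```c
/* Abridged sketch of clusterHandleSlaveFailover with the patch applied;
 * the real function also handles ranking, delays and manual failover. */
int needed_quorum = (server.cluster->size / 2) + 1;
if (server.cluster->size < 3) {
    needed_quorum -= 1;   /* 2 masters -> quorum 1, 1 master -> quorum 0 */
}
/* ... ask the other masters for their vote ... */
if (server.cluster->failover_auth_count >= needed_quorum) {
    /* Enough masters granted their vote: take over the failed master. */
    clusterFailoverReplaceYourMaster();
}
```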
clusterCron: when a node is judged to have timed out and the cluster has fewer than 3 masters, set it directly to FAIL instead of PFAIL (possible fail):
```c
if (delay > server.cluster_node_timeout) {
    /* Timeout reached. Set the node as possibly failing if it is
     * not already in this state. */
    if (!(node->flags & (CLUSTER_NODE_PFAIL|CLUSTER_NODE_FAIL))) {
        serverLog(LL_DEBUG,"*** NODE %.40s possibly failing",
            node->name);
        if (server.cluster->size < 3) {
            node->flags |= CLUSTER_NODE_FAIL;
        } else {
            node->flags |= CLUSTER_NODE_PFAIL;
        }
        update_state = 1;
    }
}
```
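The reason this shortcut is needed (the fragment below is an abridged sketch of the stock markNodeAsFailingIfNeeded in cluster.c, not a verbatim excerpt): normally PFAIL is upgraded to FAIL only after a majority of masters have reported the node as down, and a cluster with only 1 or 2 masters can never reach that majority once a master dies.

```c
/* Abridged sketch of the stock PFAIL -> FAIL promotion that the
 * clusterCron change above bypasses for clusters with < 3 masters. */
int needed_quorum = (server.cluster->size / 2) + 1;
int failures = clusterNodeFailureReportsCount(node);
if (nodeIsMaster(myself)) failures++;   /* count our own PFAIL view */
if (failures < needed_quorum) return;   /* not enough reports: stays PFAIL */
node->flags &= ~CLUSTER_NODE_PFAIL;
node->flags |= CLUSTER_NODE_FAIL;       /* majority reached: mark FAIL */
```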
Test results:
2 shards:
```
[email protected]:~/work/redis/cluster# redis-cli -p 7001 cluster nodes
936f19d1be23c11f3072c75a558a6cab674250d4 192.168.209.128:[email protected] master - 0 1625477918714 1 connected 0-8191
b67dbe1d90d7415b41d62052d1f439cb72ba62d4 192.168.209.128:[email protected] slave 936f19d1be23c11f3072c75a558a6cab674250d4 0 1625477919237 4 connected
bbe94f4bfc4471afa39dfd392245a8483d5c6f1e 192.168.209.128:[email protected] slave 62b189326aaebc2b8556febb9dbca4f90875def7 0 1625477918000 3 connected
62b189326aaebc2b8556febb9dbca4f90875def7 192.168.209.128:[email protected] myself,master - 0 1625477918000 2 connected 8192-16383
[email protected]:~/work/redis/cluster#
[email protected]:~/work/redis/cluster#
[email protected]:~/work/redis/cluster# ps -ef |
> ^C
[email protected]:~/work/redis/cluster# ps -ef | grep redis | grep 7000
root      110971       1  0 05:38 pts/0    00:00:00 redis-server *:7000 [cluster]
[email protected]:~/work/redis/cluster# kill 110971
[email protected]:~/work/redis/cluster# redis-cli -p 7001 cluster nodes
936f19d1be23c11f3072c75a558a6cab674250d4 192.168.209.128:[email protected] master - 1625477937263 1625477937058 1 disconnected 0-8191
b67dbe1d90d7415b41d62052d1f439cb72ba62d4 192.168.209.128:[email protected] slave 936f19d1be23c11f3072c75a558a6cab674250d4 0 1625477939090 4 connected
bbe94f4bfc4471afa39dfd392245a8483d5c6f1e 192.168.209.128:[email protected] slave 62b189326aaebc2b8556febb9dbca4f90875def7 0 1625477938000 3 connected
62b189326aaebc2b8556febb9dbca4f90875def7 192.168.209.128:[email protected] myself,master - 0 1625477937000 2 connected 8192-16383
[email protected]:~/work/redis/cluster# redis-cli -p 7001 cluster nodes
936f19d1be23c11f3072c75a558a6cab674250d4 192.168.209.128:[email protected] master,fail - 1625477937263 1625477937058 1 disconnected
b67dbe1d90d7415b41d62052d1f439cb72ba62d4 192.168.209.128:[email protected] master - 0 1625477941627 5 connected 0-8191
bbe94f4bfc4471afa39dfd392245a8483d5c6f1e 192.168.209.128:[email protected] slave 62b189326aaebc2b8556febb9dbca4f90875def7 0 1625477941119 3 connected
62b189326aaebc2b8556febb9dbca4f90875def7 192.168.209.128:[email protected] myself,master - 0 1625477940000 2 connected 8192-16383
```
1 shard:
```
[email protected]:~/work/redis/cluster# bash create.sh
>>> Performing hash slots allocation on 2 nodes...
Master[0] -> Slots 0 - 16383
Adding replica 192.168.209.128:7001 to 192.168.209.128:7000
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 55c6c900e6b8cf18b017449284ce6f27f74f5883 192.168.209.128:7000
   slots:[0-16383] (16384 slots) master
S: 494cec9731c8c09d9821a4bc26b2498c0ff6a286 192.168.209.128:7001
   replicates 55c6c900e6b8cf18b017449284ce6f27f74f5883
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
>>> Performing Cluster Check (using node 192.168.209.128:7000)
M: 55c6c900e6b8cf18b017449284ce6f27f74f5883 192.168.209.128:7000
   slots:[0-16383] (16384 slots) master
   1 additional replica(s)
S: 494cec9731c8c09d9821a4bc26b2498c0ff6a286 192.168.209.128:7001
   slots: (0 slots) slave
   replicates 55c6c900e6b8cf18b017449284ce6f27f74f5883
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
0
[email protected]:~/work/redis/cluster# redis-cli -p 7001 cluster nodes
55c6c900e6b8cf18b017449284ce6f27f74f5883 192.168.209.128:[email protected] master - 0 1625478602937 1 connected 0-16383
494cec9731c8c09d9821a4bc26b2498c0ff6a286 192.168.209.128:[email protected] myself,slave 55c6c900e6b8cf18b017449284ce6f27f74f5883 0 0 2 connected
[email protected]:~/work/redis/cluster#
[email protected]:~/work/redis/cluster# redis-cli -p 7001 cluster failover
OK
[email protected]:~/work/redis/cluster# redis-cli -p 7001 cluster nodes
55c6c900e6b8cf18b017449284ce6f27f74f5883 192.168.209.128:[email protected] slave 494cec9731c8c09d9821a4bc26b2498c0ff6a286 0 1625478611234 3 connected
494cec9731c8c09d9821a4bc26b2498c0ff6a286 192.168.209.128:[email protected] myself,master - 0 0 3 connected 0-16383
[email protected]:~/work/redis/cluster# ps -ef | grep redis | grep 7001
root      111274       1  0 05:49 pts/0    00:00:00 redis-server *:7001 [cluster]
[email protected]:~/work/redis/cluster# kill 111274
[email protected]:~/work/redis/cluster# redis-cli -p 7001 cluster nodes
Could not connect to Redis at 127.0.0.1:7001: Connection refused
[email protected]:~/work/redis/cluster# redis-cli -p 7000 cluster nodes
55c6c900e6b8cf18b017449284ce6f27f74f5883 192.168.209.128:[email protected] myself,master - 0 0 4 connected 0-16383
494cec9731c8c09d9821a4bc26b2498c0ff6a286 192.168.209.128:[email protected] master,fail - 1625478634680 1625478633659 3 disconnected
[email protected]:~/work/redis/cluster# redis-cli -p 7000 cluster nodes
55c6c900e6b8cf18b017449284ce6f27f74f5883 192.168.209.128:[email protected] myself,master - 0 0 4 connected 0-16383
494cec9731c8c09d9821a4bc26b2498c0ff6a286 192.168.209.128:[email protected] master,fail - 1625478634680 1625478633659 3 disconnected
[email protected]:~/work/redis/cluster# cd 7001
[email protected]:~/work/redis/cluster/7001# redis-server ./redis.conf &
[1] 111347
[email protected]:~/work/redis/cluster/7001# cd ..
[email protected]:~/work/redis/cluster# redis-cli -p 7000 cluster nodes
55c6c900e6b8cf18b017449284ce6f27f74f5883 192.168.209.128:[email protected] myself,master - 0 0 4 connected 0-16383
494cec9731c8c09d9821a4bc26b2498c0ff6a286 192.168.209.128:[email protected] slave 55c6c900e6b8cf18b017449284ce6f27f74f5883 0 1625478663466 4 connected
```
After the above modifications, failover works normally.
Scheme 2: do not use Redis Cluster's own failover module; instead, adopt a model similar to standalone Sentinel, where an independent third-party service performs the failover.
The option under consideration is to use ZooKeeper, borrowing the ZKFC (ZKFailoverController) idea, but the availability of such a deployment needs further discussion.
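As a very rough sketch only (my own illustration, not part of the original research; the lock path, ports, ZooKeeper address and the use of CLUSTER FAILOVER TAKEOVER are all assumptions), a ZKFC-style agent could run next to each Redis node, hold an ephemeral ZooKeeper znode as a master lock, and promote its local replica when the lock becomes free:

```c
/* zk_failover_agent.c -- hypothetical sketch, build with: gcc ... -lzookeeper_mt
 * Each agent tries to create an ephemeral znode as the shard's master lock.
 * The znode disappears automatically when the holder's session dies, so the
 * standby agent eventually acquires it and promotes its local replica.
 * A real agent would wait for ZOO_CONNECTED_STATE, create the parent path,
 * and verify the local Redis node's health before taking over. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <zookeeper/zookeeper.h>

#define LOCK_PATH "/redis-shard-0/master-lock"   /* parent znode must exist */

static void watcher(zhandle_t *zh, int type, int state,
                    const char *path, void *ctx) {
    /* Session and node events arrive here; the polling loop below keeps
     * the example simple instead of reacting to watches. */
    (void)zh; (void)type; (void)state; (void)path; (void)ctx;
}

int main(int argc, char **argv) {
    const char *my_id = (argc > 1) ? argv[1] : "127.0.0.1:7001";
    zhandle_t *zh = zookeeper_init("127.0.0.1:2181", watcher, 30000, 0, 0, 0);
    if (!zh) { fprintf(stderr, "zookeeper_init failed\n"); return 1; }

    for (;;) {
        /* Try to take the master lock with an ephemeral znode. */
        int rc = zoo_create(zh, LOCK_PATH, my_id, (int)strlen(my_id),
                            &ZOO_OPEN_ACL_UNSAFE, ZOO_EPHEMERAL, NULL, 0);
        if (rc == ZOK) {
            /* We own the lock: force the local replica to take over the
             * slots without waiting for a cluster vote. */
            system("redis-cli -p 7001 cluster failover takeover");
        } else if (rc != ZNODEEXISTS) {
            fprintf(stderr, "zoo_create: %s\n", zerror(rc));
        }
        sleep(2);   /* standby: retry until the current holder disappears */
    }
}
```

Whether such an agent can itself be deployed with enough availability (it inherits ZooKeeper's own quorum requirements) is exactly the open question above.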