Masterless Replication (1): Writing to the Database When a Node Fails
2022-07-31 16:39:00 [Huawei Cloud]
The idea behind single-master and multi-master replication is that a client sends a write request to a master node, and the database system is responsible for copying that write to the other replicas. The master decides the order of writes, and the followers apply the master's write log in that same order.
Some data storage systems take a different approach: they abandon the master node entirely and allow any replica to accept writes directly from clients. The earliest replicated data systems were masterless (also called decentralized or leaderless replication), but the idea was largely forgotten during the era of relational database dominance. After Amazon used it for its in-house Dynamo system[^vi], it became a popular database architecture once again. Riak, Cassandra, and Voldemort are all open-source data stores with a masterless replication model inspired by Dynamo, so such databases are also known as Dynamo-style.
[^vi]: Dynamo is not available to users outside Amazon. Confusingly, AWS offers a managed database product called DynamoDB that uses a completely different architecture: it is based on single-leader replication.
In some masterless implementations, the client sends its write requests directly to multiple replicas; in others, a coordinator node writes on behalf of the client. Unlike the master in a leader-based database, however, the coordinator does not enforce a particular write order. This design difference has profound consequences for how the database is used.
4.1 Writing to the Database When a Node Fails
Suppose a database has three replicas and one of them is currently unavailable, perhaps rebooting to install system updates. Under a leader-based replication model, a failover would be needed in order to continue processing writes.
Under the masterless model, no such failover exists.
Figure-10: The client (user 1234) sends the write request to all three replicas in parallel; the two available replicas accept the write, while the unavailable replica misses it. Suppose it is sufficient for two out of three replicas to acknowledge the write: after user 1234 receives two confirmation responses, the write is considered successful. The fact that one replica missed the write can simply be ignored.
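The write path in Figure-10 can be sketched as a quorum write: send the write to all replicas in parallel and declare success once `w` acknowledgements arrive. This is a minimal illustration, not a real client library; the replica objects and their `write()` method are assumptions made for the example.

```python
# Hypothetical sketch of a quorum write: the write goes to all replicas in
# parallel and succeeds once w of them acknowledge it. An unavailable
# replica simply raises, and its failure is ignored.
from concurrent.futures import ThreadPoolExecutor, as_completed

def quorum_write(replicas, key, value, version, w=2):
    """Return True once at least w replicas acknowledge the write."""
    acks = 0
    with ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        futures = [pool.submit(r.write, key, value, version) for r in replicas]
        for fut in as_completed(futures):
            try:
                if fut.result():
                    acks += 1
            except Exception:
                pass  # unavailable replica: its failure is simply ignored
            if acks >= w:
                return True
    return acks >= w
```

With three replicas and `w=2`, the write in Figure-10 succeeds even though one replica is down, exactly as the figure describes.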
Now the failed node comes back online and clients start reading from it. Any writes that happened while the node was down are missing from it, so reads may return stale data.
To solve this problem, when a client reads from the database, it does not send its request to just one replica but to several replicas in parallel. The client may receive different responses from different nodes, i.e. the up-to-date value from one node and a stale value from another. Version numbers are used to determine which value is newer.
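The read side can be sketched the same way: query every replica, require at least `r` successful responses, and keep the value carrying the highest version number. The `read()` method returning a `(version, value)` pair is an assumption for illustration.

```python
# Minimal sketch of a quorum read: ask all replicas for the key, tolerate
# unreachable ones, and pick the response with the newest version number.
def quorum_read(replicas, key, r=2):
    """Return the (version, value) pair with the highest version among
    at least r successful responses; raise if the quorum is not reached."""
    responses = []
    for replica in replicas:
        try:
            responses.append(replica.read(key))  # assumed -> (version, value)
        except Exception:
            continue                             # skip unreachable replicas
    if len(responses) < r:
        raise RuntimeError("read quorum not reached")
    return max(responses, key=lambda pair: pair[0])
```

If one node returns version 6 and another version 7, the comparison on version numbers resolves the conflict in favor of the newer value.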
4.1.1 Read Repair and Anti-Entropy
The replication scheme should ensure that all data is eventually copied to every replica. After a failed node comes back online, how does it catch up on the writes it missed?
Dynamo-style data stores use two mechanisms:
Read repair
When a client reads from several replicas in parallel, it can detect stale responses. As shown in Figure-10, user 2345 gets version 6 from replica 3 but version 7 from replicas 1 and 2. The client can see that replica 3 holds a stale value and write the newer value back to that replica. This approach works well for read-intensive workloads.
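Read repair can be sketched as an extension of the parallel read above: after collecting the responses, the client writes the newest value back to any replica that returned an older version. The replica `read()`/`write()` methods are illustrative assumptions, mirroring Figure-10 where replica 3 holds version 6 while replicas 1 and 2 hold version 7.

```python
# Hedged sketch of read repair: read from all reachable replicas, find the
# newest (version, value) pair, then push it back to any stale replica.
def read_with_repair(replicas, key):
    results = {}
    for replica in replicas:
        try:
            results[replica] = replica.read(key)  # assumed -> (version, value)
        except Exception:
            continue                              # unreachable replica: skip
    if not results:
        return None
    newest_version, newest_value = max(results.values(), key=lambda p: p[0])
    for replica, (version, _) in results.items():
        if version < newest_version:
            replica.write(key, newest_value, newest_version)  # repair stale copy
    return newest_value
```

Because the repair only happens when the key is actually read, frequently read keys converge quickly while rarely read keys may stay stale, which is exactly the limitation discussed below.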
Anti-entropy process
Some data stores have a background process that constantly looks for differences between replicas and copies any missing data from one replica to another. Unlike the replication log in leader-based replication, this anti-entropy process does not copy writes in any particular order, and there may be a significant delay before data is propagated.
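One pass of such a background process can be sketched as a bidirectional, newest-version-wins comparison. Modeling each replica as a plain dict mapping key to a `(version, value)` pair is an assumption for illustration; real systems avoid full key scans by comparing hashes of key ranges (Cassandra, for example, compares Merkle trees during repair).

```python
# Illustrative sketch of one anti-entropy pass: walk the union of both
# replicas' keys and copy whichever version is newer to the other side.
# Note there is no ordering guarantee: keys are synchronized in arbitrary
# order, unlike a leader's replication log.
def anti_entropy_pass(replica_a, replica_b):
    """Synchronize two replicas in both directions; the newest version wins."""
    for key in set(replica_a) | set(replica_b):
        a = replica_a.get(key, (0, None))  # (version, value); 0 = missing
        b = replica_b.get(key, (0, None))
        if a[0] > b[0]:
            replica_b[key] = a
        elif b[0] > a[0]:
            replica_a[key] = b
```

Running such a pass periodically lets a recovered node catch up on missed writes even for keys that are never read, complementing read repair.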
Not all systems implement both schemes; for example, Voldemort currently has no anti-entropy process. Note that without an anti-entropy process, read repair can only fix a stale value at the moment it is read, so values that are rarely read may be missing from some replicas without ever being detected, which reduces the durability of writes.