Redis: exploring cache consistency
2022-07-01 12:46:00 【Bronze God】
What is consistency?
Consistency here means that, for requests arriving at the same moment, the data in the cache matches the data in the database.
Strong consistency: the cache update and the database update are atomic, so the cache and the database agree at every point in time. This is the hardest level to achieve.
Weak consistency: after the data is updated, a read from the cache may return either the old value or the new value, because the update is propagated asynchronously.
Eventual consistency: a special case of weak consistency in which the data converges to a consistent state after some period of time. It is the ideal form of weak consistency and the one most commonly recommended in distributed-system consistency solutions.
Update strategy
There are four possible update strategies (sketched in code just after this list):
(1) Update the cache first, then update the database.
(2) Update the database first, then update the cache.
(3) Delete the cache first, then update the database.
(4) Update the database first, then delete the cache.
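To make the four orderings concrete, here is a minimal sketch of the four write paths. It assumes the redis-py client with a Redis instance on localhost and uses a plain dictionary as a stand-in for the real database; the function and key names are illustrative only.

```python
import redis

r = redis.Redis(host="localhost", port=6379)  # assumes a local Redis instance
db = {}  # in-memory stand-in for the real database (illustrative only)

def update_cache_then_db(key, value):
    # Strategy (1): update the cache first, then the database
    r.set(key, value)
    db[key] = value

def update_db_then_cache(key, value):
    # Strategy (2): update the database first, then the cache
    db[key] = value
    r.set(key, value)

def delete_cache_then_update_db(key, value):
    # Strategy (3): delete the cache first, then update the database
    r.delete(key)
    db[key] = value

def update_db_then_delete_cache(key, value):
    # Strategy (4): update the database first, then delete the cache
    db[key] = value
    r.delete(key)
```

The following sections examine when each of these orderings can go wrong.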
Update the cache or delete the cache?
Update the cache
Advantage: the data in the cache stays valid, so compared with deleting the cache it gives a higher hit rate.
Disadvantage: updating the cache on every write is more expensive, so it is a poor fit for write-heavy, read-light workloads.
Delete the cache
Advantage: the logic is simple; just remove the entry from the cache.
Disadvantage: deleting the cache means the next request for that key will miss.
Considering the cost and code complexity of keeping the cache updated, deleting the cache is usually the more economical choice.
Cache first or database first?
Let's draw out every scenario each of the four strategies can run into.
Update the cache first, then the database - double-write scenario

① Thread B updates the cache
② Thread A updates the cache
③ Thread A updates the database
④ Thread B updates the database
In this double-write scenario, if another thread's update slips in between one thread's two operations, inconsistency is unavoidable. Suppose thread A and thread B each want to add 1 to a field: both compute their +1 result independently, thread A's result is what ends up in the cache (step ②), and thread B's result is what ends up in the database (step ④), so the cache and the database no longer agree.
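This interleaving is easy to reproduce deterministically. The sketch below is purely illustrative: plain dictionaries stand in for Redis and the database, and sleeps force the ordering ① to ④ from the steps above; the same trick can reproduce the other interleavings discussed later.

```python
import threading
import time

cache = {}  # stand-in for Redis (illustrative only)
db = {}     # stand-in for the database (illustrative only)

def write(value, pause_before_cache, pause_before_db):
    time.sleep(pause_before_cache)
    cache["key"] = value      # update the cache first
    time.sleep(pause_before_db)
    db["key"] = value         # then update the database

# Sleeps force the interleaving:
# ① B updates cache ② A updates cache ③ A updates DB ④ B updates DB
b = threading.Thread(target=write, args=("value-from-B", 0.0, 0.3))
a = threading.Thread(target=write, args=("value-from-A", 0.1, 0.1))
b.start(); a.start()
a.join(); b.join()

print("cache:", cache["key"])  # value-from-A (written at step ②)
print("db:   ", db["key"])     # value-from-B (written at step ④)
```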
Update the cache first, then the database - read-write scenario

① Thread B queries the cache, but misses
② Thread B queries the database
③ Thread A updates the cache
④ Thread A updates the database
⑤ Thread B gets its result and writes it into the cache
This also leads to inconsistency, and it is easy to see why: thread B receives a query, misses the cache, and reads the database; meanwhile thread A updates the cache and then the database; thread B finally writes its stale database result back into the cache, so the cache holds the value from before thread A's update and no longer matches the database. Because operating on the cache is much faster than operating on the database, though, the probability of this interleaving is low.
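Thread B in this scenario is simply running the standard cache-aside read path. A minimal sketch of that path, assuming the redis-py client, a Redis instance on localhost, and a dictionary standing in for the database:

```python
import redis

r = redis.Redis(host="localhost", port=6379)  # assumes a local Redis instance
db = {"user:1": "alice"}  # in-memory stand-in for the real database

def read(key):
    # 1. Try the cache first
    cached = r.get(key)
    if cached is not None:
        return cached.decode()
    # 2. Cache miss: fall back to the database
    value = db.get(key)
    # 3. Repopulate the cache for later reads. The race described above
    #    happens here: the value read in step 2 may already be stale.
    if value is not None:
        r.set(key, value)
    return value
```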
Delete the cache first, then update the database - double-write scenario

① Thread B deletes the cache
② Thread A deletes the cache
③ Thread A updates the database
④ Thread B updates the database
This is the same interleaving as the double-write scenario above, but deleting the cache first clearly avoids the consistency problem: whatever the order of operations, the cache entry simply ends up deleted, so it cannot disagree with the database.
Delete the cache first, then update the database - read-write scenario

① Thread B queries the cache and misses
② Thread B queries the database
③ Thread A deletes the cache
④ Thread A updates the database
⑤ Thread B writes its result into the cache
This case cannot guarantee consistency. Thread B receives a query, misses the cache, and reads the database; thread A then deletes the cache and updates the database; thread B finally writes its stale result into the cache, so the cache holds the value from before thread A's update and the data is inconsistent.
Update the database first, then the cache - double-write scenario

① Thread B updates the database
② Thread A updates the database
③ Thread A updates the cache
④ Thread B updates the cache
This case cannot guarantee consistency. Thread B updates the database first; thread A then updates the database and the cache; thread B finally updates the cache again, so the cache ends up holding thread B's value while the database holds thread A's, and the two disagree.
Update the database first, then the cache - read-write scenario

① Thread B queries the cache and misses
② Thread B queries the database
③ Thread A updates the database
④ Thread A updates the cache
⑤ Thread B writes its result into the cache
Consistency cannot be guaranteed here either. Thread B receives a query, misses the cache, and reads the database; thread A then updates the database and the cache; thread B finally writes its stale result into the cache, overwriting thread A's value, so the data is inconsistent. Because operating on the cache is much faster than operating on the database, the probability of this interleaving is low.
Update the database first, then delete the cache - double-write scenario

① Thread B updates the database
② Thread A updates the database
③ Thread A deletes the cache
④ Thread B deletes the cache
This case does guarantee consistency. In a double-write scenario that deletes the cache, it does not matter which thread deletes first or last; the entry is gone either way, so the cache cannot disagree with the database.
Update the database first, then delete the cache - read-write scenario

① Thread B queries the cache and misses
② Thread B queries the database
③ Thread A updates the database
④ Thread A deletes the cache
⑤ Thread B writes its result into the cache
Consistency cannot be guaranteed in this case. Thread B receives a query, misses the cache, and reads the database; thread A then updates the database and deletes the cache; thread B finally writes its stale result into the cache, so the cache holds the value from before thread A's update and the data is inconsistent. Because operating on the cache is much faster than operating on the database, the probability of this interleaving is low.
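To make this last strategy concrete, here is "update the database, then delete the cache" combined with the read path sketched earlier, under the same illustrative assumptions (redis-py, local Redis, dictionary as the database); the comment marks where the low-probability race above can still slip in.

```python
import redis

r = redis.Redis(host="localhost", port=6379)  # assumes a local Redis instance
db = {}  # in-memory stand-in for the real database

def write(key, value):
    db[key] = value   # 1. update the database first
    r.delete(key)     # 2. then invalidate the cache entry

def read(key):
    cached = r.get(key)
    if cached is not None:
        return cached.decode()
    value = db.get(key)
    if value is not None:
        # If a concurrent write() ran between our database read and this set,
        # we may put a stale value back into the cache: the low-probability
        # read-write race described in this scenario.
        r.set(key, value)
    return value
```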
Hold on a moment: why can none of the four strategies guarantee consistency? What is going on?
Don't worry; let's review the four cases:
1. Update the cache first, then the database: consistency problems are easy to hit in the double-write scenario and occur with low probability in the read-write scenario, so it is out.
2. Delete the cache first, then update the database: no consistency problem in the double-write scenario, but problems are easy to hit in the read-write scenario, so it is out.
3. Update the database first, then the cache: consistency problems are easy to hit in the double-write scenario and occur with low probability in the read-write scenario, so it is out.
4. Update the database first, then delete the cache: no consistency problem in the double-write scenario and only a low-probability problem in the read-write scenario, so we keep it for now.
Three of the four strategies are eliminated, leaving just this one, but even it cannot fully guarantee consistency, so we need to add a little extra technique.

Delayed double delete

Let's first get familiar with the flow of delayed double delete, then walk through it step by step.
Why delete the cache first?
To avoid the case where, while the database is being updated, a query hits the cache and reads stale data.
Why must the second deletion be delayed?
To prevent thread A's second cache deletion from running before thread B's cache-repopulating write; otherwise the stale value would survive the delete and the inconsistency would remain.
How long should the delay be?
That can only be estimated from the query performance of your own system; there is no universal value.
Also note that delayed double delete only guarantees eventual consistency, not strong consistency.
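A minimal sketch of delayed double delete, under the same illustrative assumptions as the earlier snippets (redis-py, local Redis, dictionary as the database); the delay value is only a placeholder and must be tuned to your own read latency:

```python
import threading
import redis

r = redis.Redis(host="localhost", port=6379)  # assumes a local Redis instance
db = {}  # in-memory stand-in for the real database
DELAY_SECONDS = 0.5  # placeholder; should cover a read's DB query + cache set

def write_with_delayed_double_delete(key, value):
    r.delete(key)    # 1. first delete: readers stop seeing the old cached value
    db[key] = value  # 2. update the database
    # 3. second delete, delayed so it lands after any in-flight read that may
    #    have repopulated the cache with a stale value in the meantime
    threading.Timer(DELAY_SECONDS, r.delete, args=(key,)).start()
```

In practice the delayed second delete is often handed off to a message queue or a background task rather than an in-process timer, but the ordering is the same.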