21_Redis_Analysis of Redis Cache Penetration and Avalanche
2022-07-02 15:13:00 【Listen to the rain】
Why understand cache penetration and avalanche: to ensure high availability of our services.
Using Redis as a cache greatly improves application performance and efficiency, especially for data queries. At the same time, it also brings some problems. The most important one is data consistency; strictly speaking, this problem has no perfect solution. If you have strict requirements for data consistency, you cannot use a cache.
Other typical problems are cache penetration, cache avalanche, and cache breakdown. For these, the industry does have widely adopted solutions.
Cache penetration (the data cannot be found)
The concept of cache penetration is simple: a user queries some data, the Redis in-memory cache does not have it (a cache miss), so the persistence-layer database is queried. The database does not have it either, so the query fails. When many users query like this and none of them hit the cache, all of the requests go to the persistence-layer database, which puts enormous pressure on it. This is cache penetration.
Solutions
Bloom filter
A Bloom filter is a data structure that stores hashes of all possible query parameters. Requests are checked against it at a control layer first; anything that cannot possibly exist is discarded, which avoids putting query pressure on the underlying storage system.
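As an illustration, here is a minimal Bloom filter sketch backed by a Redis bitmap, in Python with redis-py (not from the original article). The key name, filter size, and number of hash functions are assumptions chosen for the example.

```python
# Minimal Bloom filter sketch on a Redis bitmap (illustrative parameters).
import hashlib

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

BITMAP_KEY = "bf:known-keys"   # hypothetical key holding the filter's bit array
NUM_BITS = 2 ** 20             # filter size in bits; size to the expected key count
NUM_HASHES = 5                 # number of hash functions

def _bit_positions(item: str):
    # Derive NUM_HASHES bit positions from salted MD5 digests of the item.
    for i in range(NUM_HASHES):
        digest = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
        yield int(digest, 16) % NUM_BITS

def bloom_add(item: str) -> None:
    # Call this whenever a legitimate key is written to the database.
    for pos in _bit_positions(item):
        r.setbit(BITMAP_KEY, pos, 1)

def bloom_might_contain(item: str) -> bool:
    # False means "definitely not present": the request can be rejected
    # before it ever touches the cache or the database.
    return all(r.getbit(BITMAP_KEY, pos) for pos in _bit_positions(item))
```

A `False` answer lets the request be rejected immediately; a `True` answer only means "possibly present", which is the usual Bloom filter trade-off (a small false-positive rate, no false negatives).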

Caching empty objects
When the storage layer misses, even the empty object that is returned is cached, with an expiration time set on it. Subsequent accesses to this data are then served from the cache, which protects the back-end data source.
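A minimal sketch of this idea in Python with redis-py, assuming a local Redis and a hypothetical `query_db(key)` callable that returns `None` when the database has no such record; the sentinel value and TTLs are illustrative.

```python
# Cache the "not found" answer briefly so repeated misses stop hitting the DB.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

NULL_MARKER = "__NULL__"   # sentinel stored for keys the database does not have
NULL_TTL = 60              # short TTL so stale "not found" answers age out
VALUE_TTL = 3600           # normal values live longer

def get_with_null_cache(key: str, query_db):
    cached = r.get(key)
    if cached is not None:
        return None if cached == NULL_MARKER else cached

    value = query_db(key)                      # cache miss: query the persistence layer once
    if value is None:
        r.setex(key, NULL_TTL, NULL_MARKER)    # cache the empty object with an expiration
        return None

    r.setex(key, VALUE_TTL, value)
    return value
```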

However, this approach has two problems:
- 1. If null values are cached, the cache needs more space to store more keys, because there may be many null keys;
- 2. Even if an expiration time is set on the null value, the cache layer and the storage layer will be inconsistent for a while, which affects business logic that needs consistency.
Cache breakdown (one key, too much traffic)
A key can be found in the cache, but the volume of access to it is huge; the moment the cache entry expires, all of the requests hit the persistence layer.
Example: Weibo servers going down under a trending search (one piece of hot news, i.e. one spot, hit with high concurrency within a short time).
Note the difference from cache penetration. Cache breakdown means one key is extremely hot and constantly carries large concurrent traffic, all concentrated on that single point. At the instant the key expires, the continuing flood of concurrent requests breaks through the cache and goes straight to the database, like punching a hole through a barrier.
At the moment a key expires, there are many concurrent requests for it. Such data is usually hot data; because the cache has expired, all of the requests query the database for the latest value at the same time and write it back to the cache, causing a momentary spike of pressure on the database.
Solutions
Set the data to never expire
At the cache level, do not set an expiration time, so there is no problem of a hot key expiring.
Drawback: Redis has a memory limit. Once the threshold is exceeded, a cleanup (eviction) mechanism is triggered that deletes and consolidates some expired keys; a key that is set to never expire gets in the way of many of these later mechanisms.
Add a mutex lock
Distributed lock: use a distributed lock to guarantee that, for each key, only one thread at a time queries the back-end service. The other threads cannot acquire the lock, so they simply wait. This shifts the pressure of the high concurrency onto the distributed lock, which is therefore put under a severe test.
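Below is a minimal sketch of this mutex idea, using Redis's `SET key value NX EX` as a simple distributed lock (Python, redis-py). The lock-key prefix, TTLs, and the `rebuild_from_db` helper are assumptions for illustration; a production system would more likely rely on a hardened lock implementation.

```python
# One caller wins the lock and rebuilds the cache; everyone else waits and retries.
import time
import uuid

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Release the lock only if we still own it: compare the token, then delete.
RELEASE_LUA = """
if redis.call('get', KEYS[1]) == ARGV[1] then
    return redis.call('del', KEYS[1])
end
return 0
"""

def get_hot_key(key: str, rebuild_from_db, ttl: int = 3600):
    while True:
        value = r.get(key)
        if value is not None:
            return value

        lock_key = f"lock:{key}"
        token = str(uuid.uuid4())
        # NX: only one caller acquires the lock; EX: the lock frees itself if the holder crashes.
        if r.set(lock_key, token, nx=True, ex=10):
            try:
                value = rebuild_from_db(key)   # only this thread queries the database
                r.setex(key, ttl, value)       # write the fresh value back to the cache
                return value
            finally:
                r.eval(RELEASE_LUA, 1, lock_key, token)

        time.sleep(0.05)                       # lost the race: wait, then re-check the cache
```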
Cache avalanche
Cache avalanche: at some point in time, a whole set of cached keys expires together, or the Redis server goes down (e.g. a power failure).
One cause of an avalanche: take Double 11 as an example. At midnight there is a wave of panic buying, and that wave of products is put into the cache, say with a one-hour expiration. At one o'clock in the morning, the cache for this whole batch of products expires. All access to these products then falls on the database, which sees a periodic pressure spike. Every request reaches the storage layer, its call volume skyrockets, and the storage layer may well go down.

In fact, concentrated expiration is not the deadliest case. A deadlier form of cache avalanche is a cache server node going down or losing its network connection. A naturally formed avalanche means the cache entries were all created within some time window, and at that point the database can still withstand the pressure; it is only a periodic spike. But when a cache service node goes down, the pressure on the database server is unpredictable and may well crush the database in an instant.
On Double 11: some services are shut down (to make sure the main services stay available!).
For example, on Double 11 you may not be able to request a refund that day; this is mainly to guarantee that the purchase service runs normally and stays highly available.
Solutions
Redis high availability
The idea here is that, since Redis can go down, just add more Redis instances, so that when one goes down the others keep working. In other words, build a cluster.
Rate limiting and degradation
The idea of this solution is to control, by locking or queuing, the number of threads that read the database and write back to the cache after a cache miss. For example, for a given key, allow only one thread to query the data and write the cache while the other threads wait.
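As one way to illustrate "limit the threads", here is a sketch that caps concurrent database reads with a process-local `threading.BoundedSemaphore`; in a multi-node deployment the limit would have to be enforced by a distributed mechanism instead. The limit of 4 and the `query_db` helper are assumptions for the example.

```python
# Cap how many threads may hit the database after a cache miss; the rest queue up.
import threading

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

DB_READ_LIMIT = threading.BoundedSemaphore(4)   # at most 4 concurrent DB reads

def get_with_limit(key: str, query_db, ttl: int = 300):
    value = r.get(key)
    if value is not None:
        return value

    with DB_READ_LIMIT:          # excess threads block (queue) here
        value = r.get(key)       # re-check: another thread may have refilled the cache
        if value is None:
            value = query_db(key)
            r.setex(key, ttl, value)
    return value
```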
Data preheating
Data preheating means that, before deployment, you first run through the data that is likely to be accessed, so that data likely to see heavy traffic is loaded into the cache in advance. Before the large concurrent access arrives, the cache keys are loaded manually and given different expiration times, so that cache expiration is spread out as evenly as possible.
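A minimal preheating sketch along these lines, assuming a hypothetical `load_hot_items()` that yields the (key, value) pairs expected to be hit hard (for example, the flash-sale products above); the base TTL and the jitter window are illustrative. Adding a random offset to each expiration keeps the whole batch from expiring at the same moment.

```python
# Load hot keys before the traffic arrives, with staggered expiration times.
import random

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

BASE_TTL = 3600   # one hour, as in the flash-sale example above
JITTER = 600      # spread expirations over an extra 10 minutes

def preheat(load_hot_items):
    for key, value in load_hot_items():
        # A randomized TTL prevents every key in the batch from expiring at once,
        # which is what turns a traffic spike into a database spike.
        r.setex(key, BASE_TTL + random.randint(0, JITTER), value)
```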