Redis' cache avalanche, cache breakdown, cache penetration, cache preheating, and cache degradation
2022-06-29 14:14:00 [Full Stack Programmer Webmaster]
I. Cache avalanche:
1. What is a cache avalanche?
If a large number of keys expire at the same moment, the requests those keys were absorbing all hit the database at once, putting it under heavy pressure; under high concurrency the database can go down almost instantly. If operations staff restart the database right away, the pending traffic kills it again. This is a cache avalanche.
2. Problem analysis:
The root cause of a cache avalanche is a large number of keys expiring at the same time. There are two main ways this happens: either Redis itself goes down, or many keys were written with the same expiration time. Now that the cause is clear, what are the solutions?
3. Solutions:
(1) Beforehand:
① Spread out expiration times: give keys different TTLs so expirations are distributed as evenly as possible, avoiding the flood of database queries caused by keys expiring together.
② Multi-level caching: when the L1 cache misses, fall back to an L2 cache, with each cache level using a different expiration time.
③ Never expire hot data.
"Never expire" can mean two things:
- Physical non-expiration: set no TTL on hot keys at all.
- Logical expiration: store an expiration timestamp inside the key's value; when a read finds the value logically expired, rebuild the cache from a background asynchronous thread.
④ Keep the Redis cache highly available, so that a Redis outage cannot trigger an avalanche: use master-slave replication with Sentinel, or Redis Cluster, to avoid a total collapse.
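A minimal sketch of point ① above: add a random jitter to the base TTL so keys written in the same batch do not all expire in the same instant. The function name and jitter window are illustrative, not from the original article.

```python
import random

def ttl_with_jitter(base_seconds: int, max_jitter_seconds: int) -> int:
    """Return a TTL spread uniformly over [base, base + jitter].

    Spreading TTLs prevents a batch of keys cached at the same moment
    from all expiring at the same moment (the avalanche trigger).
    """
    return base_seconds + random.randint(0, max_jitter_seconds)

# With redis-py this would be used as, for example:
#   r.set(key, value, ex=ttl_with_jitter(3600, 300))
```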
(2) During:
① Mutex: after a cache miss, use a mutex or a queue to limit how many threads read the database and rewrite the cache; for example, allow only one thread per key to query the database and populate the cache while the other threads wait. This blocks the waiting threads, so system throughput drops.
② Circuit breaking, rate limiting, and degradation: once traffic passes a threshold, return a message such as "system busy" directly, preventing a flood of requests from crushing the database. Most users still get normal service, and the rest can usually get a result after a few retries.
(3) Afterwards:
① Enable Redis persistence so cached data can be recovered as quickly as possible: after a restart, Redis automatically loads the data from disk and restores the in-memory dataset.
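A minimal redis.conf sketch of the persistence settings point ① refers to; the specific values are illustrative and should be tuned for your workload:

```conf
# Enable AOF so writes survive a restart
appendonly yes
# fsync the AOF once per second: a small, bounded loss window
appendfsync everysec
# RDB snapshot if at least 1 key changed within 900 seconds
save 900 1
```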
II. Cache breakdown:
1. What is cache breakdown?
Cache breakdown resembles a cache avalanche, but where an avalanche is a large number of keys expiring, breakdown is a single hot key expiring while a large number of concurrent requests target it. All those requests miss the cache and fall through to the database concurrently, spiking database load. This phenomenon is called cache breakdown.
2. Problem analysis:
The crux is a hot key expiring and concentrating concurrent traffic on the database. So attack the problem from two directions: first, consider not setting an expiration time on hot keys at all; second, consider reducing the number of requests that reach the database.
3. Solutions:
(1) After a cache miss, use a mutex or a queue to limit how many threads read the database and rewrite the cache; for example, allow only one thread per key to query the database and populate the cache while the other threads wait. This blocks the waiting threads, so system throughput drops.
(2) Never expire hot data.
"Never expire" can mean two things:
- Physical non-expiration: set no TTL on hot keys at all.
- Logical expiration: store an expiration timestamp inside the key's value; when a read finds the value logically expired, rebuild the cache from a background asynchronous thread.
III. Cache penetration:
1. What is cache penetration?
Cache penetration means the requested data exists neither in the cache (so every lookup misses) nor in the database, so every request for it falls through to the database. If a malicious attacker keeps requesting data that does not exist in the system, a huge number of requests land on the database in a short time; the pressure can overwhelm it or even crash it.
2. Problem analysis:
The crux of cache penetration is that the key cannot be found in Redis; the fundamental difference from cache breakdown is that here the requested key does not exist in Redis at all. If a hacker sends in a large number of nonexistent keys, the resulting flood of database queries is fatal. So in day-to-day development, validate parameters carefully: for illegal parameters and keys that cannot possibly exist, return an error prompt directly.
3. Solutions:
(1) Store the nonexistent keys in Redis:
When a key is found neither in Redis nor in the database, store that key in Redis anyway with value="null" and a very short expiration time. Later requests for the same key then return null directly without querying the database. The weakness of this approach: if the incoming nonexistent keys are random and different every time, caching them in Redis accomplishes nothing.
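A minimal sketch of scheme (1), with a dict standing in for Redis; the class name and counter are illustrative. With redis-py the sentinel would get a short TTL, e.g. `r.set(key, "null", ex=60). Note the string sentinel follows the article's value="null" convention, so real values equal to "null" would collide; a dedicated sentinel object avoids that.

```python
NULL_SENTINEL = "null"

class NullCachingStore:
    """Sketch of caching misses: if neither the cache nor the database
    has the key, remember the miss under a sentinel value so repeated
    lookups for the same nonexistent key stop reaching the database."""

    def __init__(self, db):
        self._db = db          # stands in for the database
        self._cache = {}       # stands in for Redis
        self.db_queries = 0    # how many lookups actually hit the database

    def get(self, key):
        if key in self._cache:
            cached = self._cache[key]
            return None if cached == NULL_SENTINEL else cached
        self.db_queries += 1
        value = self._db.get(key)
        # Cache the miss too, so the next lookup short-circuits.
        self._cache[key] = NULL_SENTINEL if value is None else value
        return value
```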
(2) Use a Bloom filter:
If a Bloom filter says a key is absent, it is definitely absent; if it says the key is present, it is very likely present (there is a certain false-positive rate). So place a Bloom filter in front of the cache and load all valid keys into it. Before querying Redis, first ask the Bloom filter whether the key exists; if it does not, return immediately without touching the database, shielding the underlying storage from the query load.
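A minimal Bloom filter sketch under stated assumptions: the bit-array size, hash count, and SHA-256-based hashing are illustrative choices, not from the article (production systems would typically use a library or Redis's RedisBloom module).

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions in a bit array.
    No false negatives ("definitely absent" is reliable); false
    positives occur at a rate governed by array size and hash count."""

    def __init__(self, size_bits=8192, num_hashes=4):
        self._size = size_bits
        self._k = num_hashes
        self._bits = bytearray(size_bits // 8)

    def _positions(self, key):
        # Derive k independent positions by salting one strong hash.
        for i in range(self._k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self._size

    def add(self, key):
        for pos in self._positions(key):
            self._bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        return all(self._bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))
```

At startup, every valid key is `add`ed; each incoming request first checks `might_contain` and is rejected on a definite miss before Redis or the database is touched.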
How to choose: against malicious attacks where the attacking keys are random, the first scheme would cache a large volume of nonexistent keys, so it is a poor fit; the Bloom filter scheme can filter those keys out up front. Therefore, for data with a huge key space and a low request-repetition rate, prefer the second scheme and filter directly; for a limited set of nonexistent keys with a high repetition rate, prefer the first scheme and cache them.
IV. Cache preheating:
1. What is cache preheating?
Cache preheating means loading the relevant data into the cache system in advance, when the system goes live. This avoids the pattern where a user request must first query the database and then write the cache; instead, users query the pre-warmed cache data directly.
Without preheating, Redis starts with no data, and in the early phase after launch, high-concurrency traffic goes straight to the database, putting pressure on it.
2. Cache preheating approaches:
(1) Small data volume: load the cache when the application starts.
(2) Larger data volume: set up a scheduled task or script to refresh the cache periodically.
(3) Very large data volume: prioritize loading the hot data into the cache in advance.
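A minimal sketch of approach (1): a warm-up routine run at startup, before the service accepts traffic. The function name and the two loader callbacks are placeholders for your own hot-key query and database fetch, not part of the original article.

```python
def preheat_cache(cache, fetch_hot_keys, fetch_value):
    """Sketch of startup preheating: before taking traffic, pull the
    hottest keys from the source of truth and write their values into
    the cache, so the first wave of requests hits warm entries instead
    of the database. Returns the number of entries loaded."""
    for key in fetch_hot_keys():          # e.g. top-N keys by recent traffic
        cache[key] = fetch_value(key)     # e.g. a database lookup
    return len(cache)
```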
V. Cache degradation:
Cache degradation means that when the cache fails or the cache server goes down, the service does not fall through to the database; instead it returns default data directly or serves data from the service's own memory. Degradation is generally a lossy operation, so try to minimize its impact on the business.
In practice, some hot data is usually also cached in the service's own process memory; if the cache misbehaves, the service can use its in-memory copy directly, sparing the database enormous pressure.
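The fallback pattern above can be sketched as follows; the class and its parameters are illustrative. The remote getter would be something like redis-py's `r.get`, and the local dict is the in-process hot-data snapshot the article describes.

```python
class DegradableCache:
    """Sketch of cache degradation: reads go to the remote cache first;
    if it raises (server down), fall back to a small in-process copy of
    hot data, or a default value, instead of hammering the database."""

    def __init__(self, remote_get, local_hot, default=None):
        self._remote_get = remote_get   # e.g. redis-py r.get
        self._local_hot = local_hot     # in-process hot-data snapshot
        self._default = default         # lossy last resort

    def get(self, key):
        try:
            value = self._remote_get(key)
            if value is not None:
                return value
        except Exception:
            pass  # remote cache unavailable: degrade below
        return self._local_hot.get(key, self._default)
```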
Publisher: Full Stack Programmer Webmaster. Please credit the source when reprinting: https://javaforall.cn/100027.html Original link: https://javaforall.cn