Redis: cache avalanche, cache breakdown, cache penetration, cache preheating, and cache degradation
2022-06-29 14:14:00 【Full stack programmer webmaster】
One. Cache avalanche
1. What is a cache avalanche?
If a large number of keys expire at the same moment, the requests that would have been served from the cache all hit the database. The database comes under heavy pressure and, under high concurrency, may go down almost instantly. If operations staff restart the database right away, the new incoming traffic will quickly bring it down again. This is a cache avalanche.
2. Problem analysis
The key to a cache avalanche is that a large number of keys become invalid at the same time. Why does this happen? There are two main possibilities: Redis itself goes down, or many keys were given the same expiration time. Now that we understand the cause, what are the solutions?
3. Solutions
(1) Beforehand:
① Stagger expirations: give keys different expiration times so that cache invalidation is spread out as evenly as possible, avoiding an avalanche in which keys with the same expiration time all become invalid at once and a flood of requests reaches the database (see the TTL-jitter sketch after this list).
② Multi-level caching: when the first-level cache misses, fall back to the second-level cache; each cache level uses a different expiration time.
③ Let hot data never expire.
"Never expires" actually has two meanings:
- Physical non-expiration: do not set an expiration time on hot keys.
- Logical expiration: store the expiration time inside the value of the key; when a read finds the value logically expired, rebuild the cache with a background asynchronous thread.
④ Guarantee high availability of the Redis cache to prevent an avalanche caused by a Redis outage. Master-slave replication with Sentinel, or Redis Cluster, can be used to avoid a total Redis collapse.
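A minimal sketch of the staggered-expiration idea from item ①, assuming the Jedis client; the 30-minute base TTL, the jitter window, and the class name are illustrative assumptions rather than values from the article:

import java.util.concurrent.ThreadLocalRandom;
import redis.clients.jedis.Jedis;

public class JitteredCacheWriter {

    private static final int BASE_TTL_SECONDS = 30 * 60;   // 30-minute base TTL (illustrative)
    private static final int MAX_JITTER_SECONDS = 5 * 60;  // up to 5 extra minutes of random jitter

    private final Jedis jedis = new Jedis("localhost", 6379);

    // Write a value with a randomized TTL so keys cached together do not all expire together.
    public void put(String key, String value) {
        int ttl = BASE_TTL_SECONDS + ThreadLocalRandom.current().nextInt(MAX_JITTER_SECONDS);
        jedis.setex(key, ttl, value);
    }
}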
(2) During the incident:
① Mutex: after a cache miss, use a mutex or a queue to limit the number of threads that read the database and rebuild the cache. For a given key, only one thread is allowed to query the database and write the cache while the other threads wait. This blocks the other threads, so system throughput drops (a distributed-lock sketch is shown in the cache-breakdown section below).
② Use circuit breaking, rate limiting, and degradation. When traffic reaches a certain threshold, return a message such as "the system is busy" directly instead of letting too many requests hit and crush the database. At least some users can still use the service normally, and other users may get a result after refreshing a few times (see the rate-limiting sketch after this list).
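A minimal rate-limiting sketch for item ②, assuming Guava's RateLimiter is available; the 100-requests-per-second threshold and the loadFromDatabase helper are hypothetical, not part of the original article:

import com.google.common.util.concurrent.RateLimiter;

public class ThrottledReader {

    // Allow at most about 100 database reads per second while the cache is unavailable (illustrative value).
    private final RateLimiter dbLimiter = RateLimiter.create(100.0);

    public String readWithLimit(String key) {
        if (dbLimiter.tryAcquire()) {
            return loadFromDatabase(key);          // hypothetical DAO call
        }
        // Fail fast instead of letting the request pile onto the database.
        return "The system is busy, please try again later";
    }

    private String loadFromDatabase(String key) {
        return "value-for-" + key;                 // placeholder for the real database query
    }
}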
(3) Afterwards:
① Enable the Redis persistence mechanism so the cached data can be recovered as quickly as possible; once Redis restarts, it automatically loads the data from disk to rebuild what was in memory.
Two. Cache breakdown
1. What is cache breakdown?
Cache breakdown is somewhat similar to a cache avalanche: an avalanche is a large number of keys expiring, whereas breakdown is a single hot key expiring while large numbers of concurrent requests are hitting it. All of those requests miss the cache at the same moment and fall through to the database, which causes high concurrent access to the database and a surge in database pressure. This phenomenon is called cache breakdown.
2. Problem analysis
The crux is that the hot key expires, so the concurrent load lands on the database all at once. We can attack the problem from two directions: first, consider not setting an expiration time on hot keys at all; second, consider reducing the number of requests that reach the database.
3. Solutions
(1) After a cache miss, use a mutex or a queue to limit the number of threads that read the database and rebuild the cache. For a given key, only one thread is allowed to query the database and write the cache while the other threads wait. This blocks the other threads, so system throughput drops (see the lock sketch after this list).
(2) Let hot data never expire.
"Never expires" actually has two meanings:
- Physical non-expiration: do not set an expiration time on hot keys.
- Logical expiration: store the expiration time inside the value of the key; when a read finds the value logically expired, rebuild the cache with a background asynchronous thread.
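A minimal sketch of the mutex approach from item (1), using a Redis distributed lock (SET with NX and EX) and assuming the Jedis client; the lock key prefix, the TTL values, and the loadFromDatabase helper are illustrative assumptions:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class MutexCacheLoader {

    private final Jedis jedis = new Jedis("localhost", 6379);

    public String get(String key) throws InterruptedException {
        String value = jedis.get(key);
        if (value != null) {
            return value;                                   // cache hit
        }
        String lockKey = "lock:" + key;
        // Only the caller that acquires the lock rebuilds the cache; NX + EX makes the lock auto-expire.
        String locked = jedis.set(lockKey, "1", SetParams.setParams().nx().ex(10));
        if ("OK".equals(locked)) {
            try {
                value = loadFromDatabase(key);              // hypothetical DAO call
                jedis.setex(key, 300, value);               // rebuild the cache (5-minute TTL, illustrative)
                return value;
            } finally {
                jedis.del(lockKey);
            }
        }
        // The other callers back off briefly and retry instead of all hitting the database.
        Thread.sleep(50);
        return get(key);
    }

    private String loadFromDatabase(String key) {
        return "value-for-" + key;                          // placeholder for the real database query
    }
}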
Three. Cache penetration
1. What is cache penetration?
Cache penetration means the data a user requests exists neither in the cache (so it never hits) nor in the database, so every such request ends up querying the database. If a malicious attacker keeps requesting data that does not exist in the system, a large number of requests fall on the database in a short time, the database pressure becomes excessive, and the database may even crash.
2. Problem analysis
The key to cache penetration is that the key cannot be found in Redis. The fundamental difference from cache breakdown is that the incoming key does not exist in Redis at all. If a hacker sends in large numbers of non-existent keys, the flood of requests hitting the database is a fatal problem. So in daily development, validate parameters carefully: for illegal parameters and keys that cannot possibly exist, return an error message directly.
3. Solutions
(1) Store the non-existent keys in Redis:
When a key is found neither in Redis nor in the database, save the key in Redis anyway with value = "null" and a very short expiration time. Later requests for the same key then return null directly without querying the database again. The drawback of this approach is that if every incoming non-existent key is different (for example, randomly generated), caching them in Redis accomplishes nothing.
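A minimal sketch of caching the empty result, assuming the Jedis client; the placeholder string, the TTL values, and the loadFromDatabase helper are illustrative assumptions:

import redis.clients.jedis.Jedis;

public class NullValueCache {

    private static final String NULL_PLACEHOLDER = "null"; // marker meaning "the database has no such row"

    private final Jedis jedis = new Jedis("localhost", 6379);

    public String get(String key) {
        String cached = jedis.get(key);
        if (cached != null) {
            // Either a real value or the cached "does not exist" marker; both avoid a database query.
            return NULL_PLACEHOLDER.equals(cached) ? null : cached;
        }
        String value = loadFromDatabase(key);               // hypothetical DAO call
        if (value == null) {
            jedis.setex(key, 60, NULL_PLACEHOLDER);         // cache the miss with a short TTL (illustrative)
            return null;
        }
        jedis.setex(key, 30 * 60, value);
        return value;
    }

    private String loadFromDatabase(String key) {
        return null;                                        // placeholder: pretend the row does not exist
    }
}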
(2) Use a Bloom filter:
If the Bloom filter says a key is not present, the key definitely does not exist; if it says the key is present, the key very likely exists (there is a certain false-positive rate). So we can place a Bloom filter in front of the cache and register all valid keys in it. Before querying Redis, first check whether the key exists in the Bloom filter; if it does not, return immediately and do not let the request reach the database, which shields the underlying storage from the query pressure.
How to choose: for malicious attacks where the attacking keys are random, the first scheme would cache a large amount of data for keys that do not exist, so it is not suitable; the Bloom filter scheme can filter those keys out up front. So for data with many distinct keys and a low request repetition rate, prefer the second scheme and filter the keys out directly. For empty results whose keys are limited in number and highly repetitive, prefer the first scheme and cache them.
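A minimal sketch of the Bloom-filter check described in scheme (2), assuming Guava's BloomFilter; the expected number of insertions and the false-positive rate are illustrative assumptions:

import java.nio.charset.StandardCharsets;

import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;

public class BloomFilterGate {

    // Sized for roughly 1,000,000 keys with about a 1% false-positive rate (illustrative numbers).
    private final BloomFilter<String> filter =
            BloomFilter.create(Funnels.stringFunnel(StandardCharsets.UTF_8), 1_000_000, 0.01);

    // Register every valid key, e.g. at startup or whenever a new row is inserted.
    public void register(String key) {
        filter.put(key);
    }

    // Returns false only for keys that definitely do not exist; such requests never reach Redis or the database.
    public boolean mightExist(String key) {
        return filter.mightContain(key);
    }
}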
Four. Cache preheating
1. What is cache preheating?
Cache preheating means loading the relevant data into the cache system in advance, when the system goes live. This avoids the pattern where a user request first queries the database and only then populates the cache; instead, users query the preheated cache data directly.
Without preheating, Redis starts with no data at all, so in the early period after launch, high-concurrency traffic goes straight to the database and puts the database under pressure.
2. Cache preheating solutions
(1) When the data volume is small, load the cache when the project starts (a minimal warm-up sketch follows this list);
(2) When the data volume is large, set up a scheduled task or script to refresh the cache periodically;
(3) When the data volume is very large, prioritize loading the hot data into the cache in advance.
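A minimal warm-up sketch, assuming the Jedis client; the loadHotItems helper, the TTL, and the idea of calling warmUp() from application startup code are illustrative assumptions:

import java.util.Map;

import redis.clients.jedis.Jedis;

public class CacheWarmer {

    private final Jedis jedis = new Jedis("localhost", 6379);

    // Push the hot data into Redis before user traffic arrives, e.g. during application startup.
    public void warmUp() {
        for (Map.Entry<String, String> entry : loadHotItems().entrySet()) {
            jedis.setex(entry.getKey(), 30 * 60, entry.getValue());   // 30-minute TTL, illustrative
        }
    }

    private Map<String, String> loadHotItems() {
        // Placeholder: a real system would query the database for the hottest rows here.
        return Map.of("product:1", "{\"name\":\"demo\"}");
    }
}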
Five. Cache degradation
Cache degradation means that when the cache fails or the cache server goes down, requests do not fall back to the database; instead the service returns default data directly or serves from its own in-memory data. Degradation is generally a lossy operation, so try to minimize its impact on the business.
In practice, some hot data is usually also cached in the memory of the service itself, so that once the cache becomes unavailable, the service can serve from its in-memory copy directly and avoid putting huge pressure on the database (a minimal fallback sketch follows).
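A minimal degradation sketch that falls back to an in-process copy when Redis is unreachable, assuming the Jedis client; the local ConcurrentHashMap and the default value are illustrative assumptions:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.exceptions.JedisConnectionException;

public class DegradableCache {

    private final Jedis jedis = new Jedis("localhost", 6379);

    // In-process copy of hot data, refreshed whenever Redis answers successfully.
    private final Map<String, String> localHotData = new ConcurrentHashMap<>();

    public String get(String key) {
        try {
            String value = jedis.get(key);
            if (value != null) {
                localHotData.put(key, value);   // keep the local fallback copy fresh
            }
            return value;
        } catch (JedisConnectionException e) {
            // Redis is down: degrade to the local copy (or a default) instead of hammering the database.
            return localHotData.getOrDefault(key, "default-value");
        }
    }
}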
Publisher: Full stack programmer. Please credit the source when reprinting: https://javaforall.cn/100027.html Original article: https://javaforall.cn