[Redis] Cache warm-up, cache avalanche and cache breakdown
2022-07-03 13:30:00 【Programming cheese】
Cache warm-up
The phenomenon:
The server crashes shortly after it starts.
Troubleshooting:
- The volume of requests is high
- Data throughput between master and slave is large, and data synchronization runs at a high frequency
Solution
Preparation:
- Routinely record data access statistics, and identify hot data with high access frequency
- If the volume of hot data is large, use the LRU eviction strategy to build a data retention queue (maintained manually, or with Storm + Kafka, etc.)
Before startup:
- Classify the data in the statistics by level; Redis loads the higher-level hot data first
- Have multiple distributed servers read data concurrently to speed up the loading process
Implementation:
- Use a script to trigger the warm-up process at fixed times
- If conditions permit, use a CDN (content delivery network)
Summary
Cache warm-up means loading the relevant data into the cache system before the system starts, so that user requests do not first have to query the database and then write the data back to Redis; users query the pre-warmed data directly.
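As a minimal sketch of the warm-up step described above, assuming the daily statistics are available as `(key, value, level)` tuples (a plain dict stands in for the Redis client here; the names are illustrative, not part of the original article):

```python
# Sketch of a warm-up step; a dict stands in for the Redis client,
# and hot_data stands in for the collected access statistics.
def warm_up(cache, hot_data):
    """Load hot data into the cache before traffic arrives, highest level first."""
    for key, value, _level in sorted(hot_data, key=lambda t: t[2], reverse=True):
        cache.setdefault(key, value)  # do not overwrite data that is already loaded

cache = {}
stats = [("user:1", "alice", 3), ("page:home", "<html>", 9), ("cfg:site", "{}", 5)]
warm_up(cache, stats)
print(list(cache))  # keys were loaded in descending level order
```

With a real client, `cache.setdefault` would become something like `SET key value NX`, and the loop could be split across several servers as the article suggests.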
Cache avalanche
The phenomenon:
During normal operation, database connections surge, requests begin to time out, and the servers crash one after another.
Troubleshooting:
- Within a short window, many cached keys expire at the same time (the core problem)
- Requests for the expired data during this period miss Redis and fall through to the database
- The database receives a large number of requests at once and cannot process them in time
- Redis builds up a huge backlog of requests and starts to time out
- Database traffic surges and the database crashes
- After the database restarts, the Redis cache still has no data available
- Redis server resources are heavily occupied and the Redis server crashes
- The Redis cluster collapses
- The application server cannot get data in time to respond; client requests keep piling up and the application server crashes
- Restarting the application servers, Redis, and the databases all together does not help much.
Solutions
In design:
- Make more of the page content static
- Build a multi-level cache architecture, e.g. Nginx cache + Redis cache + Ehcache cache
- Detect and optimize seriously time-consuming MySQL operations (inspect the database thoroughly: e.g. queries that time out, long-running transactions, etc.)
- Build a disaster warning mechanism: monitor Redis server performance metrics such as CPU occupancy and usage, memory capacity, average query response time, and thread count
- Rate limiting and service degradation: sacrifice some user experience for a short period by restricting some requests, reducing the pressure on the application server, then gradually reopen access once the business is running steadily again.
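The rate-limiting idea above can be sketched with a minimal token bucket (an illustrative stand-alone implementation, not tied to any particular gateway or library):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: reject excess requests instead of
    letting them pile up on the application server."""
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens proportionally to the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=5)
results = [bucket.allow() for _ in range(8)]
print(results.count(True))  # only the initial burst of ~5 requests is admitted
```

Requests that are rejected would get a degraded response (a cached page, a default value, or an error) instead of hitting Redis or the database.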
For the problem itself:
- Switch between the LRU and LFU eviction policies
- Adjust the data expiration policy:
  - Stagger expirations by business category (different categories get different expiration periods)
  - Set expirations as a fixed time plus a random offset, diluting the number of keys that expire together
- Use permanent keys for super-hot data
- Regular maintenance (automatic + manual): run traffic analysis on data that is about to expire and, combined with access statistics, extend the expiration of hot data
- Locking (use with caution!)
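The staggered-expiration idea (a fixed base TTL per business category plus a random offset) can be sketched as follows; the categories and TTL values are illustrative:

```python
import random

# Base TTL per business category, in seconds (illustrative values).
BASE_TTL = {"product": 3600, "user": 1800, "config": 86400}

def ttl_with_jitter(category, jitter=300):
    """Fixed base TTL plus a random offset, so keys of the same category
    do not all expire in the same instant."""
    return BASE_TTL[category] + random.randint(0, jitter)

# With a real client this would be: r.set(key, value, ex=ttl_with_jitter("product"))
ttls = {ttl_with_jitter("product") for _ in range(100)}
print(min(ttls) >= 3600 and max(ttls) <= 3900)  # True
```

A 5-minute jitter on a 1-hour TTL spreads a burst of writes made in the same second across a 300-second expiration window.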
Summary
A cache avalanche means that too much data expires at once, putting sudden pressure on the database server. Effectively avoiding concentrated expiration times prevents most avalanches (roughly 40% of cases); combine this with the other strategies, monitor the server's runtime metrics, and adjust quickly based on what they show.
Cache breakdown
The phenomenon
The system runs smoothly, then the number of database connections surges in an instant. At this point, not many Redis keys have expired, Redis memory usage is smooth with no fluctuation, and the Redis server's CPU is normal, yet the database crashes.
Troubleshooting:
- A single Redis key expires, and that key receives a huge volume of accesses
- Multiple requests for this data are pushed from the application servers straight at Redis, and none of them hit
- A large number of accesses for the same data are launched against the database in a short time
The difference from a cache avalanche is that a breakdown concerns the cache for a single key, whereas an avalanche involves many keys.
When the cache for a key expires at some point in time and a large number of concurrent requests for exactly that key arrive at that moment, these requests all see the expired cache, load the data from the backend DB, and reset the cache; the concurrent load can instantly overwhelm the backend DB.
Solution
When the cache misses (the value is found to be empty), do not load from the DB immediately. First use a caching-tool operation whose return value indicates whether the set succeeded (such as Redis's SETNX) to set a mutex key. If that operation returns success, then load from the DB and reset the cache; otherwise, retry the whole get-from-cache flow.
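The mutex flow above can be sketched as follows. Two dicts stand in for Redis (data and mutex keys), and `load_db` stands in for the expensive database query; with a real client, `setnx` would be `SET mutex:key 1 NX EX <timeout>` so the lock expires even if its holder dies:

```python
import time

cache = {}   # stands in for the Redis data
locks = {}   # stands in for mutex keys set via SETNX

def setnx(key):
    """Simulates Redis SETNX: returns True only for the first caller."""
    if key in locks:
        return False
    locks[key] = True
    return True

def load_db(key):
    return f"value-of-{key}"  # stands in for the expensive DB query

def get_with_mutex(key, retries=3):
    for _ in range(retries):
        value = cache.get(key)
        if value is not None:
            return value
        if setnx("mutex:" + key):              # only one request rebuilds the cache
            try:
                value = load_db(key)
                cache[key] = value             # reset the cache (add a TTL with a real client)
                return value
            finally:
                locks.pop("mutex:" + key, None)  # release the mutex
        time.sleep(0.01)                       # others back off, then retry the get
    return load_db(key)                        # fallback after exhausting retries

print(get_with_mutex("article:42"))  # prints "value-of-article:42"
```

Only the request that wins the SETNX race touches the database; the rest spin briefly and then read the freshly rebuilt cache entry.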
Cache penetration
The phenomenon
Cache penetration occurs when a user queries data that does not exist in the database, so it naturally cannot be in the cache either. Every such query misses the cache and goes to the database, querying data that is guaranteed not to exist. For example, with an article list, querying a nonexistent id hits the DB every time; if someone does this maliciously, the traffic lands directly on the DB.
The solutions are:
- If the database query returns nothing, store a default value in the cache directly, so the second lookup finds a value in the cache instead of going to the database again. This is the simplest and crudest method.
- Filter based on rules in the cache key, for example when the cache key is a MAC address. This requires the keys to follow a fixed rule; it relieves some of the pressure but is not a cure.
- Use a Bloom filter: hash all possibly existing data into a sufficiently large bit set, so that nonexistent data is intercepted and the query pressure on the underlying storage is avoided. Put plainly, use efficient data structures and algorithms to quickly determine whether a key exists in the database.
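The Bloom-filter idea can be sketched in a few lines. This is a toy implementation (k hashes derived from SHA-256 over a small bit array); real deployments use a library or a Redis module such as RedisBloom:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash positions over an m-bit integer bitmap.
    A 'no' answer is definitive; a 'yes' answer may be a false positive."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, key):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        return all(self.bits >> p & 1 for p in self._positions(key))

bf = BloomFilter()
for existing_id in ["article:1", "article:2", "article:3"]:
    bf.add(existing_id)
print(bf.might_contain("article:2"), bf.might_contain("article:999999"))
```

The filter is loaded with all valid ids at startup (or kept in sync on writes); a request whose id is not in the filter is rejected before it reaches the cache or the database.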
The specific methods
Because the requested parameter is invalid (each request asks for a parameter that does not exist), we can use a Bloom filter (BloomFilter) or a compressed filter to intercept such requests early: an illegal request is never allowed to reach the database layer.
When nothing is found in the database, also set an empty object in the cache so that the next request can be served from the cache without touching the database. In this case we usually give the empty object a short expiration time.
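The empty-object caching described above can be sketched like this, assuming an in-process dict as the cache and a toy `DB` dict as the database (the sentinel name and TTL values are illustrative):

```python
import time

_MISSING = "__NULL__"        # sentinel cached for nonexistent data (illustrative)

cache = {}                   # key -> (value, expires_at); stands in for Redis
DB = {"article:1": "hello"}  # toy database

def get_article(key, null_ttl=60, ttl=3600):
    entry = cache.get(key)
    if entry and entry[1] > time.time():
        value = entry[0]
        return None if value is _MISSING else value
    value = DB.get(key)      # cache miss: query the database once
    if value is None:
        # Cache the "does not exist" answer with a short expiration time.
        cache[key] = (_MISSING, time.time() + null_ttl)
        return None
    cache[key] = (value, time.time() + ttl)
    return value

print(get_article("article:999"))  # None, and the miss is now cached
print(get_article("article:999"))  # None again, without touching the DB
```

The short TTL on the empty object limits how long a genuinely new record would appear missing after it is created.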