[Redis] Cache warm-up, cache avalanche and cache breakdown
2022-07-03 13:30:00 【Programming cheese】
Cache warm-up
The phenomenon:
The server crashes shortly after startup.
Troubleshooting:
- The number of requests is high
- Data throughput between master and slave is large, and synchronization operations run at a high frequency
Solution
Routine preparation:
- Record data access statistics routinely and identify the hot data with the highest access frequency
- If the volume of hot data is large, use an LRU-style eviction policy to build a hot-data retention queue (maintained manually or with tools such as Storm + Kafka)
Preparation before startup:
- Classify the statistical results by priority level; have Redis load the higher-priority hot data first
- Use multiple distributed servers to read data in parallel to speed up the loading process
Implementation:
- Use a script to trigger the data warm-up process (a minimal sketch appears after the summary below)
- If conditions permit, use a CDN (content delivery network)
summary
Cache warm-up means loading the relevant data into the cache system before the system starts serving traffic, so that the first user requests do not have to query the database and then write the result back to Redis. Users query data that has already been pre-loaded.
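For illustration, here is a minimal warm-up sketch in Python with redis-py. The load_hot_records() helper is hypothetical: it stands for whatever produces the pre-computed hot-data list (for example, the daily access-log statistics), ordered so that the highest-priority data is written first. Key names and TTLs are assumptions, not a prescribed implementation.

```python
# Minimal warm-up sketch: preload hot data into Redis before opening the
# application to traffic. load_hot_records() is a hypothetical helper.
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)


def load_hot_records():
    """Placeholder: yield (key, value, ttl_seconds) tuples ordered by
    access frequency, e.g. produced by the daily access statistics."""
    raise NotImplementedError


def warm_up():
    # A pipeline batches the writes so the preload finishes quickly.
    pipe = r.pipeline(transaction=False)
    for key, value, ttl in load_hot_records():   # highest-priority data first
        pipe.set(key, json.dumps(value), ex=ttl)
    pipe.execute()


if __name__ == "__main__":
    warm_up()
```

A script like this can be triggered by the deployment pipeline or a scheduler just before the application servers are brought online.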
Cache avalanche
The phenomenon:
Within a short period, database traffic surges and the database, Redis and application servers crash one after another.
Troubleshooting the failure chain:
- Within a short time window, a large number of keys in the cache expire at the same time (the root cause)
- Requests for the expired data arrive during this window; Redis misses and the data is fetched from the database
- The database receives a large number of requests at the same time and cannot process them in time
- Requests pile up in Redis and start to time out
- Database traffic surges and the database crashes
- After the database is restarted, the cache still has no usable data
- Redis server resources are heavily occupied and the Redis server crashes
- The Redis cluster collapses
- The application server cannot get data in time to respond; client requests keep accumulating and the application server crashes
- Restarting the application servers, Redis and the databases does not help much.
Solution
At the design level:
1. Serve more page content statically
2. Build a multi-level cache architecture, e.g. Nginx cache + Redis cache + Ehcache (a minimal sketch follows this list)
3. Detect and optimize seriously time-consuming MySQL operations (audit the database thoroughly, e.g. slow queries and long-running transactions)
4. Set up a disaster warning mechanism and monitor Redis server performance metrics:
   - CPU occupancy and CPU usage
   - Memory capacity
   - Average query response time
   - Number of threads
5. Rate limiting and degradation
   Temporarily sacrifice some user experience by restricting part of the requests to reduce pressure on the application servers; gradually reopen access once the business is running stably again.
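As referenced in item 2, here is a minimal two-level lookup sketch: a plain in-process TTL dict stands in for the Ehcache layer, Redis is the shared cache, and query_db() is a hypothetical placeholder for the database query. An Nginx cache in front of all this would be configured separately.

```python
# Minimal multi-level cache sketch: local in-process cache -> Redis -> DB.
# The local dict and query_db() are illustrative stand-ins.
import json
import time

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

_local = {}            # key -> (expires_at, value); stands in for Ehcache
LOCAL_TTL = 30         # seconds in the in-process cache
REDIS_TTL = 300        # seconds in Redis


def query_db(key: str):
    """Placeholder for the real database query."""
    raise NotImplementedError


def get(key: str):
    # Level 1: in-process cache
    entry = _local.get(key)
    if entry and entry[0] > time.time():
        return entry[1]

    # Level 2: Redis
    cached = r.get(key)
    if cached is not None:
        value = json.loads(cached)
    else:
        # Level 3: database, then backfill Redis
        value = query_db(key)
        r.set(key, json.dumps(value), ex=REDIS_TTL)

    _local[key] = (time.time() + LOCAL_TTL, value)
    return value
```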
For the problem itself:
6. Switch between the LRU and LFU eviction policies as appropriate
7. Adjust the data expiration policy (see the sketch after this list)
   - Stagger expirations by business category (different categories get different expiration windows)
   - Set the expiration time as a fixed base plus a random offset, to dilute the number of keys that expire at the same moment
8. Use permanent keys for super-hot data
9. Regular maintenance (automatic + manual)
   Analyze traffic for data that is about to expire and decide whether to extend it; combined with access statistics, extend hot data before it expires
10. Locking (use with caution!)
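As referenced in item 7, a minimal sketch of writing keys with a fixed base TTL plus a random offset so that keys written together do not all expire at the same moment. The base TTL and jitter range are assumptions to be tuned per business.

```python
# Minimal sketch of randomized expiration: fixed base TTL + random jitter.
import json
import random

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

BASE_TTL = 3600        # 1 hour base expiration (assumed)
JITTER = 600           # up to 10 extra minutes, spread at random (assumed)


def cache_set(key: str, value) -> None:
    """Write a value with a staggered TTL to avoid concentrated expiry."""
    ttl = BASE_TTL + random.randint(0, JITTER)
    r.set(key, json.dumps(value), ex=ttl)
```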
summary
A cache avalanche means that too much cached data expires at the same time, putting sudden pressure on the database server. Effectively preventing expiration times from concentrating resolves a large part of the avalanche problem (roughly 40%); combine it with the other strategies above, monitor the server's runtime metrics, and adjust quickly based on the operating records.
Cache breakdown
The phenomenon
The system runs smoothly, then the number of database connections suddenly surges.
At this point, not many keys on the Redis server have expired, Redis memory usage is stable with no fluctuation, and the Redis server's CPU usage is normal, yet the database crashes.
Troubleshooting:
- A single key in Redis expires, and that key receives a huge number of accesses
- Many concurrent requests for this data go straight through the application server to Redis, and none of them hit
- Because of the misses, a large number of queries for the same data are launched against the database within a short time
The difference from a cache avalanche is that breakdown concerns a single key, whereas an avalanche involves many keys.
When a key's cache expires at a certain point in time and a large number of concurrent requests for that key arrive at exactly that moment, they all find the cache expired, load the data from the back-end DB and write it back to the cache; this burst of concurrent requests can instantly overwhelm the back-end DB.
Solution
When the cache misses (the value read is empty), do not load from the DB immediately. First use a caching-tool operation that reports whether it succeeded (such as Redis's SETNX) to set a mutex key. If that operation succeeds, load from the DB and reset the cache; otherwise, retry the whole get-from-cache method. A minimal sketch follows.
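A minimal sketch of this mutex-key approach using redis-py. It uses SET with the NX and EX options (the modern equivalent of SETNX plus an expiry) so a crashed worker cannot hold the lock forever. The key names, TTLs, retry loop and the query_db() helper are assumptions for illustration.

```python
# Minimal mutex-key sketch for cache breakdown: only one caller rebuilds the
# cache for a hot key; the others wait briefly and retry the cache read.
import json
import time

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)


def query_db(product_id: str):
    """Placeholder for the real database query."""
    raise NotImplementedError


def get_product(product_id: str, retries: int = 50):
    cache_key = f"product:{product_id}"
    lock_key = f"lock:{cache_key}"

    for _ in range(retries):
        cached = r.get(cache_key)
        if cached is not None:
            return json.loads(cached)

        # SET ... NX EX acts as SETNX with an expiry, so the lock cannot be
        # held forever if the holder dies before releasing it.
        if r.set(lock_key, "1", nx=True, ex=10):
            try:
                row = query_db(product_id)
                r.set(cache_key, json.dumps(row), ex=3600)
                return row
            finally:
                r.delete(lock_key)

        # Someone else is rebuilding the cache; wait briefly and retry.
        time.sleep(0.05)

    raise TimeoutError("cache rebuild did not finish in time")
```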
Cache penetration
The phenomenon
Cache penetration means the user queries data that does not exist in the database, so naturally it will not exist in the cache either. As a result, every such query misses the cache and goes to the database, querying data that is guaranteed not to exist. For example, with an article list, querying a nonexistent id hits the DB every time; if someone does this maliciously, the traffic is likely to hit the DB directly and damage it.
Solutions:
- If the database query returns nothing, cache a default value directly, so the second lookup hits the cache instead of going back to the database. This is the simplest and crudest approach.
- Filter requests based on rules derived from the cache key, for example when the cache key is a MAC address. This requires the keys to follow a clear rule; it relieves some of the pressure but is not a complete cure.
- Use a Bloom filter: hash all data that could possibly exist into a sufficiently large bitset, so that data which does not exist is intercepted and the query pressure never reaches the underlying storage. In plain terms, use an efficient data structure and algorithm to quickly determine whether a key exists in the database.
Concrete approaches
Because the request parameters are invalid (each request carries a parameter that does not exist), we can use a Bloom filter (BloomFilter) or a compressed filter to intercept these requests in advance, so that illegal requests never reach the database layer. A minimal sketch is shown below.
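A minimal sketch of a Redis-backed Bloom filter used as that interception layer, assuming a hypothetical bitmap key "article:bloom" and untuned size/hash-count parameters; existing ids must be registered with bloom_add() when they are created.

```python
# Minimal Redis-backed Bloom filter sketch: reject ids that cannot exist
# before touching the cache or the database. Parameters are illustrative.
import hashlib

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

BLOOM_KEY = "article:bloom"   # hypothetical bitmap key
BIT_SIZE = 1 << 24            # ~16M bits; must be sized to the data volume
HASH_COUNT = 3                # number of hash functions


def _offsets(value: str):
    """Derive HASH_COUNT bit offsets for a value via salted SHA-1 hashes."""
    for salt in range(HASH_COUNT):
        digest = hashlib.sha1(f"{salt}:{value}".encode()).hexdigest()
        yield int(digest, 16) % BIT_SIZE


def bloom_add(value: str) -> None:
    """Register an existing id (e.g. when the article is created)."""
    for offset in _offsets(value):
        r.setbit(BLOOM_KEY, offset, 1)


def bloom_might_exist(value: str) -> bool:
    """False means the id definitely does not exist; True means it may exist."""
    return all(r.getbit(BLOOM_KEY, offset) for offset in _offsets(value))


def get_article(article_id: str):
    if not bloom_might_exist(article_id):
        return None               # intercepted: never reaches cache or DB
    # ... fall through to the normal cache -> database lookup ...
```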
When we cannot find the data in the database, we can also put an empty object into the cache, so the next request for the same id is served from the cache. In this case we usually give the empty object a short expiration time, as in the sketch below.
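A minimal sketch of caching the empty object with a short TTL, assuming a hypothetical query_db() helper; the sentinel value and the 60-second TTL are illustrative choices.

```python
# Minimal "cache the miss" sketch: store a sentinel with a short TTL when the
# database lookup returns nothing, so repeated bogus ids stop hitting the DB.
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

NULL_SENTINEL = "__NULL__"   # marks "looked up, does not exist"


def query_db(article_id: str):
    """Placeholder for the real database query; returns None when missing."""
    raise NotImplementedError


def get_article(article_id: str):
    key = f"article:{article_id}"
    cached = r.get(key)
    if cached is not None:
        return None if cached == NULL_SENTINEL else json.loads(cached)

    row = query_db(article_id)
    if row is None:
        r.set(key, NULL_SENTINEL, ex=60)       # short TTL for the empty object
        return None

    r.set(key, json.dumps(row), ex=3600)       # normal data gets a longer TTL
    return row
```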