[redis] cache warm-up, cache avalanche and cache breakdown
2022-07-03 13:30:00 【Programming cheese】
Cache warm-up
Phenomenon:
The server goes down shortly after it starts.
Troubleshooting:
- The volume of requests is high
- Data throughput between master and slave is large, and data synchronization runs at a high frequency
Solution
Advance preparation:
Routinely collect statistics on data access records and identify the hot data with the highest access frequency.
If the amount of hot data is large, use an LRU-style eviction strategy to build a data retention queue (maintained manually, or with Storm + Kafka, etc.).
Preparation before startup:
Classify the data in the statistics by level, and have Redis load the higher-level hot data first.
Use multiple servers in a distributed setup to read data concurrently and speed up the loading process.
Implementation:
Use a script to trigger the data warm-up process on a fixed schedule (see the sketch after this list).
If conditions permit, use a CDN (content delivery network).
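Below is a minimal warm-up sketch. It assumes the redis-py client and a hypothetical load_from_db() helper standing in for the real database query; key names and TTLs are illustrative only.

```python
# A minimal warm-up sketch, assuming the redis-py client and a hypothetical
# load_from_db() helper that fetches a record from the primary database.
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def load_from_db(key):
    # Placeholder: replace with a real query against your database.
    return {"id": key, "value": "..."}

def warm_up(hot_keys, ttl_seconds=3600):
    """Preload hot keys into Redis before the service starts taking traffic."""
    pipe = r.pipeline()
    for key in hot_keys:
        record = load_from_db(key)
        if record is not None:
            pipe.set(key, json.dumps(record), ex=ttl_seconds)
    pipe.execute()

if __name__ == "__main__":
    # hot_keys would normally come from the access-frequency statistics above.
    warm_up(["article:1001", "article:1002", "article:1003"])
```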
Summary
Cache warm-up loads the relevant hot data into the cache system before the system starts, so that user requests do not have to query the database first and then write the result back to Redis; users query the pre-warmed data directly.
Cache avalanche
Phenomenon and troubleshooting:
- Within a short time window, a large number of keys in the cache expire at the same time (the main problem)
- Requests during this period access the expired data; Redis misses and fetches the data from the database
- The database receives a large number of requests at the same time and cannot process them in time
- Requests pile up in Redis and start to time out
- Database traffic surges and the database crashes
- After the database is restarted, there is still no usable data in the Redis cache
- Redis server resources are heavily occupied and the Redis server crashes
- The Redis cluster collapses
- The application server cannot get data in time to respond to requests; client requests keep growing, and the application server crashes
- The application servers, Redis, and databases are all restarted, yet the effect is still not ideal.
Solution
At the design level:
1. Make more of the page content static
2. Build a multi-level cache architecture, e.g. Nginx cache + Redis cache + Ehcache cache
3. Detect and optimize business operations that are seriously time-consuming on MySQL (audit the database thoroughly: e.g. timed-out queries, time-consuming large transactions, etc.)
4. Disaster warning mechanism
   Monitor Redis server performance metrics (a monitoring sketch follows this list):
   - CPU occupancy and CPU usage
   - Memory capacity
   - Average query response time
   - Number of threads
5. Rate limiting and service degradation
   Sacrifice some user experience for a short time by restricting access for part of the requests to reduce the pressure on the application server; reopen access gradually once the business is running steadily at a low rate.
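As a companion to the monitoring point above, here is a minimal sketch that reads a few fields from Redis's INFO command via the redis-py client; the thresholds are illustrative assumptions, not recommended production values.

```python
# A minimal monitoring sketch, assuming the redis-py client; thresholds are
# illustrative placeholders, not recommended production values.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def check_redis_health():
    """Read a few INFO fields and flag values that look abnormal."""
    info = r.info()  # parsed output of the INFO command
    used_memory_mb = info["used_memory"] / (1024 * 1024)
    clients = info["connected_clients"]
    ops = info.get("instantaneous_ops_per_sec", 0)

    alerts = []
    if used_memory_mb > 1024:          # assumed 1 GB memory budget
        alerts.append(f"memory usage high: {used_memory_mb:.1f} MB")
    if clients > 5000:                 # assumed connection limit
        alerts.append(f"too many connections: {clients}")
    return ops, alerts

if __name__ == "__main__":
    ops, alerts = check_redis_health()
    print(f"ops/sec={ops}, alerts={alerts or 'none'}")
```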
For the problem itself:
6. Switch between the LRU and LFU eviction policies
7. Adjust the data expiration strategy
   - Stagger expiration peaks by business validity category (different categories get different expiration periods)
   - Set the expiration time as a fixed base plus a random offset, to dilute the number of keys that expire at the same moment (see the sketch after this list)
8. Use permanent keys for super-hot data
9. Regular maintenance (automatic + manual)
   Analyze traffic for data that is about to expire and decide whether to postpone its expiration; combined with access statistics, extend the lifetime of hot data.
10. Locking (use with caution!)
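For point 7, the sketch below shows one way to add a random offset to the TTL, assuming the redis-py client; the base TTL and jitter window are illustrative.

```python
# A minimal sketch of expiration-time jitter, assuming the redis-py client.
import json
import random
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def cache_set(key, value, base_ttl=3600, jitter=600):
    """Cache a value with a TTL of base_ttl plus a random offset,
    so that keys written together do not all expire at the same moment."""
    ttl = base_ttl + random.randint(0, jitter)
    r.set(key, json.dumps(value), ex=ttl)

# Usage: keys cached in the same batch expire spread over a 10-minute window.
for i in range(100):
    cache_set(f"product:{i}", {"id": i, "name": f"item-{i}"})
```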
Summary
A cache avalanche is caused by too much data expiring at once, which puts sudden pressure on the database server. Effectively avoiding concentrated expiration times resolves a large share of avalanche incidents (roughly 40%); combine it with the other strategies above, monitor the server's runtime metrics, and adjust quickly based on the recorded behavior.
Cache breakdown
Phenomenon:
The system is running smoothly when the number of database connections suddenly surges.
At this point, not many keys on the Redis server have expired, Redis memory usage is flat with no fluctuations, and the Redis server's CPU is normal, yet the database crashes.
Troubleshooting:
- A single key in Redis expires, and that key receives a huge number of visits
- Many requests for this data go from the application server straight to Redis, and none of them hit the cache
- A large number of accesses to the same piece of data in the database are initiated within a short time
The difference from a cache avalanche is that breakdown concerns a single key's cache entry, while an avalanche involves many keys.
When a key's cache entry expires at a certain moment and a large number of concurrent requests for that key arrive at exactly that moment, these requests all see the expired cache, load the data from the backend DB, and write it back to the cache; the burst of concurrent requests can instantly overwhelm the backend DB.
Solution
When the cache misses (the value is judged to be empty), do not load from the DB immediately. Instead, first use an operation of the caching tool that reports whether the set succeeded (such as Redis's SETNX) to set a mutex key. Only when that operation returns success does the caller load from the DB and reset the cache; otherwise, it retries the whole get-from-cache flow.
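A minimal sketch of this mutex approach, assuming the redis-py client and a hypothetical load_from_db() helper; key names, TTLs, and retry counts are illustrative.

```python
# A minimal sketch of the mutex approach, assuming the redis-py client and a
# hypothetical load_from_db() helper; key names and timeouts are illustrative.
import json
import time
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def load_from_db(key):
    # Placeholder: replace with a real database query.
    return {"id": key, "value": "..."}

def get_with_mutex(key, ttl=3600, lock_ttl=10, retries=50):
    for _ in range(retries):
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)
        # SET key value NX EX lock_ttl behaves like SETNX with an expiry,
        # so a crashed lock holder cannot leave the lock stuck forever.
        if r.set(f"lock:{key}", "1", nx=True, ex=lock_ttl):
            try:
                value = load_from_db(key)
                r.set(key, json.dumps(value), ex=ttl)
                return value
            finally:
                r.delete(f"lock:{key}")
        time.sleep(0.05)  # another worker holds the lock; wait, then retry the get
    raise TimeoutError(f"could not rebuild cache for {key}")
```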
Cache penetration
Phenomenon:
Cache penetration happens when a user queries data that does not exist in the database, so naturally it is not in the cache either. Such a query misses the cache every time and goes to the database every time, even though the data is guaranteed not to exist. For example, with an article list, querying a non-existent id hits the DB on every access; if someone does this maliciously, the requests are likely to hit the DB directly and take it down.
Solutions:
- If the database query returns nothing, store a default value in the cache directly. The next lookup then gets a value from the cache instead of continuing to hit the database. This is the simplest and crudest method.
- Filter based on rules for the cache key, e.g. when the cache key is a MAC address. This requires the keys to follow a strict format; it relieves part of the pressure but is not a cure.
- Use a Bloom filter: hash all possibly-existing data into a sufficiently large bit set, so that requests for non-existent data are intercepted and the query pressure never reaches the underlying storage system. In plain terms, it uses an efficient data structure and algorithm to quickly determine whether your key exists in the database.
Specific approaches
Because the requested parameter is invalid (each request asks for a parameter that does not exist), we can use a Bloom filter (BloomFilter) or a compressed filter to intercept it up front: if the request is invalid, it never reaches the database layer!
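Below is a minimal, self-contained Bloom filter sketch using plain hashing over a bytearray rather than a production library; in a real deployment the bit-array size and hash count would be derived from the expected element count and target false-positive rate.

```python
# A minimal, self-contained Bloom filter sketch using hashlib over a bytearray.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1 << 20, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive several bit positions from salted hashes of the item.
        for i in range(self.num_hashes):
            digest = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        """False means definitely absent; True means the item may exist."""
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

# Usage: load all valid article ids at startup, then reject unknown ids early.
bf = BloomFilter()
for article_id in ["1001", "1002", "1003"]:   # would come from the database
    bf.add(article_id)

print(bf.might_contain("1001"))  # True
print(bf.might_contain("9999"))  # almost certainly False -> skip the DB query
```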
When the data cannot be found in the database, we also cache an empty object. The next request for the same data can then be served from the cache. In this case, we usually give the empty object a shorter expiration time.
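A minimal sketch of caching empty results with a short TTL, assuming the redis-py client and a hypothetical query_db() helper; the sentinel value, key format, and TTLs are illustrative.

```python
# A minimal sketch of caching empty results with a short TTL, assuming the
# redis-py client and a hypothetical query_db() helper.
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)
NULL_MARKER = "__null__"   # sentinel stored for ids that do not exist

def query_db(article_id):
    # Placeholder: returns None when the id does not exist in the database.
    return None

def get_article(article_id, ttl=3600, null_ttl=60):
    key = f"article:{article_id}"
    cached = r.get(key)
    if cached is not None:
        return None if cached.decode() == NULL_MARKER else json.loads(cached)
    record = query_db(article_id)
    if record is None:
        # Cache the "does not exist" answer briefly so repeated requests
        # for bad ids stop hammering the database.
        r.set(key, NULL_MARKER, ex=null_ttl)
        return None
    r.set(key, json.dumps(record), ex=ttl)
    return record
```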