Three Big Problems: Redis Cache Exceptions and How to Handle Them
2022-06-22 23:58:00 【Tencent Cloud Developer】
Introduction | Redis is a high-performance in-memory key-value data structure store. It is widely used in everyday development for caching, counters, message queues, leaderboards, and similar scenarios. As the most common caching layer, it plays an irreplaceable role in speeding up data queries and shielding the database. In practice, however, a number of Redis cache anomalies can occur. This article summarizes those cache exceptions and the usual ways to handle them.
1. Background
Redis is a completely open-source, BSD-licensed, high-performance key-value data structure store. It supports persistence, so data held in memory can be saved to disk, and beyond simple key-value strings it provides structures such as list, set, zset, and hash. Redis also supports data backup in master-slave mode to improve availability. Above all, it is fast for both reads and writes, which is why it is the most widely used caching solution in day-to-day development. In practice, however, anomalies such as cache avalanche, cache breakdown, and cache penetration can occur, and ignoring them can have disastrous consequences. The rest of this article analyzes these cache exceptions and the common ways to handle them.
2. Cache Avalanche
(1) What is it?
A large number of requests that should have been served by the Redis cache are instead sent to the database within a short period of time. The load on the database rises sharply; in severe cases the database may crash and bring the whole system down with it. Like an avalanche, one failure triggers a chain reaction, hence the name cache avalanche.
(2) Why does it happen?
There are two common causes:
- A large amount of cached data expires at the same time, so requests that should have hit the cache have to go to the database instead.
- Redis itself is down and cannot serve requests, so the requests naturally fall through to the database.
(3) What to do
When a large amount of cached data expires at the same time:
- When setting expiration times, check whether a large number of keys would expire at the same moment. If so, add a small random offset to each TTL so the expirations are spread out instead of all landing at once (see the sketch after this list).
- Add a mutex, so that cache rebuilding is not performed concurrently.
- Use a double-key strategy: the primary key holds the original cache and a backup key holds a copy. When the primary key expires, the backup key can still be served; give the primary key a short TTL and the backup key a long one.
- Update the cache in the background, using scheduled tasks or a message queue to refresh or remove Redis entries.
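A minimal sketch of the random-TTL idea, using the Python redis-py client; the base TTL of 3600 seconds and the 300-second jitter window are arbitrary illustrative values, not prescriptions:

```python
import random
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def set_with_jitter(key, value, base_ttl=3600, jitter=300):
    """Cache a value with a randomized TTL so that keys written together
    do not all expire at the same moment."""
    ttl = base_ttl + random.randint(0, jitter)
    r.set(key, value, ex=ttl)

# Usage: keys written in the same batch now expire over a 5-minute window.
set_with_jitter("product:1001", "{...json...}")
```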
When Redis itself fails:
- Prevention: build a highly available cluster with master-slave nodes, so that when the master Redis instance fails, a replica can be promoted to master quickly and keep serving requests.
- Mitigation: once the failure has happened, use circuit breaking or rate limiting to keep the flood of requests from crashing the database (a minimal rate-limiting sketch follows this list). Circuit breaking is the blunter tool: it stops the service entirely until Redis recovers. Rate limiting is gentler: it lets part of the traffic through instead of cutting everything off. Which one to choose still depends on the actual business.
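As a rough illustration of the "rate limit before the database" idea, here is a minimal in-process token-bucket sketch. The rate and capacity numbers are made up for illustration; a real deployment would more likely put this logic in a gateway or middleware layer.

```python
import time

class TokenBucket:
    """Tiny in-process rate limiter: requests that cannot take a token
    are rejected instead of being forwarded to the database."""
    def __init__(self, rate=100, capacity=200):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiter = TokenBucket()

def query_db_protected(run_query):
    # run_query is a hypothetical callable that actually hits the database.
    if not limiter.allow():
        raise RuntimeError("too many requests, database query rejected")
    return run_query()
```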
3. Cache Breakdown
(1) What is it?
Cache breakdown usually occurs in high-concurrency systems: a large number of concurrent requests all ask for data that is not in the cache but does exist in the database. Because none of them find it in the cache, they all go to the database at the same time, and the database load spikes instantly. Unlike a cache avalanche, cache breakdown is about concurrent queries for the same piece of data, whereas an avalanche means many different keys have expired and a flood of cache misses hits the database.
(2) Why does it happen?
The common cause is that the cache entry for a piece of hot data has expired. Because the data is hot, request concurrency is high, so when the entry expires a large number of requests still arrive at the same moment; before the cache can be rebuilt, all of them fall through to the database.
(3) What to do
There are two common solutions:
- The simple, blunt option: do not set an expiration time on hot data at all. It never expires, so the problem never occurs; if the data needs to be cleaned up later, do it in a background job.
- Add a mutex: when the entry expires, only the first request acquires the lock, queries the database, and writes the result back to the cache; the other requests block until the lock is released. By then the cache has been refreshed, so subsequent requests are served from the cache and no breakdown occurs (a minimal sketch follows this list).
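A minimal sketch of the mutex approach with redis-py, using SET NX as the lock. `load_from_db` is a hypothetical loader, and the 10-second lock TTL and retry parameters are arbitrary safety margins:

```python
import time
import redis

r = redis.Redis(decode_responses=True)

def get_with_mutex(key, load_from_db, ttl=600, lock_ttl=10, retries=50):
    for _ in range(retries):
        value = r.get(key)
        if value is not None:
            return value
        lock_key = f"lock:{key}"
        # SET NX acts as a distributed mutex: only one caller rebuilds the cache.
        if r.set(lock_key, "1", nx=True, ex=lock_ttl):
            try:
                value = load_from_db(key)      # hits the database exactly once
                r.set(key, value, ex=ttl)
                return value
            finally:
                r.delete(lock_key)
        # Another caller holds the lock; wait briefly and retry the cache,
        # never the database.
        time.sleep(0.05)
    raise TimeoutError(f"could not rebuild cache for {key}")
```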
4. Cache Penetration
(1) What is it?
Cache penetration means the requested data is in neither Redis nor the database. Every such request first misses the cache and then queries the database, which also comes up empty, so both lookups are wasted. Requests like this effectively bypass the cache and hit the database directly. An attacker can exploit this by deliberately sending frequent requests for null or otherwise nonexistent values, putting enormous pressure on the database.
(2) Why does it happen?
The cause is easy to understand: if users have not yet performed the operations that would create a particular piece of information, then neither the database nor the cache has a corresponding record, and the problem above can easily appear.
(3) What to do
There are generally three ways to handle cache penetration:
- Reject invalid requests up front, mainly through parameter validation and authentication, so that large numbers of illegal requests are intercepted at the entrance. This is a necessary step in real business development anyway.
- Cache null or default values: if a key is found in neither the cache nor the database, still cache the empty result, with a short expiration time. With the placeholder stored in the cache, the second access hits the cache instead of the database, which prevents a flood of malicious requests from repeatedly attacking with the same key (see the combined sketch after this list).
- Use a Bloom filter to decide quickly whether the data can exist at all. A Bloom filter, in short, uses several independent hash functions to answer membership queries within a given space budget and error rate. A single hash function has an obvious risk of collisions; using several different hash functions together is the core idea that reduces those collisions. The advantages are very high space efficiency and very short query times, far better than other approaches; the drawback is a certain false-positive rate. Passing the Bloom filter check does not guarantee that a key really exists, since collisions are always possible in theory, however unlikely. But if a key fails the check, it definitely does not exist, and that alone filters out the vast majority of requests for nonexistent keys, which is usually good enough (see the combined sketch after this list).
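The sketch below combines the last two ideas: a tiny hand-rolled Bloom filter (multiple hash functions over a shared bit array) as a fast existence check, plus caching a null sentinel with a short TTL when the database also has no row. `load_from_db`, the filter size, and the hash count are all illustrative assumptions, not part of the original article.

```python
import hashlib
import redis

r = redis.Redis(decode_responses=True)

class BloomFilter:
    """Minimal Bloom filter: k hash functions over one bit array.
    May report false positives, never false negatives."""
    def __init__(self, size_bits=1 << 20, num_hashes=5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key):
        for i in range(self.num_hashes):
            digest = hashlib.md5(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

existing_keys = BloomFilter()   # populated with all legal keys at startup

NULL_SENTINEL = "__NULL__"

def get_record(key, load_from_db, ttl=600, null_ttl=60):
    # 1. Keys the filter has never seen cannot exist; reject immediately.
    if not existing_keys.might_contain(key):
        return None
    # 2. Normal cache lookup, honouring the cached null sentinel.
    value = r.get(key)
    if value == NULL_SENTINEL:
        return None
    if value is not None:
        return value
    # 3. Fall back to the database and cache whatever we learned.
    value = load_from_db(key)
    if value is None:
        r.set(key, NULL_SENTINEL, ex=null_ttl)   # short TTL for "no such row"
        return None
    r.set(key, value, ex=ttl)
    return value
```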
5. Other
Besides the three common Redis cache exceptions above, two more terms come up frequently: cache warm-up and cache degradation. They are not so much anomalies as two optimization techniques.
(1) Cache warm-up
Cache warm-up means loading the relevant data into the cache system directly around the time the system goes live, instead of waiting for user traffic to do it. This spares users the "query the database first, then populate the cache" round trip: they read the pre-warmed cache data directly, and the database is spared the traffic spike that high-concurrency access would otherwise cause right after launch. Depending on the data volume, there are several approaches:
- Small amount of data: load it automatically when the application starts (see the sketch after this list).
- Large amount of data: refresh the cache periodically in the background.
- Huge amount of data: preload the cache only for hot data.
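A minimal warm-up sketch for the "small amount of data" case: at application start, a hypothetical `load_hot_items_from_db` pulls the hot rows once and writes them into Redis before any user traffic arrives.

```python
import redis

r = redis.Redis(decode_responses=True)

def warm_up_cache(load_hot_items_from_db, ttl=3600):
    """Run once during application startup, before serving requests."""
    for key, value in load_hot_items_from_db():   # e.g. top-N products, config rows
        r.set(key, value, ex=ttl)
```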
(2) Cache degradation
Cache degradation means that when the cache is invalid or the cache service is having problems, we deliberately avoid falling back to the database, so that a cache failure does not avalanche into a database failure, while still keeping the service basically available, even if in a degraded form. For cached data that is not critical, a degradation strategy is acceptable. There are two common approaches:
- Serve the data from an in-process (local memory) cache.
- Return a default value configured in the system (a minimal sketch follows this list).
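A minimal degradation sketch: if Redis is unreachable, fall back first to a small in-process copy of the data and then to a default value, instead of passing the traffic on to the database. The local dictionary and the default value are illustrative assumptions.

```python
import redis

r = redis.Redis(decode_responses=True)

local_cache = {}          # small in-memory copy of non-critical data
DEFAULT_VALUE = "N/A"     # placeholder returned when nothing better is available

def get_with_degradation(key):
    try:
        value = r.get(key)
        if value is not None:
            local_cache[key] = value      # keep a local copy for future fallbacks
            return value
    except redis.exceptions.ConnectionError:
        pass                              # cache service is down; degrade below
    # Degraded path: never touch the database for non-critical data.
    return local_cache.get(key, DEFAULT_VALUE)
```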
6. Summary
This article has summarized the common Redis cache exceptions (cache avalanche, cache breakdown, and cache penetration), their causes, and the typical ways to handle them, along with cache warm-up and cache degradation as related optimizations.
About the author: Yinzhehoao, operations development engineer at Tencent.