
Reasons and solutions of redis cache penetration and avalanche

2022-07-05 00:28:00 ZZYSY~

High availability of services

Using Redis as a cache greatly improves an application's performance and efficiency, especially for data queries. But at the same time, it also introduces problems. The most important one is data consistency, which, strictly speaking, has no perfect solution. If a business has strict data-consistency requirements, it cannot use a cache.

Other typical problems are cache penetration, cache avalanche, and cache breakdown. The industry has well-established solutions for each of them.

Cache penetration (the key cannot be found)


Concept

The concept of cache penetration is simple: a user queries for some data, the Redis in-memory database does not have it (a cache miss), so the query falls through to the persistence-layer database, which does not have it either, and the query fails. When many users issue such queries, none of them hit the cache, so they all go to the persistence-layer database, putting enormous pressure on it. This is cache penetration.

Solution

  • Bloom filter

A Bloom filter is a probabilistic data structure that stores hashes of all possible query parameters. Each request is checked against it at the access layer first; if the filter says the key definitely does not exist, the request is discarded, which shields the underlying storage system from the query pressure.
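The filtering step described above can be sketched in pure Python. This is a minimal illustration, not a production implementation (real deployments typically use a library such as RedisBloom); the sizes and the SHA-256-based hashing scheme are assumptions chosen for clarity:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over an m-bit array."""

    def __init__(self, size_bits=1 << 16, num_hashes=4):
        self.m = size_bits
        self.k = num_hashes
        self.bits = 0  # a Python int doubles as an arbitrarily long bit array

    def _positions(self, key):
        # Derive k independent bit positions from salted SHA-256 digests.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key):
        # False means "definitely absent" -> reject before touching the DB.
        return all(self.bits >> pos & 1 for pos in self._positions(key))

# Preload the filter with every key that actually exists in storage.
bf = BloomFilter()
for existing_id in ("user:1", "user:2", "user:3"):
    bf.add(existing_id)
```

A request for a key the filter rejects never reaches Redis or the database; the trade-off is a small false-positive rate (the filter may say "maybe present" for an absent key), but it never produces false negatives.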

  • Caching empty objects

When the storage layer misses, the returned empty object is cached as well, with an expiration time. Subsequent accesses to that key are then served from the cache, protecting the back-end data source.
However, this approach has two problems:

  1. If null values are cached, the cache needs more space to store more keys, since there may be a large number of null keys;
  2. Even if an expiration time is set on null values, the cache layer and the storage layer will be inconsistent for a while, which affects businesses that require consistency.
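The null-caching pattern and its short-TTL mitigation can be sketched as follows. The dictionary-based cache and `DB` stand in for Redis and the persistence layer; the sentinel value, TTLs, and function names are illustrative assumptions:

```python
import time

DB = {"user:1": {"name": "Alice"}}   # stand-in for the persistence layer
NULL_SENTINEL = object()             # marks "key known to be absent"

cache = {}                           # {key: (value, expire_at)}

def get_with_null_caching(key, ttl=60, null_ttl=5):
    entry = cache.get(key)
    if entry and entry[1] > time.time():
        value = entry[0]
        return None if value is NULL_SENTINEL else value
    value = DB.get(key)              # cache miss -> query the database
    if value is None:
        # Cache the empty result with a SHORT ttl: this absorbs repeated
        # lookups of missing keys while limiting the inconsistency window.
        cache[key] = (NULL_SENTINEL, time.time() + null_ttl)
        return None
    cache[key] = (value, time.time() + ttl)
    return value
```

Giving null entries a much shorter TTL than real entries is the usual compromise between problem 1 (space) and problem 2 (inconsistency).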

Cache breakdown (too much concurrency on an expired key)

Summary

Note the difference between this and cache penetration. Cache breakdown means a single key is extremely hot and constantly carries large concurrent traffic, all concentrated on this one point. At the instant that key expires, the sustained high concurrency breaks through the cache and hits the database directly, like punching a hole in a barrier.
At the moment a key expires, a large number of requests access it concurrently. Such data is generally hot data. Because the cache has expired, all of these requests query the database for the latest data at the same time and write it back to the cache, causing a transient surge of pressure on the database.

Solution

  • Set hot data to never expire

At the cache layer, do not set an expiration time on hot keys, so the problems that follow a hot key's expiry never arise.
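In practice, "never expire" is often implemented as *logical* expiration: the key physically never expires in Redis, but the cached value carries its own expiry timestamp, and when that has passed, a single caller rebuilds the value while everyone else keeps serving the stale copy. The sketch below illustrates that pattern with an in-memory cache; all names and TTLs are assumptions:

```python
import threading
import time

DB = {"product:1": "hot item"}   # stand-in for the persistence layer
cache = {}                       # {key: (value, logical_expire_at)}
_refreshing = set()              # keys currently being rebuilt
_lock = threading.Lock()

def rebuild(key, ttl=30):
    """Reload a key from the DB and push its logical expiry forward."""
    cache[key] = (DB.get(key), time.time() + ttl)

def get_logical_expire(key, ttl=30):
    # The key physically never expires, so this lookup always succeeds.
    value, expire_at = cache[key]
    if expire_at <= time.time():
        with _lock:
            if key not in _refreshing:   # let a single caller rebuild
                _refreshing.add(key)
                try:
                    rebuild(key, ttl)
                finally:
                    _refreshing.discard(key)
    return value                 # possibly stale, but never a DB stampede

rebuild("product:1")             # preheat before traffic arrives
```

The cost of this design is that some readers briefly see stale data; the benefit is that the database sees at most one rebuild per key, no matter how hot the key is.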

  • Add a mutex lock

Distributed lock: use a distributed lock to guarantee that, for each key, only one thread at a time queries the back-end service. The other threads cannot acquire the lock, so they simply wait. This shifts the high-concurrency pressure onto the distributed lock, which is therefore put under great strain.

Cache avalanche

Concept

Cache avalanche: at some moment in time, a large set of cached keys expires all at once, or the Redis server goes down.

One cause of an avalanche: say it is almost midnight before a Double Twelve (December 12) flash sale, and a wave of hot products is loaded into the cache with, suppose, a one-hour TTL. Then at one o'clock in the morning, the cache for this whole batch of products has expired, and every access to these products falls through to the database, which experiences a periodic pressure peak. All requests reach the storage layer, its call volume skyrockets, and it may go down.
Actually, concentrated expiry is not the most deadly form. A deadlier cache avalanche is a cache server node going down or losing its network connection. An avalanche formed naturally by expiry still rebuilds the cache within some time window, during which the database can usually withstand the load; it is only a periodic pressure spike. But when a cache service node goes down, the pressure on the database server is unpredictable, and it may well crush the database in an instant.
(This is what Double Eleven sales do: shut down some non-essential services to guarantee the availability of the main ones.)

Solution

  • Redis high availability

The idea here is: since a Redis instance can go down, add more Redis instances, so that when one fails the others keep working. In practice this means building a cluster (possibly with multi-site active-active deployment).

  • Rate limiting and degradation

The idea of this solution is: after a cache failure, control the number of threads that read the database and write the cache, by locking or queueing. For example, for a given key, allow only one thread to query the data and write the cache, while the other threads wait.

  • Data preheating

Data preheating means that, before deployment, the likely-hot data is accessed in advance, so that much of the data expected to see heavy access is already loaded into the cache. Before a large-concurrency event, manually trigger the loading of these cache keys, and set different expiration times on them to spread the cache expiry moments out as evenly as possible.
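Preheating with staggered TTLs can be sketched as below. The `FakeRedis` stand-in, `preheat` helper, and jitter range are assumptions for illustration; real code would call `setex` on a redis-py client against a live server:

```python
import random

BASE_TTL = 3600                  # base cache lifetime: one hour

class FakeRedis:
    """Minimal in-memory stand-in for a Redis client; the real redis-py
    client exposes the same setex(key, ttl, value) call."""
    def __init__(self):
        self.store = {}
    def setex(self, key, ttl, value):
        self.store[key] = (value, ttl)

def preheat(client, hot_items):
    """Load hot keys into the cache before the traffic spike, spreading
    expiry times out so they do not all lapse in the same instant."""
    for key, value in hot_items.items():
        # Random jitter (up to 10 minutes here) keeps the batch of keys
        # from expiring together and causing a periodic pressure peak.
        ttl = BASE_TTL + random.randint(0, 600)
        client.setex(key, ttl, value)
```

Even a modest jitter window turns one sharp expiry spike at the database into a gentle slope of rebuild traffic.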

Original site

Copyright notice
This article was written by [ZZYSY~]; please include a link to the original when reposting. Thanks.
https://yzsam.com/2022/02/202202141120045652.html