Redis cache and existing problems--cache penetration, cache avalanche, cache breakdown and solutions
2022-08-05 08:07:00 【Wind wind】
Redis cache
A cache is a buffer for data exchange: a temporary place to store data that generally offers high read and write performance.
Why use Redis as a cache? The I/O speed of an ordinary disk-based database is often too slow for business needs, so Redis (an in-memory database) can be used as cache middleware: frequently accessed data from the database is stored in Redis, which speeds up data access and meets business requirements.
Using a Redis cache reduces backend load, improves read and write efficiency, and shortens access time.
The general flow of using Redis as a cache is as follows:
When a client requests data, the application first queries Redis. If the data exists in Redis, it is returned directly. If it does not exist, the application queries the database, writes the returned data into Redis so that the next access can hit the cache, and then returns the data to the client.
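A minimal cache-aside read sketch using the Jedis client; the key prefix, the 30-minute TTL, and the `loadUserFromDb` helper are assumptions made purely for illustration:

```java
import redis.clients.jedis.Jedis;

public class UserCache {
    private static final int CACHE_TTL_SECONDS = 30 * 60; // assumed 30-minute TTL

    private final Jedis jedis = new Jedis("localhost", 6379);

    /** Cache-aside read: try Redis first, fall back to the database, then write back. */
    public String queryUser(String userId) {
        String key = "cache:user:" + userId;        // hypothetical key prefix
        String cached = jedis.get(key);
        if (cached != null) {
            return cached;                          // cache hit: return directly
        }
        String fromDb = loadUserFromDb(userId);     // cache miss: query the database
        if (fromDb != null) {
            // write back so the next access hits the cache, with an expiration time
            jedis.setex(key, CACHE_TTL_SECONDS, fromDb);
        }
        return fromDb;
    }

    private String loadUserFromDb(String userId) {
        return null; // placeholder for a real database query
    }
}
```

The same read-through pattern applies to any entity; only the key prefix and the database query change.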
Cache Update Policy
- Low consistency requirements: use the memory eviction mechanism that comes with Redis.
- High consistency requirements:
Read operation: if the cache hits, return directly; if the cache misses, query the database, write the result to the cache, and set a TTL.
Write operation: write to the database first, then update the cache, and make sure the database operation and the cache operation succeed or fail together (atomicity).
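A hedged sketch of the write path under the high-consistency policy, continuing the `UserCache` class from the earlier sketch (`updateUserInDb` is a placeholder for the real database update):

```java
// Inside the UserCache class from the read-path sketch above.
public void updateUser(String userId, String newValue) {
    String key = "cache:user:" + userId;
    updateUserInDb(userId, newValue);               // 1. write the database first
    jedis.setex(key, CACHE_TTL_SECONDS, newValue);  // 2. then update the cache and reset its TTL
    // In practice the two steps should be wrapped in one transaction (or the cache entry
    // deleted instead of updated) so the database write and cache update stay consistent.
}

private void updateUserInDb(String userId, String newValue) {
    // placeholder for a real database update
}
```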
Notes on using Redis cache
Cache penetration
Cache penetration means that the data requested by the client exists in neither the cache nor the database, so the cache never takes effect and every such request falls through to the database. If such requests keep being issued, they put enormous pressure on the database.
Solution:
1. Cache empty objects
For data that does not exist, also create an entry in Redis with an empty value and a relatively short TTL (a sketch follows the pros and cons below).
Advantages: Simple implementation and easy maintenance;
Disadvantages: Additional memory consumption; there will be short-term data inconsistencies.
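A hedged sketch of caching empty objects, reusing the fields of the `UserCache` sketch above; the 2-minute TTL for empty values is an arbitrary assumption:

```java
// Inside the UserCache class from the earlier sketch.
public String queryUserCachingNulls(String userId) {
    String key = "cache:user:" + userId;
    String cached = jedis.get(key);
    if (cached != null) {
        // An empty string marks a previously confirmed "does not exist" answer.
        return cached.isEmpty() ? null : cached;
    }
    String fromDb = loadUserFromDb(userId);         // placeholder database query
    if (fromDb == null) {
        jedis.setex(key, 120, "");                  // cache the empty object with a short TTL
    } else {
        jedis.setex(key, CACHE_TTL_SECONDS, fromDb);
    }
    return fromDb;
}
```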
2. Bloom filter
Place a Bloom filter in front of Redis: before a request reaches the cache, check whether the requested key can possibly exist; if the filter says it does not exist, reject the request directly (see the sketch below).
Advantages: Low memory usage.
Disadvantages: Complex implementation; false positives are possible.
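One possible way to implement this is with Guava's `BloomFilter` placed in front of the cache; the expected element count and false-positive rate below are assumed values:

```java
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;
import java.nio.charset.StandardCharsets;

public class UserIdBloomFilter {
    // Sized for roughly 1,000,000 ids with a 1% false-positive rate (assumed figures).
    private final BloomFilter<String> filter =
            BloomFilter.create(Funnels.stringFunnel(StandardCharsets.UTF_8), 1_000_000, 0.01);

    /** Called when an id is created or preloaded from the database. */
    public void register(String userId) {
        filter.put(userId);
    }

    /** false = the id definitely does not exist, so the request can be rejected up front;
        true  = the id probably exists (with a small chance of a false positive). */
    public boolean mightExist(String userId) {
        return filter.mightContain(userId);
    }
}
```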
3. Other ways
- Validate the basic format of request parameters;
- Strengthen user permission checks;
- Apply rate limiting to hot-spot parameters.
Cache Avalanche
Cache avalanche means that a large number of cache keys expire at the same time, or the Redis service goes down, so a flood of requests reaches the database and puts it under huge pressure. The solutions follow from these causes.
Solutions:
- Add a random value to the TTL of different keys (see the sketch after this list);
- Use Redis Cluster to improve service availability;
- Add degradation and rate-limiting policies to the cache service;
- Add multi-level caching to the business.
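A minimal sketch of adding random jitter to key TTLs so that entries written at the same time do not all expire together; the base TTL and jitter range are assumptions:

```java
import java.util.concurrent.ThreadLocalRandom;
import redis.clients.jedis.Jedis;

public class JitteredCacheWriter {
    private final Jedis jedis = new Jedis("localhost", 6379);

    /** Writes a cache entry with a 30-minute base TTL plus up to 5 minutes of random jitter. */
    public void setWithJitter(String key, String value) {
        int baseTtlSeconds = 30 * 60;
        int jitterSeconds = ThreadLocalRandom.current().nextInt(0, 5 * 60);
        jedis.setex(key, baseTtlSeconds + jitterSeconds, value);
    }
}
```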
Cache breakdown
Cache breakdown, also called the hot-key problem, occurs when a key that is under highly concurrent access suddenly expires and rebuilding its cache entry is relatively expensive. In an instant, countless requests hit the database and deal it a huge blow.
Solution:
1. Mutual exclusion lock
When a hot key suddenly expires, many requests try to query the database at the same time, which puts a huge load on it. Only one request actually needs to query the database and rebuild the cache; the other requests can read from the cache once it has been rebuilt. This can be achieved with a mutex: lock the cache-rebuild process so that only one thread executes the rebuild while the other threads wait (see the sketch after the pros and cons below).
Advantages: Simple implementation; no extra memory consumption; good consistency.
Disadvantages: Slow performance due to waiting; risk of deadlock.
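A hedged sketch of the mutex approach using Redis `SET NX EX` as the lock (Jedis client; the lock key, timeouts, and `loadFromDb` helper are assumptions):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class MutexRebuildCache {
    private final Jedis jedis = new Jedis("localhost", 6379);

    public String queryWithMutex(String key) throws InterruptedException {
        String cached = jedis.get(key);
        if (cached != null) {
            return cached;                                   // cache hit: no rebuild needed
        }
        String lockKey = "lock:" + key;                      // hypothetical lock-key naming
        // SET lockKey "1" NX EX 10: only one thread acquires the lock; it auto-expires after 10 s.
        String acquired = jedis.set(lockKey, "1", SetParams.setParams().nx().ex(10));
        if (!"OK".equals(acquired)) {
            Thread.sleep(50);                                // another thread is rebuilding: wait and retry
            return queryWithMutex(key);
        }
        try {
            String fromDb = loadFromDb(key);                 // placeholder for the real database query
            if (fromDb != null) {
                jedis.setex(key, 30 * 60, fromDb);           // rebuild the cache (assumed 30-minute TTL)
            }
            return fromDb;
        } finally {
            jedis.del(lockKey);                              // release the lock
        }
    }

    private String loadFromDb(String key) {
        return null; // placeholder
    }
}
```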
2. Set the hot key to never expire and use a logical expiration time instead.
The hot key's cache entry never expires in Redis, but a logical expiration time is stored with the value. When the data is queried, the logical expiration time is checked to decide whether the cache needs to be rebuilt. Rebuilding is still guarded by a mutex so that only one thread performs it, and the rebuild runs asynchronously in a separate thread; other threads do not wait and simply return the old data.
Advantages: The thread does not need to wait, and the performance is better.
Disadvantages: No consistency guarantee (weak consistency); additional memory consumption; complex implementation.
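A hedged sketch of the logical-expiration approach; `RedisData`, the lock key, the JSON (de)serialization placeholders, and the 30-minute logical TTL are all assumptions, and a production version would use a thread-safe, pooled Redis client rather than a single `Jedis` instance shared with a thread pool:

```java
import java.time.LocalDateTime;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class LogicalExpireCache {
    /** Wrapper that stores the business data together with a logical expiration time. */
    static class RedisData {
        LocalDateTime expireTime;
        String data;
    }

    private final Jedis jedis = new Jedis("localhost", 6379);
    private final ExecutorService rebuildPool = Executors.newFixedThreadPool(4);

    public String queryWithLogicalExpire(String key) {
        RedisData redisData = deserialize(jedis.get(key));   // the key itself carries no Redis TTL
        if (redisData == null) {
            return null;                                     // hot keys are preloaded, so a miss means "no data"
        }
        if (redisData.expireTime.isAfter(LocalDateTime.now())) {
            return redisData.data;                           // logically still valid
        }
        // Logically expired: only the thread that grabs the lock rebuilds, and it does so asynchronously.
        String lockKey = "lock:" + key;                      // hypothetical lock-key naming
        if ("OK".equals(jedis.set(lockKey, "1", SetParams.setParams().nx().ex(10)))) {
            rebuildPool.submit(() -> {
                try {
                    RedisData fresh = new RedisData();
                    fresh.data = loadFromDb(key);            // placeholder for the real database query
                    fresh.expireTime = LocalDateTime.now().plusMinutes(30); // assumed logical TTL
                    jedis.set(key, serialize(fresh));
                } finally {
                    jedis.del(lockKey);
                }
            });
        }
        return redisData.data;                               // callers return the old value without waiting
    }

    private String loadFromDb(String key) { return null; }        // placeholder
    private RedisData deserialize(String json) { return null; }   // placeholder for JSON parsing
    private String serialize(RedisData data) { return ""; }       // placeholder for JSON serialization
}
```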