
Project practice of data caching using Redis


Contents

1. Introduction

2. Business process for adding information to the cache

3. Implementation code

3.1 Code implementation (adding information to the cache)

3.2 Cache update strategy

3.3 Implementing active update

4. Cache penetration

4.1 Solving cache penetration (caching an empty object)

5. Cache avalanche

6. Cache breakdown

6.1 Mutex code

6.2 Logical expiration implementation

1. Introduction

What is caching good for?

It reduces the number of requests that reach the database and relieves pressure on the server.

It improves read and write efficiency.

What are the disadvantages of caching?

Data consistency between the database and the cache has to be maintained.

The caching code has to be written and maintained.

Caches are usually deployed as clusters, which adds operations and maintenance cost.

2. Business process for adding information to the cache

From the figure above you can clearly see where Redis sits in the project: it is middleware between the database and the client, and it also acts as a shield for the database. Redis helps the database block requests and prevents them from going straight to the database, which raises the response rate and greatly improves the stability of the system.

3. Implementation code

The following code uses querying shop information as the example scenario; the corresponding flow chart is shown above.

3.1 Code implementation (adding information to the cache)

public static final String SHOPCACHEPREFIX = "cache:shop:";

@Autowired
private StringRedisTemplate stringRedisTemplate;

// JSON tool
ObjectMapper objectMapper = new ObjectMapper();

@Override
public Result queryById(Long id) {
    // Query the shop cache from Redis
    String cacheShop = stringRedisTemplate.opsForValue().get(SHOPCACHEPREFIX + id);
    // Check whether the data exists in the cache
    if (!StringUtil.isNullOrEmpty(cacheShop)) {
        // It exists in the cache, so return it directly
        try {
            // Convert the JSON string into an object
            Shop shop = objectMapper.readValue(cacheShop, Shop.class);
            return Result.ok(shop);
        } catch (JsonProcessingException e) {
            e.printStackTrace();
        }
    }
    // Not in the cache, so query the database
    Shop shop = getById(id);
    // Not in the database either: return a failure
    if (null == shop) {
        return Result.fail("Information does not exist");
    }
    // Found in the database: write the information to Redis with a 30-minute TTL
    try {
        String shopJSon = objectMapper.writeValueAsString(shop);
        stringRedisTemplate.opsForValue().set(SHOPCACHEPREFIX + id, shopJSon, 30, TimeUnit.MINUTES);
    } catch (JsonProcessingException e) {
        e.printStackTrace();
    }
    // Return the result
    return Result.ok(shop);
}

3.2 Cache update strategy

Database and cache consistency: when the data in the database is modified, how should the corresponding cached data be handled?

|                  | Memory eviction | Timeout eviction | Active update |
|------------------|-----------------|------------------|---------------|
| Description      | Nothing to maintain yourself; rely on Redis's own memory eviction mechanism to discard data | Add a TTL to the cached data | Write business logic that updates the cache whenever the database is modified |
| Consistency      | Poor | Average | Good |
| Maintenance cost | None | Low | High |

In practice the choice depends on the business scenario:

High consistency requirements: choose active update.

Low consistency requirements: memory eviction or timeout eviction is enough.

3.3 Implementing active update

With active update we have to keep the database and the cache consistent ourselves, and there are several questions worth thinking through.

Should we delete the cache or update the cache?
When the database changes, how do we deal with the now-invalid data in the cache: delete it or update it?
Update the cache: the cache is rewritten on every database update, which produces many wasted writes.
Delete the cache: the cache entry is deleted when the database is updated and repopulated on the next query.
So deleting the cache is the more efficient choice.

How do we make sure the cache operation and the database operation succeed or fail together?
Monolithic architecture: use a database transaction.
Distributed architecture: use a distributed transaction solution.

Should we delete the cache first or operate on the database first?

Deleting the cache first: under concurrency the situation shown above is very likely to happen (another thread misses the cache between the delete and the database update and refills it with the old value), which leaves the cache inconsistent with the database.

Operating on the database first and then deleting the cache: suppose the cached data's TTL has just expired. Thread A queries the cache, finds nothing, and reads the old value from the database. Meanwhile thread B updates the database and deletes the cache. If B's two operations complete between A's database read and A's cache write, A then writes the stale value to the cache, and the database and the cache are again inconsistent.

So both schemes can lead to inconsistency between the database and the cache. How do we choose?

Although both schemes can go wrong, the probability of a problem is lower when we operate on the database first and then delete the cache (the window in which a stale value can be written back is much smaller), so that is the scheme to choose.

Personal opinion:
In the "database first, then delete the cache" scheme, the Java call that thread B uses to delete the cache returns a Boolean. If it returns false, the cache entry no longer existed, which is exactly the situation in the figure above (the TTL had already expired when thread A started reading). Could we therefore use this Boolean to decide whether thread B should write the cache itself? (Normally the querying thread repopulates the cache; here we would let B, the thread that operated on the database, do it.) Unfortunately, if B writes the cache before thread A does, A still overwrites it with stale data and the inconsistency remains. We could then have B delay for a while (say 5s or 10s) before writing, so that its write lands after A's and the database and the cache eventually converge; but during those 5s or 10s, any threads C and D that read the cache would still see the stale value. A rough sketch of this delayed-write idea is given below.
So, as far as database/cache consistency goes, unless requests are rejected until the correct value has been written to the cache, users cannot be completely prevented from reading stale data. But rejecting requests is fatal to the user experience, and users may simply give up on the application, so the best we can do is minimize the probability of the problem. (Personal understanding; feel free to leave a comment if you disagree.)
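For illustration only, here is a minimal sketch of the delayed-write idea described above. It is a hypothetical variation, not the scheme implemented in the next code block, and it assumes the stringRedisTemplate, objectMapper and SHOPCACHEPREFIX fields from section 3.1, plus a Spring Data Redis version whose delete method returns a Boolean.

// Hypothetical sketch of the delayed cache write discussed in the personal opinion above
private static final ScheduledExecutorService DELAYED_WRITER =
        Executors.newSingleThreadScheduledExecutor();

public void updateShopWithDelayedWrite(Shop shop) {
    // 1. Update the database first
    updateById(shop);
    // 2. Delete the cache; the Boolean tells us whether the key still existed
    Boolean deleted = stringRedisTemplate.delete(SHOPCACHEPREFIX + shop.getId());
    if (Boolean.FALSE.equals(deleted)) {
        // The key was already gone (e.g. its TTL had expired), so a concurrent reader may be
        // about to write a stale value; schedule a corrective write a few seconds later
        DELAYED_WRITER.schedule(() -> {
            try {
                String json = objectMapper.writeValueAsString(shop);
                stringRedisTemplate.opsForValue()
                        .set(SHOPCACHEPREFIX + shop.getId(), json, 30, TimeUnit.MINUTES);
            } catch (JsonProcessingException e) {
                e.printStackTrace();
            }
        }, 5, TimeUnit.SECONDS);
    }
}

As noted above, readers that arrive during the delay window can still see the stale value, so this only narrows the inconsistency window rather than eliminating it. The article's actual implementation follows.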

@Override
@Transactional
public Result updateShop(Shop shop) {
    Long id = shop.getId();
    if (null == id) {
        return Result.fail("The shop id cannot be empty");
    }
    // Update the database first
    boolean b = updateById(shop);
    // Then delete the cache
    stringRedisTemplate.delete(SHOPCACHEPREFIX + shop.getId());
    return Result.ok();
}

4. Cache penetration

Cache penetration means that the data requested by the client exists neither in the cache nor in the database, so the cache can never be populated and all of these requests go straight to the database.

Solutions:

Cache empty objects

Disadvantages:

It wastes memory.

If an empty object is cached and, while it is still within its TTL, a record with that same id is added to the database in the background, the database and the cache become inconsistent.

Bloom filter (a sketch follows this list)

Advantages:

Low memory usage.

Disadvantages:

More complex to implement.

Possibility of false positives (data that exists is always reported as existing, but data that does not exist may also be let through, so cache penetration can still happen).
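As a rough illustration of the Bloom filter idea, the sketch below puts Guava's BloomFilter in front of the queryById method from section 3.1. Guava, the preload step and the sizing figures are assumptions for illustration, not something the original article specifies.

// Hypothetical pre-filter: ids that definitely do not exist are rejected before Redis or the database is touched
private static final BloomFilter<Long> SHOP_ID_FILTER =
        BloomFilter.create(Funnels.longFunnel(), 1_000_000, 0.01); // ~1% false-positive rate

// Call once for every shop id that actually exists, e.g. at application startup
public void preloadShopIds(List<Long> allShopIds) {
    allShopIds.forEach(SHOP_ID_FILTER::put);
}

public Result queryByIdWithBloomFilter(Long id) {
    if (!SHOP_ID_FILTER.mightContain(id)) {
        // Definitely absent: fail fast
        return Result.fail("Information does not exist");
    }
    // Possibly present: fall through to the cache-aside logic from section 3.1
    return queryById(id);
}

Keep the false-positive case in mind: an id the filter lets through may still be missing, so this is usually combined with caching empty objects.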

4.1 Solving cache penetration (caching an empty object)

public static final String SHOPCACHEPREFIX = "cache:shop:";

@Autowired
private StringRedisTemplate stringRedisTemplate;

// JSON tool
ObjectMapper objectMapper = new ObjectMapper();

@Override
public Result queryById(Long id) {
    // Query the shop cache from Redis
    String cacheShop = stringRedisTemplate.opsForValue().get(SHOPCACHEPREFIX + id);
    // Check whether the data exists in the cache
    if (!StringUtil.isNullOrEmpty(cacheShop)) {
        // It exists in the cache, so return it directly
        try {
            // Convert the JSON string into an object
            Shop shop = objectMapper.readValue(cacheShop, Shop.class);
            return Result.ok(shop);
        } catch (JsonProcessingException e) {
            e.printStackTrace();
        }
    }
    // Reaching here means cacheShop is null or an empty string; a non-null empty string
    // is a cached empty object, so return directly without touching the database
    if (null != cacheShop) {
        return Result.fail("Information does not exist");
    }
    // Not in the cache, so query the database
    Shop shop = getById(id);
    // Not in the database either: cache an empty object with a short TTL and return a failure
    if (null == shop) {
        stringRedisTemplate.opsForValue().set(SHOPCACHEPREFIX + id, "", 2, TimeUnit.MINUTES);
        return Result.fail("Information does not exist");
    }
    // Found in the database: write the information to Redis
    try {
        String shopJSon = objectMapper.writeValueAsString(shop);
        stringRedisTemplate.opsForValue().set(SHOPCACHEPREFIX + id, shopJSon, 30, TimeUnit.MINUTES);
    } catch (JsonProcessingException e) {
        e.printStackTrace();
    }
    // Return the result
    return Result.ok(shop);
}

The scheme above is only a passive defense; we can also take some active measures, for example:

Add complexity to id generation so that ids are harder to guess.

Permission checks.

Rate limiting on hot parameters (a sketch follows this list).
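As a rough illustration of rate limiting on hot parameters, the sketch below uses one Guava RateLimiter per shop id in front of the query method from section 3.1. The library choice and the 100-permits-per-second figure are assumptions for illustration only.

// Hypothetical per-id rate limiter
private final Map<Long, RateLimiter> hotIdLimiters = new ConcurrentHashMap<>();

public Result queryByIdWithRateLimit(Long id) {
    RateLimiter limiter = hotIdLimiters.computeIfAbsent(id, k -> RateLimiter.create(100.0));
    if (!limiter.tryAcquire()) {
        // Over the limit for this id: reject instead of letting the request reach Redis or the database
        return Result.fail("Too many requests, please try again later");
    }
    return queryById(id);
}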

5. Cache avalanche

A cache avalanche happens when a large number of cached keys expire at the same moment, or the Redis service goes down, so a flood of requests reaches the database and puts it under enormous pressure.

Solutions:

Add a random offset to the TTLs of different keys (a sketch follows this list).
If a large number of keys expire at the same time, they most likely all had the same TTL; giving each key a random TTL spreads the expirations out.

Use a Redis cluster to improve service availability.

Add degradation and rate-limiting policies to the cache service.

Add multi-level caching to the business.
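A minimal sketch of the randomized-TTL idea, reusing the stringRedisTemplate from section 3.1; the 0-10 minute offset and the helper name are arbitrary choices for illustration.

private static final Random TTL_RANDOM = new Random();

// Write a value with a base TTL plus a random offset so that keys written together do not all expire together
private void setWithRandomTtl(String key, String value, long baseMinutes) {
    long ttlMinutes = baseMinutes + TTL_RANDOM.nextInt(10);
    stringRedisTemplate.opsForValue().set(key, value, ttlMinutes, TimeUnit.MINUTES);
}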

6. Cache breakdown

The cache breakdown problem, also called the hotspot key problem, happens when a key that is accessed with high concurrency and is expensive to rebuild suddenly expires; countless requests then hit the database in an instant and have a huge impact on it.

Common solutions:

Mutex lock

Logical expiration

Mutex lock:

A lock is used to guarantee that only one thread rebuilds the cached data; threads that fail to acquire the lock sleep for a while and then retry from the cache-query step.

Advantages:

No extra memory consumption (compared with the logical expiration scheme below).

Guarantees consistency.

Disadvantages:

Threads have to wait, so performance suffers.

Risk of deadlock.

Logical expiration:

Logical expiration adds an extra attribute to the cached data that records a logical expiration time. Why use this instead of a TTL to decide whether the data has expired? Because with a TTL, once the key expires the data is gone from the cache, and a thread that fails to acquire the lock would have no old data to return.

The biggest difference from the mutex scheme is that no thread waits: whichever thread gets the lock first rebuilds the cache, while the other threads simply return the old data instead of sleeping and polling for the lock.

The rebuild itself is handed off to a new thread, so that even the thread holding the lock can respond quickly.

Advantages:

Threads do not have to wait, so performance is good.

Disadvantages:

Consistency cannot be guaranteed.

Extra memory consumption in the cache (for the expiration attribute).

More complex to implement.

The two schemes each have their own trade-offs: one favors consistency, the other availability. The choice mainly depends on whether the business cares more about availability or consistency.

6.1 Mutex code

What do we use as the mutex lock?

The setnx command provided by Redis (set the key only if it does not already exist).

First, implement the code that acquires and releases the lock:

/**
 * Try to acquire the lock
 *
 * @param key lock key
 * @return true if the lock was acquired
 */
private boolean tryLock(String key) {
    Boolean flag = stringRedisTemplate.opsForValue().setIfAbsent(key, "1", 10, TimeUnit.SECONDS);
    return BooleanUtil.isTrue(flag);
}

/**
 * Release the lock
 *
 * @param key lock key
 */
private void unLock(String key) {
    stringRedisTemplate.delete(key);
}

Code implementation

public Shop queryWithMutex(Long id) throws InterruptedException {
    // Query the shop cache from Redis
    String cacheShop = stringRedisTemplate.opsForValue().get(SHOPCACHEPREFIX + id);
    // Check whether the data exists in the cache
    if (!StringUtil.isNullOrEmpty(cacheShop)) {
        // It exists in the cache, so return it directly
        try {
            // Convert the JSON string into an object
            return objectMapper.readValue(cacheShop, Shop.class);
        } catch (JsonProcessingException e) {
            e.printStackTrace();
        }
    }
    // A non-null empty string is a cached empty object: return without touching the database
    if (null != cacheShop) {
        return null;
    }
    // Cache breakdown protection: acquire the mutex before rebuilding the cache
    String lockKey = "lock:shop:" + id;
    boolean locked = tryLock(lockKey);
    if (!locked) {
        // Failed to get the lock: sleep briefly, then retry from the cache-query step
        Thread.sleep(50);
        return queryWithMutex(id);
    }
    // The lock is acquired before the try block so that finally only releases a lock we actually hold
    Shop shop;
    try {
        // Not in the cache, so query the database
        shop = getById(id);
        // Not in the database either: cache an empty object and return null
        if (null == shop) {
            stringRedisTemplate.opsForValue().set(SHOPCACHEPREFIX + id, "", 2, TimeUnit.MINUTES);
            return null;
        }
        // Found in the database: write the information to Redis
        try {
            String shopJSon = objectMapper.writeValueAsString(shop);
            stringRedisTemplate.opsForValue().set(SHOPCACHEPREFIX + id, shopJSon, 30, TimeUnit.MINUTES);
        } catch (JsonProcessingException e) {
            e.printStackTrace();
        }
    } finally {
        // Release the mutex
        unLock(lockKey);
    }
    // Return the result
    return shop;
}

6.2 Logical expiration implementation

With logical expiration, no TTL is set on the key.

Code implementation

@Data
public class RedisData {
    private LocalDateTime expireTime;
    private Object data;
}

Because these are hotspot keys, they are basically preloaded into the cache manually. The code is as follows:

/**
 * Write an object with a logical expiration time into the cache
 *
 * @param id            shop id
 * @param expireSeconds logical expiration in seconds
 */
public void saveShopToRedis(Long id, Long expireSeconds) {
    // Query the shop data
    Shop shop = getById(id);
    // Wrap it with a logical expiration time
    RedisData redisData = new RedisData();
    redisData.setData(shop);
    redisData.setExpireTime(LocalDateTime.now().plusSeconds(expireSeconds));
    // Write it to Redis; CACHE_SHOP_KEY is assumed to be the same "cache:shop:" prefix as SHOPCACHEPREFIX used by the query method
    stringRedisTemplate.opsForValue().set(CACHE_SHOP_KEY + id, JSONUtil.toJsonStr(redisData));
}

Logical expiration code implementation

/**
 * Cache breakdown: logical expiration solution
 *
 * @param id shop id
 * @return the shop, or null if it is not cached
 * @throws InterruptedException
 */
public Shop queryWithPassLogicalExpire(Long id) throws InterruptedException {
    // 1. Query the shop cache from Redis
    String cacheShop = stringRedisTemplate.opsForValue().get(SHOPCACHEPREFIX + id);
    // 2. Check whether the data exists in the cache
    if (StringUtil.isNullOrEmpty(cacheShop)) {
        // 3. Not cached: hotspot data is preloaded, so simply return null
        return null;
    }
    // 4. Cached: deserialize and read the logical expiration time
    RedisData redisData = JSONUtil.toBean(cacheShop, RedisData.class);
    JSONObject jsonObject = (JSONObject) redisData.getData();
    Shop shop = JSONUtil.toBean(jsonObject, Shop.class);
    LocalDateTime expireTime = redisData.getExpireTime();
    // 5. Check whether it has logically expired
    if (expireTime.isAfter(LocalDateTime.now())) {
        // 5.1 Not expired: return the cached shop
        return shop;
    }
    // 5.2 Expired: try to acquire the lock and rebuild the cache
    String lockKey = "lock:shop:" + id;
    boolean flag = tryLock(lockKey);
    if (flag) {
        // Lock acquired: rebuild the cache on an independent thread (a thread pool is recommended)
        CACHE_REBUILD_EXECUTOR.submit(() -> {
            try {
                // Rebuild the cache
                this.saveShopToRedis(id, 1800L);
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                // Release the lock
                unLock(lockKey);
            }
        });
    }
    // Lock not acquired (or rebuild still in progress): return the expired data
    return shop;
}

/**
 * Thread pool used to rebuild the cache
 */
private static final ExecutorService CACHE_REBUILD_EXECUTOR = Executors.newFixedThreadPool(10);

This concludes this article on the project practice of using Redis for data caching. I hope it helps you.

