Redis Core Technology and Practice: Practice Reading Notes (20 to End)
2022-06-10 03:33:00 【Tattoo_ Welkin】
Table of contents
- 20 | Why is memory usage still high after deleting data?
- 21 | Buffers: a place that can trigger "tragedy" (tentatively skipped)
- 23 | Cache-aside: how does Redis work?
- 24 | Replacement policies: what if the cache is full? (memory eviction policies)
- 25 | Cache anomalies (part 1): how to solve data inconsistency between the cache and the database?
- 26 | Cache anomalies (part 2): how to solve cache avalanche, breakdown, and penetration?
- 27 | What to do when the cache is polluted?
- 29 | Lock-free atomic operations: how does Redis handle concurrent access?
- 30 | How to implement distributed locks with Redis?
- 31 | Transaction mechanism: can Redis achieve ACID properties? (tentatively skipped)
- 32 | Redis master-slave synchronization and failover: what are the pitfalls? (tentatively skipped)
- 33 | Split-brain: a strange case of data loss
- 36 | What key technologies and practices support flash-sale scenarios in Redis?
- 37 | Data distribution optimization: how to deal with data skew?
- 39 | Redis 6.0 new features: multithreading, client-side caching, and security
20 | Why is memory usage still high after deleting data?
Main cause: memory fragmentation.
Memory fragmentation mainly comes from: (1) the memory allocator's allocation mechanism; (2) key-value pairs of differing sizes combined with delete operations.
How to tell whether there is memory fragmentation?
Check the mem_fragmentation_ratio metric in the output of the INFO memory command, which reports Redis's current memory fragmentation ratio. A value greater than 1 but below 1.5 is reasonable; a value above 1.5 means fragmentation is excessive and should be cleaned up.
How to clean up memory fragmentation?
Enable automatic defragmentation: config set activedefrag yes
This command only switches the automatic cleanup feature on; when cleanup actually runs is governed by two further parameters, each of which sets one trigger condition. Cleanup starts when both conditions are met, and while cleanup is running it stops automatically as soon as either condition no longer holds.
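A minimal sketch of checking the ratio and turning defragmentation on with the redis-py client. The two trigger options named here, active-defrag-ignore-bytes and active-defrag-threshold-lower, come from the stock Redis configuration rather than from these notes, the threshold values are illustrative assumptions, and activedefrag only works on a Redis build compiled with jemalloc defrag support.

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Inspect the current fragmentation ratio (the same metric as INFO memory).
info = r.info("memory")
ratio = info["mem_fragmentation_ratio"]
print(f"mem_fragmentation_ratio = {ratio}")

if ratio > 1.5:
    # Enable automatic defragmentation (equivalent to: config set activedefrag yes).
    r.config_set("activedefrag", "yes")
    # Standard trigger conditions (values are illustrative): start cleanup only
    # when fragmentation wastes at least 100 MB ...
    r.config_set("active-defrag-ignore-bytes", "100mb")
    # ... and the fragmentation ratio exceeds 10%.
    r.config_set("active-defrag-threshold-lower", "10")
```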
21 | Buffers: a place that can trigger "tragedy" (tentatively skipped)
23 | Cache-aside: how does Redis work?
Read-only cache vs. read-write cache
- Read-only cache: the cache serves reads only; all write requests go straight to the back-end database, where inserts, deletes, and updates happen. For deleted or modified data, if Redis has cached the corresponding value, the application must also delete it from the cache so Redis no longer holds it. The next time the application reads this data, it misses the cache, reads the value from the database, and writes it into the cache; subsequent reads are then served directly from the cache, which speeds up access.
- Read-write cache: write requests go through the cache as well. There are two write strategies: synchronous write-through, which updates the cache and the database together, and asynchronous write-back, which updates only the cache and flushes dirty data back to the database later. (A read-path sketch for the read-only case follows.)
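A minimal cache-aside read path with redis-py. The load_from_db/save_to_db helpers and the user:{id} key scheme are hypothetical placeholders, not names from the original notes.

```python
import json
import redis

r = redis.Redis(decode_responses=True)

def load_from_db(user_id):
    return {"id": user_id, "name": "example"}   # hypothetical DB query stub

def save_to_db(user_id, value):
    pass                                        # hypothetical DB update stub

def get_user(user_id):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:              # cache hit: serve straight from Redis
        return json.loads(cached)
    value = load_from_db(user_id)       # cache miss: read from the database...
    r.set(key, json.dumps(value), ex=3600)  # ...and fill the cache with a TTL
    return value

def update_user(user_id, value):
    save_to_db(user_id, value)          # writes go to the database first
    r.delete(f"user:{user_id}")         # then the stale cache entry is deleted
```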

24 | Replacement policies: what if the cache is full? (memory eviction policies)

What practical experience applies when choosing a policy?
- Prefer the allkeys-lru policy. It takes full advantage of LRU, the classic caching algorithm, keeping the most recently accessed data in the cache and improving the application's access performance. If your business data has a clear hot/cold split, allkeys-lru is the recommended choice. (A configuration sketch follows this list.)
- If the business accesses data with roughly uniform frequency and there is no clear hot/cold split, use allkeys-random and simply evict at random.
- If the business has pinned data, such as pinned news or videos, use volatile-lru and set no expiration time on the pinned data. The pinned data is then never evicted, while the rest (which has TTLs) is evicted according to the LRU rule.
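A sketch of the corresponding configuration via redis-py; the 2 GB memory cap is an illustrative assumption. The pinned-data trick works because volatile-lru only ever evicts keys that carry a TTL.

```python
import redis

r = redis.Redis(decode_responses=True)

# Cap memory use and pick an eviction policy (same effect as redis.conf's
# `maxmemory` and `maxmemory-policy` directives).
r.config_set("maxmemory", "2gb")
r.config_set("maxmemory-policy", "allkeys-lru")

# For the pinned-data scenario: volatile-lru only evicts TTL-bearing keys,
# so pinned keys are simply stored without one.
# r.config_set("maxmemory-policy", "volatile-lru")
# r.set("top:news:1", "...")            # no TTL: never evicted
# r.set("news:42", "...", ex=3600)      # TTL set: eligible for eviction
```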
How should evicted data be handled?
Once a piece of data is selected for eviction: if it has not been modified, it is simply deleted; if it has been modified, it must first be written back to the database.
25 | Cache anomalies (part 1): how to solve data inconsistency between the cache and the database?
Read-only cache
Inconsistency cases and their solutions
First, recall the processing flow of a read-only cache:
- Insert, delete, and update requests reach the database first for processing.
- For deletes and updates, the corresponding cached data must additionally be removed from the cache.
Because these are two separate operations, inconsistency can still arise between them, so we still need to ensure that the cache and the database are updated in step.
Read-write cache
Inconsistency cases under synchronous write-through and their solutions
Synchronous write-through requires the cache and the database to be updated together, so the business application should use a transaction mechanism to make the two updates atomic: either both are updated, or neither is and an error is returned so the operation can be retried. Otherwise synchronous write-through cannot be achieved.
How to solve the data inconsistency problem?
Since the cache and the database must both be updated, there are only two possible orderings:
(1) operate on the cache first, then the database;
(2) operate on the database first, then the cache.
Operating on the cache first, then the database
The possible anomaly: after thread A deletes the cached value but before it updates the database, a concurrent read misses the cache, loads the old value from the database, and writes it back into the cache, so stale data remains cached even after A's database update completes.
Solution: delayed double delete of the cache.
After thread A updates the database value, let it sleep for a short while and then perform a second cache delete, which removes any stale value that concurrent readers re-cached in the meantime.
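A minimal sketch of the delayed double delete, assuming a hypothetical update_db helper and a 500 ms sleep chosen to outlast in-flight stale reads (both are assumptions, not values from the notes). In practice the second delete is often issued asynchronously so the write path is not blocked by the sleep.

```python
import time
import redis

r = redis.Redis(decode_responses=True)

def update_db(key, value):
    pass                          # hypothetical database update stub

def update_with_delayed_double_delete(key, new_value):
    r.delete(key)                 # 1st delete: drop the soon-to-be-stale entry
    update_db(key, new_value)     # update the database value
    time.sleep(0.5)               # wait for in-flight reads that may have
                                  # re-cached the old value (assumed 500 ms)
    r.delete(key)                 # 2nd delete: remove any re-cached stale value
```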
Operating on the database first, then the cache

Such questions , Generally, it has little impact , You can ignore ~, In the follow-up work, we'll talk about it again .
26 | Cache anomalies (part 2): how to solve cache avalanche, breakdown, and penetration?

27 | What to do when the cache is polluted?
29 | Lock-free atomic operations: how does Redis handle concurrent access?
The main steps a client takes to modify data are:
- read the data from Redis into local memory;
- modify it locally;
- write it back to Redis.
This process is called a "read-modify-write" (RMW) operation. When multiple clients perform RMW operations on the same data, the code involved must execute atomically; the RMW code that accesses the same data is called the critical section.
How to solve this? Two ways, both sketched below:
- atomic single commands such as INCR and DECR;
- Lua scripts.
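Both approaches via redis-py. The counter key names and the rate-limiter-style Lua script are illustrative assumptions, not code from the original notes.

```python
import redis

r = redis.Redis(decode_responses=True)

# 1) Single-command atomicity: INCR/DECR execute as one atomic step inside
#    Redis, so concurrent clients cannot interleave within the increment.
r.set("page:views", 0)
r.incr("page:views")          # atomic +1
r.decr("page:views")          # atomic -1

# 2) Lua script: Redis runs the whole script atomically, so a multi-step
#    critical section (read, check, write) cannot be interleaved either.
lua = """
local current = redis.call('INCR', KEYS[1])
if current == 1 then
    redis.call('EXPIRE', KEYS[1], ARGV[1])
end
return current
"""
count = r.eval(lua, 1, "ip:127.0.0.1:reqs", 60)  # requests per 60 s window
```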
30 | How to implement distributed locks with Redis?
Implementing a distributed lock on a single Redis node
In essence, this just means storing a key:value pair in Redis.
Locking consists of three operations (reading the lock variable, checking its value, and setting it to 1), and these three operations must be atomic. How can we guarantee atomicity?
Atomicity can be achieved in two ways:
- with a single command;
- with a Lua script.
Here we can directly combine the SETNX and DEL commands to implement the lock and unlock operations:
- SETNX: during execution it checks whether the key exists; if not, it sets the value; if the key already exists, it makes no change.
- DEL: deletes the lock variable.
So what problems does this have?
Using SETNX and DEL carries two risks:
- If a client hits an exception while operating on the shared data after a successful SETNX, the final DEL that releases the lock is never executed. The lock is then held by that client indefinitely; other clients can never acquire it, cannot access the shared data, and cannot carry out their subsequent operations, which hurts the business application.
For this problem, an effective solution is to give the lock variable an expiration time. Then even if the lock-holding client fails and cannot actively release the lock, Redis deletes the lock variable once it expires, after which other clients can request the lock again, so locking is never permanently blocked.
- Suppose client A locks successfully with SETNX and sets a 10 s timeout on the lock, then starts running its business logic. For some reason the logic takes longer than 10 s, so the lock expires and is released automatically while A is still running. Client B then locks successfully, sets its own timeout, and starts its business logic. While B is still running, A finishes and calls DEL to release "its" lock, thereby releasing client B's lock.
In essence, the fix is to be able to distinguish lock operations coming from different clients.
The SETNX command creates the key and sets its value only if the key does not yet exist ("set if not exists"). To achieve the same effect, Redis gives the SET command an NX option: with NX, SET performs the assignment only when the key does not exist. SET can also take the EX or PX option to set the key's expiration time (in seconds or milliseconds).
For example, in the following command, SET creates and assigns key only when key does not exist, and key's lifetime is set by the seconds or milliseconds option value:
SET key value [EX seconds | PX milliseconds] [NX]
Therefore, the locking operation can be:

```
// Lock; unique_value is the client's unique identifier
SET lock_key unique_value NX PX 10000
```

And unlocking can use the following Lua script:

```lua
-- KEYS[1] is lock_key; ARGV[1] is the current client's unique identifier;
-- both are passed in as parameters when the Lua script is executed.
-- When releasing the lock, compare unique_value to avoid releasing another client's lock.
if redis.call("get", KEYS[1]) == ARGV[1] then
    return redis.call("del", KEYS[1])
else
    return 0
end
```

Because the unlock logic also spans multiple operations (reading the lock variable, checking its value, deleting it), it must be atomic as well, which is why it goes into a Lua script (unlock.script) executed with: redis-cli --eval unlock.script lock_key , unique_value
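The same lock and unlock flow expressed with the redis-py client, as a sketch for a single Redis node; the key names and timeout mirror the commands above, and the uuid token plays the role of unique_value.

```python
import uuid
import redis

r = redis.Redis(decode_responses=True)

UNLOCK_SCRIPT = """
if redis.call("get", KEYS[1]) == ARGV[1] then
    return redis.call("del", KEYS[1])
else
    return 0
end
"""

def acquire_lock(lock_key, ttl_ms=10000):
    token = str(uuid.uuid4())                  # unique_value for this client
    # SET lock_key token NX PX ttl_ms, as a single atomic command.
    if r.set(lock_key, token, nx=True, px=ttl_ms):
        return token
    return None                                # someone else holds the lock

def release_lock(lock_key, token):
    # The Lua script runs atomically: only the holder's token deletes the key.
    return r.eval(UNLOCK_SCRIPT, 1, lock_key, token) == 1
```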
Implementing a highly reliable distributed lock with multiple Redis nodes
To keep the lock working even when a Redis instance fails, Antirez, the developer of Redis, proposed the Redlock distributed-locking algorithm.
The basic idea of Redlock is to have the client request the lock from multiple independent Redis instances in turn. If the client completes the locking operation successfully on more than half of the instances, it has acquired the distributed lock; otherwise locking fails. This way, even if a single Redis instance fails, the lock variable is still held on the other instances, so clients can still lock normally and the lock variable is not lost.
There are three steps (sketched below):
- The client records the current time.
- The client performs the locking operation on the N Redis instances, one after another.
- Once the client has finished the locking operation on all Redis instances, it computes the total time the whole locking process took; locking counts as successful only if a majority of instances were locked and the total time is less than the lock's validity time.
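A minimal sketch of these three steps, assuming three local instances on ports 6379-6381 (an assumption for illustration); a production implementation also needs per-instance connection timeouts and must release the lock on every instance when acquisition fails.

```python
import time
import uuid
import redis

# Hypothetical list of N = 3 independent Redis instances.
INSTANCES = [redis.Redis(port=p, decode_responses=True)
             for p in (6379, 6380, 6381)]

def redlock_acquire(lock_key, ttl_ms=10000):
    token = str(uuid.uuid4())
    start = time.monotonic()                # step 1: record the current time
    acquired = 0
    for inst in INSTANCES:                  # step 2: lock each instance in turn
        try:
            if inst.set(lock_key, token, nx=True, px=ttl_ms):
                acquired += 1
        except redis.RedisError:
            pass                            # a failed instance simply doesn't count
    elapsed_ms = (time.monotonic() - start) * 1000  # step 3: total locking time
    # Success needs a majority of instances AND time left within the lock's TTL.
    if acquired >= len(INSTANCES) // 2 + 1 and elapsed_ms < ttl_ms:
        return token
    return None     # failed: the caller should unlock all instances it touched
```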

31 | Transaction mechanism: can Redis achieve ACID properties? (tentatively skipped)
32 | Redis master-slave synchronization and failover: what are the pitfalls? (tentatively skipped)
33 | Split-brain: a strange case of data loss
Split-brain means that in a master-slave cluster, two master nodes exist at the same time and both can receive write requests. The most direct effect is that clients don't know which master to write data to, so different clients end up writing to different masters; in serious cases, split-brain goes on to cause data loss.
So why does split-brain cause data loss?
After the master-slave switchover, once a slave has been promoted to the new master, the sentinel makes the original master execute the slave of command and resynchronize with the new master. In the final stage of the resulting full synchronization, the original master must clear its local data and load the RDB file sent by the new master; as a result, the data newly written to the original master during the switchover is lost.
So the original master loses the data saved during the switchover!
How to solve this?
Since the problem is that the original master can still receive requests after a false failure, we look through the master-slave cluster's configuration items for a setting that restricts the master from accepting requests.
Searching the configuration, we find that Redis provides two configuration items that limit the master's request handling: min-slaves-to-write and min-slaves-max-lag.
- min-slaves-to-write: the minimum number of slaves the master must be able to synchronize data to;
- min-slaves-max-lag: the maximum delay (in seconds) of the ACK messages slaves send to the master during data replication.

Suggested settings: with K slaves, set min-slaves-to-write to K/2+1 (or to 1 if K equals 1) and min-slaves-max-lag to ten-odd seconds (for example 10-20 s). Under this configuration, if more than half of the slaves lag the master's replication ACKs by more than that many seconds, the master is forbidden from accepting client write requests.
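For example, with K = 3 slaves, via redis-py (a sketch; newer Redis versions also accept the min-replicas-* spelling of these options):

```python
import redis

r = redis.Redis(decode_responses=True)

# K = 3 slaves: require at least K/2+1 = 2 connected slaves whose replication
# ACKs lag by at most 12 s, or the master stops accepting client writes.
r.config_set("min-slaves-to-write", "2")
r.config_set("min-slaves-max-lag", "12")
```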
36 | What key technologies and practices support flash-sale scenarios in Redis?
- Before the sale: users keep refreshing the product detail page. Make the page elements of the product detail page static, then serve those static elements from a CDN or the browser cache.
- During the sale: users click the buy button on the product detail page, which triggers the concrete operations: inventory check, inventory deduction, and order processing. The pressure falls almost entirely on the inventory check, which is where Redis is needed to improve performance. Order processing can run in the database, but inventory deduction cannot be left to the back-end database: once the inventory check finds stock remaining, we decrement the inventory in Redis right away. Moreover, to keep requests from reading a stale inventory value, the inventory check and the inventory deduction must be atomic as a pair (see the Lua sketch after this list).
- After the sale: some users may still refresh the product page hoping another buyer cancels, and successful buyers refresh their order details to track progress. But request volume in this phase drops sharply, and the servers can generally cope, so no special handling is needed.
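A minimal sketch of the atomic check-and-deduct as a single Lua script via redis-py; the seckill:stock:{item} key name and the quantities are illustrative assumptions.

```python
import redis

r = redis.Redis(decode_responses=True)

# "Check stock, then deduct" runs as one Lua script, so no request can read a
# stale inventory value between the check and the decrement.
DEDUCT_SCRIPT = """
local stock = tonumber(redis.call('GET', KEYS[1]))
if stock and stock >= tonumber(ARGV[1]) then
    redis.call('DECRBY', KEYS[1], ARGV[1])
    return 1   -- deduction succeeded; proceed to order processing in the DB
end
return 0       -- sold out (or key missing); reject the request
"""

r.set("seckill:stock:1001", 100)                        # preload the inventory
ok = r.eval(DEDUCT_SCRIPT, 1, "seckill:stock:1001", 1)  # try to buy 1 unit
```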
37 | Data distribution optimization: how to deal with data skew?
There are two kinds of data skew:
- Data volume skew: in some cases, data is unevenly distributed across instances, and one instance holds a disproportionately large share of it.
- Data access skew: the amount of data on each cluster instance differs little, but one instance holds hot data that is accessed very frequently.
Causes of data volume skew and countermeasures:
- Skew caused by bigkeys: when generating data at the business layer, try to avoid packing too much data into one key-value pair. Besides that, if the bigkey happens to be a collection type, there is another option: split the bigkey into many small collection-typed keys scattered across different instances.
- Skew caused by uneven Slot distribution: migrate slots.
- Skew caused by Hash Tags: (@TODO temporarily skipped)
Causes of data access skew and countermeasures:
- Generally, hot data is mostly served by reads; in that case we can handle it with multiple copies of the hot data, as sketched below.
Concretely, we replicate the hot data into multiple copies and add a random prefix to each copy's key so that it does not map to the same Slot as the other copies. Multiple copies of the hot data can then serve requests at the same time, and because the copies' keys differ, they map to different Slots. When assigning those Slots to instances, take care to place them on different instances; the access pressure on the hot data is then spread across instances.
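A minimal sketch of the prefixed copies, assuming five copies and read-mostly data that tolerates slight staleness between copies (the copy count and key scheme are illustrative assumptions):

```python
import random
import redis

r = redis.Redis(decode_responses=True)

COPIES = 5                                     # illustrative number of copies

def write_hot_key(key, value):
    # Store one copy per prefix; different prefixes hash to different Slots,
    # which can then be assigned to different instances.
    for i in range(COPIES):
        r.set(f"{i}:{key}", value)

def read_hot_key(key):
    # Each read picks a copy at random, spreading the load across instances.
    i = random.randrange(COPIES)
    return r.get(f"{i}:{key}")
```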

39 | Redis 6.0 new features: multithreading, client-side caching, and security
