
Core knowledge of distributed cache

2022-07-07 00:12:00 chandfy

A note up front

This article collects the core knowledge of distributed caching. You can use it to review common interview questions or to deepen your understanding. I suggest reading it as a self-quiz: try to answer each question first, then check the content below to fill in any gaps. Questions and discussion are welcome.

Previously published

  1. Essential core knowledge of the new JavaSE, click to learn
  2. Essential core knowledge of concurrent programming, click to learn
  3. Middleware: message queues
  4. MySQL core knowledge
  5. Core knowledge of the HTTP protocol
  6. Spring/MyBatis core knowledge
    Each of these articles is updated and revised as new insights come up.

Overview of this article

Why Redis, and not another cache such as Memcached?
What are Redis's common data structures, and what are their usage scenarios?
Redis is single-threaded, so why is it so fast?
Redis persistence options, and the differences between them
Common cache eviction policies
Cache breakdown, penetration, and avalanche: differences and solutions

Why Redis, and not another cache such as Memcached?

Redis offers richer data structures than Memcached and can replace it in almost all cases.
The Redis community is more active, its performance is strong, and it also supports features such as persistence.
Most importantly, choose the tool that fits your business.

What are Redis's common data structures, and what are their usage scenarios?

  1. String
    Plain key-value storage; usage scenarios include caching simple values and counters.

  2. Hash
    Stores objects: one key holds multiple field-value pairs; usage scenarios include storing structured objects such as user profiles.

  3. List
    List-like, ordered data; usage scenarios include timelines and simple message queues.

  4. Set
    Unordered, deduplicated collections supporting intersection, union, and so on; for example, finding mutual friends, modeling social relations, or deduplicating data.

  5. Sorted set (zset)

    Ordered, deduplicated collections; typically used for leaderboards.
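The mutual-friends scenario above maps directly onto Redis set intersection (the SINTER command). A minimal pure-Python sketch of the same logic, using built-in sets in place of a Redis server (the names and data are illustrative):

```python
# Each user's friend list is a set; mutual friends are the
# intersection of two sets, just like SINTER in Redis.
friends = {
    "alice": {"bob", "carol", "dave"},
    "bob":   {"alice", "carol", "erin"},
}

def mutual_friends(a, b):
    """Equivalent in spirit to: SINTER friends:a friends:b"""
    return friends[a] & friends[b]

print(sorted(mutual_friends("alice", "bob")))  # ['carol']
```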

Redis is single-threaded, so why is it so fast?

It is memory-based: most requests are pure in-memory operations, so the CPU is not Redis's bottleneck (which is why a single thread suffices).
A single thread avoids unnecessary CPU context switches and contention such as locking.
The underlying implementation uses an I/O multiplexing model with non-blocking I/O.
Redis 6 and later support multithreaded I/O, but it is disabled by default.
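The I/O multiplexing idea can be sketched with Python's selectors module: one thread registers sockets with an event loop and wakes only when one becomes readable, analogous to Redis's epoll-based loop. This is an illustration of the model, not Redis code:

```python
import selectors
import socket

# One event loop watching sockets: single thread, non-blocking I/O.
sel = selectors.DefaultSelector()
client, server = socket.socketpair()
for s in (client, server):
    s.setblocking(False)

sel.register(server, selectors.EVENT_READ)
client.send(b"PING")

# select() returns only the sockets that are actually ready,
# so the thread never blocks on any single connection.
for key, _mask in sel.select(timeout=1):
    reply = key.fileobj.recv(16)

client.close()
server.close()
print(reply)  # b'PING'
```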

Redis persistence options, and the differences between them

Redis supports both AOF and RDB persistence.

AOF
    Logs every write and delete the server processes (read operations are not recorded), as text, in the order received.
    Supports near-second-level durability and has good compatibility. For the same data set, the AOF file is usually larger than the RDB file, so recovery is slower than from RDB.

RDB
    Writes a point-in-time snapshot of the in-memory data set to disk at configured intervals; you can schedule when to archive data, but it cannot provide real-time durability.
    The file is compact and small, which makes RDB a very good choice for disaster recovery. Compared with the AOF mechanism, when the data set is large, restoring from an RDB snapshot is faster than replaying an AOF log.
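The AOF recovery model can be sketched in a few lines: every write is appended to a log, and recovery rebuilds the data set by replaying the log from the start. This is a toy illustration of the principle, not Redis's actual file format:

```python
def apply(store, op):
    """Apply one logged operation to the in-memory store."""
    cmd, key, *rest = op
    if cmd == "SET":
        store[key] = rest[0]
    elif cmd == "DEL":
        store.pop(key, None)

log = []

def execute(store, op):
    """Run a write command and append it to the log (AOF style)."""
    apply(store, op)
    log.append(op)

db = {}
execute(db, ("SET", "user:1", "alice"))
execute(db, ("SET", "user:2", "bob"))
execute(db, ("DEL", "user:2"))

# Simulated restart: rebuild an empty store by replaying the log.
recovered = {}
for op in log:
    apply(recovered, op)

print(recovered)  # {'user:1': 'alice'}
```

Because every operation is replayed one by one, recovery time grows with the length of the log, which is why restoring a large AOF file is slower than loading an RDB snapshot.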

Common cache eviction policies

  1. First in, first out (FIFO)
    Newly cached data is appended to the tail of a FIFO queue and moves through the queue in insertion order; when eviction is needed, the data at the head of the queue is removed.

  2. Least recently used (LRU)
    Evicts data based on its access history, on the assumption that recently accessed data is more likely to be accessed again.
    New data is inserted at the head of a linked list; whenever cached data is accessed, it is moved to the head; when the list is full, the data at the tail is evicted.

  3. Least frequently used (LFU)
    Evicts data based on its access frequency, on the assumption that frequently accessed data is more likely to be accessed again.
    Data is kept in a list sorted by frequency; each access increments the item's counter, and when eviction is needed, the item with the lowest frequency is removed.
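The LRU scheme described above can be sketched with Python's OrderedDict, which plays the role of the linked list: move_to_end() marks a key as most recently used, and popitem(last=False) evicts the least recently used one. A minimal sketch, not a production cache:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: insertion order tracks recency."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)   # accessed -> most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")           # "a" is now the most recently used key
cache.put("c", 3)        # over capacity -> "b" is evicted
print(list(cache.data))  # ['a', 'c']
```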

Cache breakdown, penetration, and avalanche: differences and solutions

Cache breakdown (a single hot key expires)
    The data is missing from the cache but present in the database. For a hot key, at the moment its cache entry expires, a burst of concurrent requests for that key all fall through to the DB, causing a sudden spike in DB load.
    The difference from a cache avalanche is that breakdown concerns a single key's cache, while an avalanche concerns many keys.
    Prevention: make hot keys non-expiring and refresh them with a scheduled task, or use a mutex so only one request rebuilds the cache entry.
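The mutex approach can be sketched as follows: when the hot key is missing, only the request holding the lock rebuilds it, and everyone else reads the refreshed value. The dict-based cache and load_from_db() are illustrative stand-ins, not a real cache API:

```python
import threading

cache = {}
rebuild_lock = threading.Lock()
db_hits = 0

def load_from_db(key):
    """Stand-in for the expensive database query."""
    global db_hits
    db_hits += 1               # count how often the DB is actually hit
    return f"value-of-{key}"

def get_with_mutex(key):
    value = cache.get(key)
    if value is not None:
        return value
    with rebuild_lock:
        # Double-check: another thread may have rebuilt the entry
        # while this one was waiting for the lock.
        value = cache.get(key)
        if value is None:
            value = load_from_db(key)
            cache[key] = value
    return value

threads = [threading.Thread(target=get_with_mutex, args=("hot",))
           for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(db_hits)  # 1: only one thread fell through to the DB
```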

Cache penetration (the queried data does not exist at all)
    A query asks for data that does not exist, so the cache never hits, and for fault-tolerance reasons the miss is not cached either, for example a request with the nonexistent id "-1".
    When the storage layer finds nothing, nothing is written to the cache, so every request for that nonexistent data goes to the storage layer, defeating the purpose of the cache. A flood of queries for nonexistent data can bring the DB down; attackers exploit this by hammering the application with keys that do not exist.
    Prevention: add validation at the interface layer to reject implausible values, and cache the miss itself: when the database has no such row, write the key with a null value and a short expiration time, so the same key cannot be used to attack repeatedly.
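Caching the "not found" result can be sketched like this: a database miss is stored as None with a short TTL, so repeated queries for the same nonexistent key stop reaching the DB. The dict-based cache and fake_db are illustrative stand-ins:

```python
import time

NULL_TTL = 5          # seconds to remember that a key does not exist
cache = {}            # key -> (value, expires_at)
fake_db = {"user:1": "alice"}
db_lookups = 0

def get(key):
    global db_lookups
    entry = cache.get(key)
    if entry is not None and entry[1] > time.time():
        return entry[0]           # may be None: a cached "not found"
    db_lookups += 1
    value = fake_db.get(key)      # None when the row does not exist
    cache[key] = (value, time.time() + NULL_TTL)
    return value

get("user:999")    # first miss falls through to the DB
get("user:999")    # served from the cached null, no DB lookup
print(db_lookups)  # 1
```

The short TTL matters: it bounds how long a key can serve stale "not found" answers if the row is later created.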

 

Cache avalanche (many hot keys expire at once)
    A large number of keys are given the same expiration time, so all of those cache entries expire simultaneously; the resulting burst of requests hits the DB at once, the load surges, and the system avalanches.
    Prevention: randomize the expiration times of cached data so large batches do not expire together, or make hot keys non-expiring and refresh them with a scheduled task.
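Randomizing expiration times is usually done by adding a small random jitter to a base TTL, so expirations spread out over a window instead of landing on the same instant. BASE_TTL and JITTER below are illustrative values, not Redis defaults:

```python
import random

BASE_TTL = 3600   # base expiration in seconds (1 hour)
JITTER = 300      # up to 5 extra minutes, chosen per key

def ttl_with_jitter():
    """TTL to pass to the cache when setting a key."""
    return BASE_TTL + random.randint(0, JITTER)

# Keys written together now expire spread over a 5-minute window.
ttls = [ttl_with_jitter() for _ in range(1000)]
print(min(ttls) >= BASE_TTL and max(ttls) <= BASE_TTL + JITTER)  # True
```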
Original site

Copyright notice
This article was written by [chandfy]; please keep the original link when reposting. Thank you.
https://yzsam.com/2022/02/202202131017370197.html