
Redis memory eviction mechanism

2022-06-26 00:06:00 · Just put a flower in heaven and earth

Why Redis needs memory eviction

Redis is a high-performance in-memory NoSQL database, so its capacity is bounded by the amount of memory it is allowed to use. Once Redis exceeds the physical memory limit, memory starts being swapped to and from disk and performance drops sharply. At that point, evicting useless data to free space for new data becomes essential.

In production, Redis starts applying an eviction policy once memory usage exceeds a threshold, which is set through the maxmemory configuration parameter. When the memory actually in use exceeds maxmemory, the eviction policy chosen by the developer decides how to free space so that reads and writes can continue.
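
As a minimal illustration (the 2gb limit and the policy below are placeholder values, not recommendations from this article), the two related settings can be combined in redis.conf like this:

# redis.conf, illustrative values only
maxmemory 2gb
maxmemory-policy allkeys-lru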

Note also that, besides the key-value pairs themselves, Redis incurs additional memory overhead:

  • Garbage data and expired keys that have not yet been reclaimed
  • Space that is not released in time because of the dictionary's progressive rehash
  • Redis's own bookkeeping, including the overhead of the underlying data structures, client metadata, and read/write buffers
  • The extra cost of master-slave replication and bgsave
  • Others

Redis memory eviction policies

  • noeviction: return an error when memory usage exceeds the configured limit; never evict any key
  • allkeys-lru: when the limit is exceeded on a write, evict the least recently used key among all keys, using the LRU algorithm
  • volatile-lru: when the limit is exceeded on a write, evict the least recently used key among the keys that have an expiration time set
  • allkeys-random: when the limit is exceeded on a write, evict a randomly chosen key among all keys
  • volatile-random: when the limit is exceeded on a write, evict a randomly chosen key among the keys that have an expiration time set
  • volatile-ttl: evict the key closest to expiring among the keys that have an expiration time set
  • volatile-lfu: evict the least frequently used key among the keys that have an expiration time set
  • allkeys-lfu: evict the least frequently used key among all keys

If no key has an expire set, the precondition (prerequisite) of the volatile-* policies is never met, so volatile-lru, volatile-random and volatile-ttl behave essentially the same as noeviction (nothing gets deleted).

Choose the eviction policy that fits the characteristics of your system. The policy can also be changed dynamically at runtime with a command, and you can monitor cache hits and misses through the INFO command to tune it.
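
For example, the policy can be switched at runtime with CONFIG SET and checked with CONFIG GET, and the keyspace_hits and keyspace_misses counters in the INFO stats section show how well the cache is hitting (the policy name below is only an example):

CONFIG SET maxmemory-policy allkeys-lru
CONFIG GET maxmemory-policy
INFO stats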

Generally speaking:

  • If your data splits into hot and cold data, that is, some keys are read and written much more often than others, the allkeys-lru policy is recommended. If you are not sure about the access pattern of your workload, allkeys-lru is also a good default.
  • If you cycle through all keys, or every key is accessed with roughly the same frequency, allkeys-random can be used, since all elements are read and written with roughly equal probability.
  • If you want Redis to pick eviction candidates by TTL, use the volatile-ttl policy.
  • The main scenario for volatile-lru and volatile-random is an instance that holds both cached data and persistent keys. Generally, in such a scenario you should use two separate Redis instances instead.

It is worth mentioning that setting an expire consumes extra memory, so the allkeys-lru policy uses memory more efficiently, because keys then no longer need an expiration time at all.

The Redis memory eviction process

  • A client runs a command that needs to allocate more memory.
  • Redis checks memory usage; if the memory actually in use already exceeds maxmemory, Redis selects keys to evict according to the eviction policy configured by the user.
  • The selected keys are evicted, after which the original command can proceed. A simplified sketch of this loop follows.
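
A heavily simplified, pseudocode-style C sketch of that loop (used_memory, pick_candidate_by_policy and delete_key are invented helper names; the real logic lives in freeMemoryIfNeeded / performEvictions in the Redis source):

/* Illustrative only: keep evicting until usage drops back under the limit. */
int evict_until_under_limit(void) {
    while (used_memory() > server.maxmemory) {
        /* Sample candidate keys according to the configured policy
         * (LRU / LFU / random / TTL) and pick the best victim. */
        robj *victim = pick_candidate_by_policy();
        if (victim == NULL) return C_ERR;   /* nothing evictable: behaves like noeviction */
        delete_key(victim);
    }
    return C_OK;
}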

How Redis implements LRU

Redis maintains a global 24-bit clock, which can be thought of as a truncated timestamp of the current system time; it is refreshed at regular intervals. Every key object also carries its own 24-bit clock, which is set from the global clock when the key object is created or accessed. To perform LRU, Redis takes the current global clock and evicts the key whose internal clock is farthest from it (largest difference). Note that since the clock is only 24 bits wide, at one-second resolution it wraps around after roughly 194 days, so a key's clock can end up larger than the global clock; when that happens, the idle time is computed by adding across the wrap-around instead of simply subtracting, so the longest-idle key is still found.

struct redisServer {
    pid_t pid;
    char *configfile;
    /* global LRU clock, LRU_BITS = 24 bits wide */
    unsigned lruclock:LRU_BITS;
    ...
};
typedef struct redisObject {
    unsigned type:4;
    unsigned encoding:4;
    /* per-key clock: LRU time relative to the global lruclock
     * (or, under LFU, 16 bits of minutes plus an 8-bit counter) */
    unsigned lru:LRU_BITS;
    int refcount;
    void *ptr;
} robj;
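
To make the wrap-around handling concrete, this is roughly how the idle time of an object is computed (it mirrors the estimateObjectIdleTime function in the Redis source; LRU_CLOCK() returns the current global clock, LRU_CLOCK_MAX is the largest 24-bit value, and LRU_CLOCK_RESOLUTION is the clock resolution in milliseconds):

/* Idle time of an object, handling 24-bit clock wrap-around. */
unsigned long long estimateObjectIdleTime(robj *o) {
    unsigned long long lruclock = LRU_CLOCK();
    if (lruclock >= o->lru) {
        /* Normal case: the global clock is ahead of the object's clock. */
        return (lruclock - o->lru) * LRU_CLOCK_RESOLUTION;
    } else {
        /* The global clock has wrapped around: add across the wrap. */
        return (lruclock + (LRU_CLOCK_MAX - o->lru)) * LRU_CLOCK_RESOLUTION;
    }
}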

Redis's LRU is not implemented like a textbook LRU. A conventional LRU keeps a queue and always evicts exactly the least recently used element, but Redis maintains no such queue. Instead, depending on the configured policy, it randomly samples N keys (N is configurable) from all keys, or from the keys that have an expiration time set, and then evicts the one among those N samples that has gone unused the longest.
The figure below compares a conventional LRU eviction policy with Redis's random-sampling approach. Light gray marks keys that have been evicted, dark gray marks keys that survived, and green marks newly added keys; the higher a key sits, the earlier it was added. As the figure shows, in Redis 3.0 with the sample size set to 10, the longest-unused keys are evicted quite accurately, on par with a conventional LRU.
[Figure: conventional LRU versus Redis's random-sampling eviction]
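
The sample size is controlled by the maxmemory-samples directive (the default is 5; the value below simply mirrors the experiment described above):

maxmemory-samples 10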

How Redis implements LFU

LFU was introduced in Redis 4.0. LRU, the least-recently-used criterion, is in fact not always accurate. Consider the situation below: if a key must be deleted at the point marked |, then A has been idle the longest, but A is actually used more frequently than B, so the reasonable choice would be to evict B. LFU was created to handle exactly this case.
[Figure: access timelines of keys A and B]
LFU splits the key object's original 24-bit internal clock into two parts: the high 16 bits still hold a clock, and the low 8 bits hold a counter. With only 16 bits, keeping the clock in seconds would wrap far too quickly, so it is kept in minutes instead. The low 8 bits record how frequently the key is accessed; 8 bits can only count up to 255, so Redis does not increase the counter linearly. Instead it uses a probabilistic formula, and two configuration parameters (lfu-log-factor and lfu-decay-time) control how quickly the counter grows and decays.
The table referenced below shows the number of hits on a key from left to right and the log factor from top to bottom. With a factor of 100, it takes about 10 million hits before the 8-bit counter saturates at 255.
[Table: hits required to saturate the 8-bit counter for different lfu-log-factor values]

uint8_t LFULogIncr(uint8_t counter) {
    if (counter == 255) return 255;                     /* already saturated */
    double r = (double)rand()/RAND_MAX;                 /* random number in [0, 1] */
    double baseval = counter - LFU_INIT_VAL;            /* distance above the initial value */
    if (baseval < 0) baseval = 0;
    double p = 1.0/(baseval*server.lfu_log_factor+1);   /* probability shrinks as the counter grows */
    if (r < p) counter++;                               /* increment only with probability p */
    return counter;
}
lfu-log-factor 10
lfu-decay-time 1

The discussion above assumes the key keeps being hit. If a key goes several minutes without a hit, the 8-bit counter is decremented according to how many minutes have elapsed; the decay is controlled by the lfu-decay-time parameter.

unsigned long LFUDecrAndReturn(robj *o) {
    unsigned long ldt = o->lru >> 8;        /* high 16 bits: last access time, in minutes */
    unsigned long counter = o->lru & 255;   /* low 8 bits: access frequency counter */
    /* number of decay periods elapsed since ldt */
    unsigned long num_periods = server.lfu_decay_time ? LFUTimeElapsed(ldt) / server.lfu_decay_time : 0;
    if (num_periods)
        counter = (num_periods > counter) ? 0 : counter - num_periods;
    return counter;
}

Both the increment and the decay therefore have corresponding configuration parameters. What about a newly allocated key? If its counter started at 0, it would very likely be evicted as soon as memory ran short, so by default the 8-bit counter of a newly allocated key is initialized to 5, which keeps it from being evicted immediately just because its access count is still low.
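
For reference, this is roughly how the lru field of a newly created object is initialized in the Redis source: under an LFU policy the high 16 bits get the current time in minutes and the low 8 bits get the constant LFU_INIT_VAL (5); otherwise the field simply gets the current LRU clock:

/* Sketch of how a new object's 24-bit lru field is filled in. */
if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {
    o->lru = (LFUGetTimeInMinutes() << 8) | LFU_INIT_VAL;   /* counter starts at 5 */
} else {
    o->lru = LRU_CLOCK();
}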

That covers the low 8 bits; what is the high 16-bit clock for? My current understanding is that it exists to decay the low 8-bit counter: the stored time is compared with the current time, and once enough time has passed the counter is decremented, as LFUDecrAndReturn above shows.
Finally, Redis evicts the key whose counter is smallest (smallest meaning least frequently used); note that this step, too, picks candidate keys by random sampling according to the configured policy.

Things to be aware of

  • Do not store garbage data; clean up useless data promptly.
  • Experimental data and data belonging to decommissioned business should be deleted in time.
  • Set an expiration time on keys wherever possible.
    For time-sensitive keys, set an expiration time and let Redis's own expired-key cleanup policy reduce the memory held by expired keys; this also spares the application from having to clean up manually on a schedule (see the example after this list).
  • Do not let a single key grow too large.
    While troubleshooting a user issue I once found a single string value of 43 MB, and a list with more than a million members occupying over 1 GB of memory. Such keys cause noticeable network transfer latency on reads, require a large output buffer, and easily cause latency spikes during periodic cleanup. It is better to split the data at the business level, compress it, or otherwise avoid creating such oversized keys.
  • If different businesses share one instance, separate them into different logical dbs.
    As the analysis above shows, both the expired-key cleanup policy and the forced eviction policy traverse each db in turn. Spreading keys across different dbs helps expired keys get cleaned up promptly. Using different dbs for different businesses also helps with troubleshooting and with taking useless data offline in time.
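
A tiny illustration of two of the points above (the key name, TTL and db index are invented for the example): write a time-sensitive key with a TTL attached, and keep a separate business in its own logical db:

SELECT 1
SET user:session:42 "session-payload" EX 3600
TTL user:session:42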

Copyright notice
This article was written by [Just put a flower in heaven and earth]. Please include a link to the original when reposting. Thanks.
https://yzsam.com/2022/177/202206252118150671.html