
Redis (I) Internal data structure

2022-06-12 10:02:00 A god given dream never wakes up


Unlike other key-value databases, Redis supports many data structures and provides a rich set of powerful APIs on top of them. These APIs are backed by Redis's efficient internal data structures and the corresponding functions; in short, every command we issue to Redis is supported by an internal data structure and its functions.

1. Simple dynamic string (SDS)

1.1 Application

SDS (Simple Dynamic String) is the string representation Redis uses at the bottom layer, rather than C's default char*.

The char* type is simple and good enough in most cases, but it does not support two operations efficiently: length calculation and appending (append). Computing the length is O(n), and every append requires reallocating memory.

1.2 Implementation

A simplified definition of sds is:

typedef char *sds;

struct sdshdr {
    // length of buf currently in use
    int len;
    // remaining free space in buf
    int free;
    // where the string data is actually stored
    char buf[];
};

The object representing the string "nihao" looks like this:

struct sdshdr {
    len = 5;
    free = 0;
    buf = "nihao\0";
};

The len field makes length calculation O(1); the free field is used for further optimizations, as the following example shows.

Example:

set testkey nihao

struct sdshdr {
    len = 5;
    free = 0;
    buf = "nihao\0";
};

append testkey nihaonihao

struct sdshdr {
    len = 15;
    free = 15;
    buf = "nihaonihaonihao\0";
};

After APPEND is executed, Redis allocates buf with twice the space actually required.

append testkey sssss

struct sdshdr {
    len = 20;
    free = 10;
    buf = "nihaonihaonihaosssss\0";
};

If the length of the appended content does not exceed the value of free, buf does not need to be reallocated:

sds _sdsMakeRoomFor(sds s, size_t addlen, int greedy) {
    ......

    /* Return ASAP if there is enough space left:
     * if the appended length does not exceed free, buf is not reallocated. */
    if (avail >= addlen) return s;

    len = sdslen(s);
    sh = (char*)s-sdsHdrSize(oldtype);
    /* Compute the total length of the new string. */
    reqlen = newlen = (len+addlen);
    assert(newlen > len);   /* Catch size_t overflow */
    /* If the new total length is below SDS_MAX_PREALLOC,
     * allocate twice the required length;
     * otherwise allocate the required length plus SDS_MAX_PREALLOC. */
    if (greedy == 1) {
        if (newlen < SDS_MAX_PREALLOC)
            newlen *= 2;
        else
            newlen += SDS_MAX_PREALLOC;
    }
    ......
    usable = usable-hdrlen-1;
    if (usable > sdsTypeMaxSize(type))
        usable = sdsTypeMaxSize(type);
    /* Record the allocated size. */
    sdssetalloc(s, usable);
    return s;
}

SDS_MAX_PREALLOC is 1024*1024 = 1MB. When reallocating, the size is doubled; if the expanded string is larger than 1MB, only an extra 1MB is allocated, so free = 1MB.

This keeps the wasted memory bounded. Only strings modified with the APPEND command use this preallocation strategy.
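To make the policy concrete, here is a minimal, self-contained sketch (not the actual Redis source; the names my_sds_grow and MY_MAX_PREALLOC are invented for illustration) of the greedy growth rule described above:

#include <assert.h>
#include <stddef.h>

#define MY_MAX_PREALLOC (1024*1024)   /* 1 MB, mirrors SDS_MAX_PREALLOC */

/* Given the current length and the number of bytes to append, return how
 * many bytes buf should hold after the realloc: small strings double,
 * large strings get at most 1 MB of extra free space. */
static size_t my_sds_grow(size_t len, size_t addlen) {
    size_t newlen = len + addlen;
    assert(newlen > len);             /* catch size_t overflow */
    if (newlen < MY_MAX_PREALLOC)
        newlen *= 2;
    else
        newlen += MY_MAX_PREALLOC;
    return newlen;
}

/* Example: my_sds_grow(5, 10) == 30, which matches the APPEND example
 * above (len = 15, free = 15). */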

1.3 Summary

Summary: Redis uses SDS instead of C's char*. Advantages: SDS offers O(1) length calculation, efficient appends, and binary safety. Disadvantages: it may waste some memory, and the preallocated memory is not released automatically.

2. Double-ended linked list

A commonly used data structure; it resembles an array, but offers richer operations.

2.1 Application

When commands such as RPUSH, LPOP, or LLEN are executed, the structure the program operates on underneath is a double-ended linked list. Besides the list type used by clients:
• The transaction module uses a double-ended linked list to store the enqueued commands in order;
• The server module uses a double-ended linked list to store multiple clients;
• The publish/subscribe module uses a double-ended linked list to store the clients subscribed to a pattern;
• The event module uses a double-ended linked list to store time events (time event);

2.2 Implementation

The Redis list type uses two data structures as its underlying implementation:

  1. Double-ended linked list

  2. Compressed list (ziplist)

The linked list itself consists mainly of two data structures: listNode and list.


Node: listNode

typedef struct listNode {
    // previous node
    struct listNode *prev;
    // next node
    struct listNode *next;
    // value; void * means the node can hold data of any type
    void *value;
} listNode;

list: the linked list itself

typedef struct list {
    // head pointer
    listNode *head;
    // tail pointer
    listNode *tail;
    // number of nodes
    unsigned long len;
    // node value copy function
    void *(*dup)(void *ptr);
    // node value free function
    void (*free)(void *ptr);
    // node value comparison function
    int (*match)(void *ptr, void *key);
} list;
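
As a quick illustration of how the two structures fit together, here is a minimal sketch (not Redis's own listCreate/listAddNodeTail API; the myList/myListNode names are invented and the dup/free/match fields are omitted for brevity) that appends two nodes in O(1) and walks the list from head to tail:

#include <stdio.h>
#include <stdlib.h>

typedef struct myListNode {
    struct myListNode *prev;
    struct myListNode *next;
    void *value;
} myListNode;

typedef struct myList {
    myListNode *head;
    myListNode *tail;
    unsigned long len;
} myList;

/* Append a value at the tail in O(1), thanks to the tail pointer. */
static void my_list_append(myList *l, void *value) {
    myListNode *node = malloc(sizeof(*node));
    node->value = value;
    node->prev = l->tail;
    node->next = NULL;
    if (l->tail) l->tail->next = node; else l->head = node;
    l->tail = node;
    l->len++;
}

int main(void) {
    myList l = {NULL, NULL, 0};
    my_list_append(&l, "first");
    my_list_append(&l, "second");
    /* Forward traversal; the prev pointers allow the reverse walk. */
    for (myListNode *n = l.head; n != NULL; n = n->next)
        printf("%s\n", (char *)n->value);
    printf("len = %lu\n", l.len);   /* O(1), no traversal needed */
    return 0;
}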

2.3 Summary

Summary

Redis implements its own linked list.

From the definitions of these two data structures, the list offers the following performance characteristics:

  • Bidirectional traversal
  • O(1) insertion at both the head and the tail
  • O(1) length calculation

3. Dictionary

A dictionary, also known as a map, is composed of a set of key-value pairs.

3.1 Application

  1. Implementing the database key space (key space); for example:

    ## remove all key-value pairs from the key space
    FLUSHDB
    ## return the number of keys in the key space
    DBSIZE
    ## add a string key to the key space
    set key value
    
  2. Serving as one of the underlying implementations of the hash type key

    The hash type is implemented with a dictionary + compressed list (similar to Java's HashMap)

127.0.0.1:6379[2]> hset fam lp lhh
(integer) 1
127.0.0.1:6379[2]> hset fam lg sff
(integer) 1
127.0.0.1:6379[2]> hgetall fam
1) "lp"
2) "lhh"
3) "lg"
4) "sff"
127.0.0.1:6379[2]>

3.2 Implementation

There are many ways to implement a dictionary:

  • The simplest are linked lists and arrays, but they only work when the number of elements is small;
  • For both efficiency and simplicity, a hash table can be used;
  • For stable worst-case performance, a balanced tree can be used.

Redis chooses the efficient and simple hash table as the underlying implementation of its dictionary.

Dictionary implementation

/*
 * Dictionary
 *
 * Each dictionary uses two hash tables to support incremental rehashing.
 */
typedef struct dict {
    // type-specific handling functions
    dictType *type;
    // private data for the type functions
    void *privdata;
    // hash tables (2 of them): ht[0] is the table the dictionary normally uses,
    // ht[1] is only used while ht[0] is being rehashed
    dictht ht[2];
    // rehash progress marker; -1 means no rehash is in progress
    int rehashidx;
    // number of safe iterators currently running
    int iterators;
} dict;

The hash table used by the dictionary is implemented by dictht:

/*
 * Hash table
 */
typedef struct dictht {
    // array of hash table node pointers (commonly called buckets);
    // each element points to a dictEntry, which stores a key-value pair
    // and a pointer to the next dictEntry
    dictEntry **table;
    // size of the pointer array
    unsigned long size;
    // size mask of the pointer array, used to compute index values
    unsigned long sizemask;
    // number of nodes stored in the hash table
    unsigned long used;
} dictht;

/*
 * Hash table node
 */
typedef struct dictEntry {
    // key
    void *key;
    // value
    union {
        void *val;
        uint64_t u64;
        int64_t s64;
    } v;
    // pointer to the next node in the chain
    struct dictEntry *next;
} dictEntry;

dictht resolves key collisions with separate chaining: when multiple different keys hash to the same slot, the hash table links their nodes together with a linked list. (This is similar to Java's HashMap, where a bucket also stores its entries as a linked list while it holds fewer than 8 of them.)

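A minimal sketch of how a lookup uses sizemask and the collision chain, assuming the dictht/dictEntry definitions above are in scope; my_find_in_table is an invented helper (keys are assumed to be C strings here), not Redis's actual dictFind:

#include <string.h>

/* Look up a C-string key in one hash table: pick the bucket with
 * hash & sizemask, then walk the chained dictEntry nodes. */
static dictEntry *my_find_in_table(dictht *ht, const char *key,
                                   unsigned int (*hash)(const char *)) {
    if (ht->table == NULL) return NULL;                 /* empty table */
    unsigned long idx = hash(key) & ht->sizemask;       /* bucket index */
    for (dictEntry *he = ht->table[idx]; he != NULL; he = he->next) {
        if (strcmp((const char *)he->key, key) == 0)    /* same key? */
            return he;
    }
    return NULL;                                        /* not found */
}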

The hash algorithm

Redis mainly uses the 32-bit MurmurHash2 algorithm.

3.3 Dictionary operations

3.3.1 Creation


When a dictionary is first created, no space is allocated for the table attribute of either hash table in ht[2]: ht[0] gets its space the first time a key-value pair is added, and ht[1] gets its space when a rehash starts.
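
A minimal sketch of what "no space allocated yet" means in practice (illustrative helpers, not the actual dictCreate; assumes the dict/dictht definitions above):

/* Reset one hash table to the empty state: no bucket array is
 * allocated until the first key-value pair arrives. */
static void my_dictht_reset(dictht *ht) {
    ht->table = NULL;
    ht->size = 0;
    ht->sizemask = 0;
    ht->used = 0;
}

/* Initialize a dictionary: both tables empty, rehashidx = -1
 * meaning no rehash is in progress. */
static void my_dict_init(dict *d, dictType *type, void *privdata) {
    my_dictht_reset(&d->ht[0]);
    my_dictht_reset(&d->ht[1]);
    d->type = type;
    d->privdata = privdata;
    d->rehashidx = -1;
    d->iterators = 0;
}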

3.3.2 Adding a key-value pair

Before adding, assume we start with the blank dictionary described in 3.3.1.

  • Space is allocated for ht[0] when the first key-value pair is added.


  • If a key collision occurs, the program has to handle the collision (by chaining);


  • If the rehash conditions are met, the corresponding rehash procedure has to be started.

    The performance of a hash table depends on the ratio between its size attribute (total number of buckets) and its used attribute (number of nodes stored): a 1:1 ratio is optimal, since a lookup is then essentially a single hash computation.


As the ratio grows, the hash table degenerates toward a linked list and loses its lookup advantage: after locating the bucket, the program still has to traverse the key-value pairs in the chain.


So before every key-value pair is added, the program first checks whether the rehash conditions are met. With ratio = used / size, a rehash is activated (note: activated, not immediately executed) if either of the following conditions holds (a small sketch of this check follows the list):

  • Natural rehash: ratio >= 1, and the variable dict_can_resize is true (it is temporarily set to false while a background persistence task such as AOF rewrite or BGSAVE is running, to avoid touching too much memory, and becomes true again once the task completes).
  • Forced rehash: ratio is greater than the variable dict_force_resize_ratio (in the current version, dict_force_resize_ratio is 5).
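
A hedged sketch of the activation check described above (my_dict_can_resize and MY_FORCE_RESIZE_RATIO are illustrative stand-ins for dict_can_resize and dict_force_resize_ratio):

#include <stdbool.h>

static bool my_dict_can_resize = true;   /* cleared while BGSAVE / AOF rewrite runs */
#define MY_FORCE_RESIZE_RATIO 5          /* mirrors dict_force_resize_ratio */

/* Decide whether adding one more key should activate a rehash,
 * following the natural / forced conditions listed above. */
static bool my_needs_rehash(unsigned long used, unsigned long size) {
    if (size == 0) return true;          /* first insert: ht[0] must be allocated */
    double ratio = (double)used / (double)size;
    if (ratio >= 1.0 && my_dict_can_resize) return true;   /* natural rehash */
    if (ratio > MY_FORCE_RESIZE_RATIO) return true;        /* forced rehash */
    return false;
}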

3.3.3 Rehash execution process

A dictionary rehash actually performs the following tasks, the main effect being an increase in the size of the hash table:

  1. Create an ht[1]->table larger than ht[0]->table (its size is at least twice ht[0]->used);
  2. Migrate all key-value pairs from ht[0]->table to ht[1]->table;
  3. Clear the original ht[0] and replace it with the new ht[1].

3.3.4 Incremental rehash

Note: this applies to a rehash that has been activated.

The rehash is split into many small steps and completed gradually. Incremental rehash is driven mainly by two functions, _dictRehashStep and dictRehashMilliseconds:
• _dictRehashStep performs passive rehashing for the database dictionary and for hash-key dictionaries: each add, lookup, or delete operation advances the rehash by one step;


• dictRehashMilliseconds is run by the Redis server's periodic task (server cron job) and actively rehashes the database dictionary for a specified number of milliseconds;

Note: while a rehash is in progress, both hash tables are in use; lookups, deletions, and similar operations are performed on both tables, while new key-value pairs are added only to ht[1].
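
A minimal sketch of a single incremental step in the spirit of _dictRehashStep (assumes the dict/dictht/dictEntry definitions above; my_rehash_one_bucket and my_hash_key are invented names): it moves one ht[0] bucket's chain into ht[1] and advances rehashidx.

/* Stand-in for the real hash function (e.g. MurmurHash2). */
extern unsigned long my_hash_key(const void *key);

/* Move every entry in bucket ht[0].table[rehashidx] over to ht[1] and
 * advance rehashidx. Returns 1 while more buckets remain, 0 once ht[0]
 * is empty and can be replaced by ht[1]. */
static int my_rehash_one_bucket(dict *d) {
    if (d->rehashidx == -1) return 0;                /* no rehash in progress */
    /* Skip empty buckets. */
    while ((unsigned long)d->rehashidx < d->ht[0].size &&
           d->ht[0].table[d->rehashidx] == NULL)
        d->rehashidx++;
    if ((unsigned long)d->rehashidx >= d->ht[0].size) return 0;

    dictEntry *he = d->ht[0].table[d->rehashidx];
    while (he != NULL) {
        dictEntry *next = he->next;
        unsigned long idx = my_hash_key(he->key) & d->ht[1].sizemask;
        he->next = d->ht[1].table[idx];              /* chain into ht[1] */
        d->ht[1].table[idx] = he;
        d->ht[0].used--;
        d->ht[1].used++;
        he = next;
    }
    d->ht[0].table[d->rehashidx] = NULL;
    d->rehashidx++;
    return 1;
}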

3.3.5 Shrinking the dictionary

A rehash can not only expand the hash table but also shrink it; shrinking performs the following steps:

  1. Create an ht[1]->table smaller than ht[0]->table;
  2. Migrate all key-value pairs from ht[0]->table to ht[1]->table;
  3. Clear the original ht[0] and replace it with the new ht[1].

When the fill rate of the dictionary drops below 10%, the program can shrink it (a small sketch of this check follows the list). One difference between shrinking and expanding the dictionary is:
• Dictionary expansion is triggered automatically (whether as a natural or a forced rehash), decided by a check the program calls while operating on the dictionary;
• Dictionary shrinking is carried out by the program itself, which performs it manually.
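
A small sketch of the fill-rate check mentioned above (illustrative helper name; the constant 10 mirrors the 10% threshold):

/* Return 1 when the table's fill rate has dropped below 10% and a
 * shrinking rehash is worthwhile. */
static int my_ht_needs_shrink(unsigned long used, unsigned long size) {
    if (size == 0) return 0;         /* nothing allocated yet */
    return (used * 100 / size) < 10;
}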

3.4 Summary

  • Redis databases and hash keys are both based on the dictionary
  • Hash tables 0 and 1 are used at the same time only while a rehash is in progress
  • The hash table resolves key collisions with separate chaining
  • A hash table rehash is carried out in many small, incremental steps
  • A rehash can expand the hash table as well as shrink it

4. Skip list


• Header (head): the node pointer that maintains the entry point of the skip list.
• Skip list node: holds the element value and multiple layers.
• Layer: holds pointers to other nodes. A pointer in a higher layer spans at least as many elements as one in a lower layer. To improve search efficiency, the program always starts at the highest layer and gradually moves down as the range of candidate values narrows.
• Tail: consists of NULL pointers, marking the end of the skip list.

4.1 Implementation

A search follows these steps (start at the top layer, where values increase from left to right; keep moving down and narrowing the range until the target is located; a sketch of this loop appears after the structure definitions below):

  1. Start at the highest layer.
  2. In the current layer, advance to the element just before the first element whose key is greater than the key being searched for.
  3. Move down to the next layer and repeat.
  4. Conceptually, the leftmost end of each layer is negative infinity and the rightmost end is positive infinity.

Redis's skip list is based on the one described in William Pugh's paper, with the following modifications; the time complexity is O(log n):

  1. Duplicate score values are allowed: many different members may have the same score.
  2. Comparisons check not only the score but also the member: when scores can repeat, the score alone cannot identify an element, so the member field must be compared as well.
  3. Each node has a backward pointer at layer 1, used to iterate from the tail toward the head: this pointer is used when commands that traverse the sorted set in reverse order, such as ZREVRANGE or ZREVRANGEBYSCORE, are executed.

zskiplist

typedef struct zskiplist {
    // head node and tail node
    struct zskiplistNode *header, *tail;
    // number of nodes
    unsigned long length;
    // current maximum number of layers among the nodes in the list
    int level;
} zskiplist;

zskiplistNode

typedef struct zskiplistNode {
    // member object
    robj *obj;
    // score
    double score;
    // backward pointer
    struct zskiplistNode *backward;
    // layers
    struct zskiplistLevel {
        // forward pointer
        struct zskiplistNode *forward;
        // number of nodes this layer's pointer spans
        unsigned int span;
    } level[];
} zskiplistNode;
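
A minimal sketch of the top-down search loop from the steps in 4.1, written against the zskiplist/zskiplistNode definitions above (my_zsl_find and my_compare_member are invented names; the real code lives in Redis's sorted set implementation):

/* Stand-in for Redis's member comparison: returns <0, 0 or >0 like strcmp. */
extern int my_compare_member(robj *a, robj *b);

/* Find the node holding (score, member), or NULL. Start at the highest
 * layer, move right while the next node is still "smaller", then drop a
 * layer; both score and member are compared, as described above. */
static zskiplistNode *my_zsl_find(zskiplist *zsl, double score, robj *member) {
    zskiplistNode *x = zsl->header;
    for (int i = zsl->level - 1; i >= 0; i--) {
        while (x->level[i].forward != NULL &&
               (x->level[i].forward->score < score ||
                (x->level[i].forward->score == score &&
                 my_compare_member(x->level[i].forward->obj, member) < 0))) {
            x = x->level[i].forward;
        }
    }
    x = x->level[0].forward;          /* candidate node, if any */
    if (x != NULL && x->score == score &&
        my_compare_member(x->obj, member) == 0)
        return x;
    return NULL;
}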

4.2 Application

Its only use is to implement the sorted set data type (sorted set): the skip list stores each element's score value together with a pointer to its member field as the element, and the score acts as the index by which the elements of the sorted set are ordered.

4.3 Summary

  • The skip list is a randomized data structure
  • Redis uses it only for the sorted set type
  • Redis's skip list is based on the one described in William Pugh's paper, with modifications

Copyright notice
This article was created by [A god given dream never wakes up]. Please include a link to the original when reposting. Thanks.
https://yzsam.com/2022/03/202203010528446309.html