Redis Notes, Part 2: Pipelines, Pub/Sub, Bloom Filters, Redis as a Database and as a Cache, RDB, AOF, and the Redis Configuration File
2022-07-03 21:21:00 【Haozhan】
Pipelines
Why use pipelining?
Consider issuing many commands such as `incr` in a row. With a pipeline, the client sends multiple commands to the server in one batch instead of waiting for each reply one by one, and reads all the replies at the end. This saves network round trips.
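As an illustration of the round-trip saving, here is a minimal toy model in Python. The "server" here is a hypothetical in-process object, not the real Redis protocol or the redis-py API; it only shows the batching idea.

```python
# Toy model of pipelining: the client queues commands locally and flushes
# them to the "server" in one batch, reading all replies at the end.
# This is an illustration only, not the real Redis RESP protocol.

class FakeServer:
    def __init__(self):
        self.store = {}

    def execute(self, cmd, key):
        if cmd == "INCR":
            self.store[key] = self.store.get(key, 0) + 1
            return self.store[key]
        raise ValueError(cmd)

class Pipeline:
    def __init__(self, server):
        self.server = server
        self.queue = []          # commands buffered client-side

    def incr(self, key):
        self.queue.append(("INCR", key))
        return self              # allow chaining

    def execute(self):
        # one "round trip": send everything, collect all replies
        replies = [self.server.execute(c, k) for c, k in self.queue]
        self.queue.clear()
        return replies

server = FakeServer()
pipe = Pipeline(server)
replies = pipe.incr("counter").incr("counter").incr("counter").execute()
print(replies)  # [1, 2, 3]
```

With a real client such as redis-py the shape is similar in spirit: queue commands on a pipeline object, then call `execute()` once.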
Publish/subscribe (pub/sub)
The blocking list commands (the Bxxx family, e.g. BLPOP) form a blocking unicast queue: each message goes to one consumer.
Publish/subscribe is multicast: every subscriber receives the message.
Approach: publish and subscribe.
Problem: subscribers that join later cannot see earlier messages.
Solution: keep a message history.
1. Where should the history live?
   - Recent history (e.g. the last day): cache it in Redis (fast), using a sorted set.
   - Older history: keep the full data in a database.
   How is the sorted set used? `zremrangebyrank` trims away the oldest entries.
2. Publishing one message then involves: ① publish it, ② add it to the cache, ③ optionally write it to the database, e.g. via Kafka.
Redis transactions (`multi`, `exec`, `discard`, `watch`) can be used to group these steps.
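The recent-history cache described above (a sorted set capped by trimming the oldest ranks, as `zremrangebyrank` does) can be sketched with plain Python structures; all names here are hypothetical and the "sorted set" is just a sorted list:

```python
# Toy sorted-set history cache: keep only the newest MAX_HISTORY messages,
# mimicking ZADD followed by ZREMRANGEBYRANK 0 -(MAX_HISTORY+1).
MAX_HISTORY = 3
history = []  # list of (score, message), kept sorted by score (timestamp)

def publish_with_history(ts, msg):
    history.append((ts, msg))
    history.sort(key=lambda p: p[0])   # sorted-set ordering by score
    del history[:-MAX_HISTORY]         # drop oldest ranks beyond the cap

for i in range(5):
    publish_with_history(i, f"msg{i}")
print([m for _, m in history])  # ['msg2', 'msg3', 'msg4']
```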
Redis transactions
`multi` marks the start of a transaction, `exec` executes it, and `watch` monitors one or more keys (if a watched key is modified before `exec`, the transaction is not executed at all and `exec` returns nil).
Redis does not support rollback.
Whichever client calls `exec` first has its transaction executed first.
Problem: since Redis does not support rollback, a command that errors is skipped while the others still execute.
Solution: use `watch`; if the watched key is modified, the transaction will not run.
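A minimal sketch of the optimistic-locking idea behind watch/multi/exec, using a hypothetical version counter in plain Python (not the real Redis implementation, which compares key state rather than explicit versions):

```python
# Toy optimistic lock in the spirit of WATCH/MULTI/EXEC: the "transaction"
# records the version of the watched key and aborts (returns None, like
# EXEC returning nil) if the key changed before it runs.
store = {"balance": 100}
versions = {"balance": 0}

def watched_exec(watched_key, seen_version, commands):
    if versions[watched_key] != seen_version:
        return None  # key was modified: EXEC does nothing, returns nil
    for fn in commands:
        fn()
    return "OK"

v = versions["balance"]          # WATCH: remember current version
# another client modifies the key between WATCH and EXEC
store["balance"] = 50
versions["balance"] += 1

result = watched_exec("balance", v, [lambda: store.update(balance=0)])
print(result, store["balance"])  # None 50  (transaction aborted)
```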
The Bloom filter: RedisBloom
What is it for?
It is an extension module that helps solve cache penetration.
How is it installed?
① Go to redis.io, find the modules page, and look for RedisBloom; download it.
② After unpacking, run `make` to build the `redisbloom.so` extension library.
③ Copy the extension library into the Redis installation directory, e.g. `cp redisbloom.so <redis install dir>/` (recommended), or load it straight from the build directory with `redis-server --loadmodule <build dir>/redisbloom.so`.
④ Then start Redis with `redis-server --loadmodule <redis install dir>/redisbloom.so` (an absolute path is required).
How does it solve cache penetration?
Each element is passed through several hash (mapping) functions, and the corresponding positions in a bit array are set to 1.
Adding an element sets the bits at all of its mapped positions to 1.
Looking an element up checks whether all of its mapped positions are 1: if they all are, the element may exist; if any is 0, the element was definitely never added.
There are also false positives (typically around 1%): adding `a` and `b` may happen to set exactly the bits that `c` maps to, so the filter believes `c` exists even though it was never added.
If a request gets past the Bloom filter but the value does not actually exist in the database, the client can add a key in Redis whose value marks "does not exist"; subsequent requests for the same key are then intercepted by that marker.
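A minimal Bloom filter sketch in Python, assuming SHA-256-derived hash positions and toy sizes (real RedisBloom uses different hashing and sizing; the class and constants here are hypothetical):

```python
import hashlib

# Minimal Bloom filter sketch: K hash functions map an item onto an M-bit
# array; membership tests can yield false positives but never false
# negatives. Sizes are tiny, for illustration only.
M = 64   # number of bits
K = 3    # number of hash functions

def _positions(item):
    for i in range(K):
        h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
        yield int(h, 16) % M

class Bloom:
    def __init__(self):
        self.bits = 0  # M-bit array packed into an int

    def add(self, item):
        for p in _positions(item):
            self.bits |= (1 << p)

    def might_contain(self, item):
        return all(self.bits & (1 << p) for p in _positions(item))

bf = Bloom()
bf.add("abc")
print(bf.might_contain("abc"))    # True (never a false negative)
print(bf.might_contain("sadfg"))  # likely False, but a false positive is possible
```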
Usage:
BF.add k1 abc
BF.exists k1 abc (returns 1: exists)
BF.exists k1 sadfg (returns 0: does not exist)
Advantage: cheap to implement, very low memory cost.
Disadvantage: elements cannot be deleted. You cannot simply reset a bit to 0 when deleting, because that bit may have been set by other elements as well (clearing it to delete one element could wrongly delete others).
Solutions: the counting Bloom filter and the cuckoo filter.
Counting Bloom filter: it replaces the bitmap with an array of counters; a position is incremented by 1 each time it is mapped to, and decremented by 1 on deletion. This avoids having to rehash all remaining elements after a deletion, as a plain Bloom filter would require, but it still cannot avoid false positives.
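A sketch of the counting variant, with the bitmap replaced by counters so deletion becomes a decrement (toy parameters, hypothetical helper names):

```python
import hashlib

# Counting Bloom filter sketch: each bit becomes a small counter, so
# deletions are possible (decrement instead of clearing a shared bit).
M, K = 64, 3
counters = [0] * M

def positions(item):
    return [int(hashlib.sha256(f"{i}:{item}".encode()).hexdigest(), 16) % M
            for i in range(K)]

def add(item):
    for p in positions(item):
        counters[p] += 1

def remove(item):
    for p in positions(item):
        counters[p] -= 1

def might_contain(item):
    return all(counters[p] > 0 for p in positions(item))

add("a"); add("b")
remove("a")                     # shared positions stay > 0, so "b" survives
print(might_contain("b"), might_contain("a"))
```

Note that `might_contain("a")` after the removal is usually False, but a false positive is still possible if all of a's positions overlap b's.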
Cuckoo filter
Cuckoo …
Redis as a cache
Cached data must be allowed to change:
① by time (expiration);
② by hot/cold access patterns (evict cold data, keep hot data).
When Redis is used as a cache, how much memory should it be given? (1 GB to 10 GB is typical.)
[Redis configuration file] The configuration lives in /etc/redis/6379.conf (the exact path depends on where you compiled Redis with `make` and what `./install_server.sh 6379` chose; this is the default location).
Eviction policies include: noeviction (use this when Redis is a database), volatile-lru, allkeys-lru, and others.
Difference between LRU and LFU: LRU tracks how long it has been since a piece of data was last used; LFU tracks how many times it has been used.
You can attach an expiration time to a key with `ex` (expire):
set key1 a ex 50 (expires after 50 seconds, counting down)
The following two forms are equivalent:
set key1 a
expire key1 50
ttl key1 (view the remaining time to live)
expireat (expire at a fixed point in time)
When do keys expire? How are expired keys removed?
Redis cannot constantly scan every key that might have expired; that would waste too much CPU.
Two approaches:
- Passive: when a key is accessed and found to be expired, it is cleaned up at that moment.
  Problem: keys that are never accessed keep occupying memory.
- Active:
  - Periodically (about 10 times per second) test 20 random keys that carry an expiration.
  - Delete all the keys found to be expired.
  - If more than 25% of the sampled keys were expired, repeat from step 1.
The goal is speed, at the cost of some memory.
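The active expiration loop above can be simulated in Python. The sample size and 25% threshold follow the text, the key set is artificial (every key already expired, so the run is deterministic), and real Redis's C implementation is adaptive and time-bounded:

```python
import random

# Simulation of the active expiration cycle: sample up to 20 random keys
# with a TTL, delete the expired ones, and repeat the pass while more
# than 25% of the sample was expired. A sketch, not Redis's actual code.
random.seed(42)
SAMPLE, THRESHOLD = 20, 0.25
now = 100
keys = {f"k{i}": 50 for i in range(1000)}  # every key is already expired

def expire_cycle():
    passes = 0
    while True:
        passes += 1
        sample = random.sample(list(keys), min(SAMPLE, len(keys)))
        expired = [k for k in sample if keys[k] < now]
        for k in expired:
            del keys[k]
        if len(expired) <= THRESHOLD * len(sample):
            return passes  # sample was "clean enough", stop this cycle

p = expire_cycle()
print(p, len(keys))  # 51 0  (50 full passes of 20 deletions, then one empty pass)
```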
Redis as a database
Redis stores data in memory, so data is lost on power failure.
Solutions: snapshots/copies and log replay.
In Redis: ① RDB snapshots (copies), ② the AOF log.
RDB
How is point-in-time consistency ensured (the snapshot must represent one moment)?
On Linux, after a fork, parent and child processes can each modify data without affecting the other's view.
How is the copy made? Via the `fork` system call (which copies page tables, i.e. pointers), with copy-on-write semantics: a page is copied only when one side writes to it.
Effect: creating the child process is fast and uses little extra memory.
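The fork/copy-on-write behavior can be demonstrated in Python on a POSIX system (this assumes `os.fork` is available, i.e. Linux or macOS; the child reports its value back over a pipe):

```python
import os

# After fork, the child gets a logical copy of the parent's memory, so a
# write in the child does not change the parent's view of the data.
data = {"count": 0}
r, w = os.pipe()
pid = os.fork()
if pid == 0:                      # child process
    data["count"] = 999           # modifies the child's copy only
    os.write(w, str(data["count"]).encode())
    os._exit(0)
else:                             # parent process
    os.waitpid(pid, 0)
    child_value = int(os.read(r, 16))
    print(child_value, data["count"])  # 999 0
```

This is exactly why `bgsave` works: the forked child serializes a frozen view of the dataset while the parent keeps serving writes.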
How Redis produces an RDB file:
- Triggered manually, by command: `save` (runs in the foreground and blocks) or `bgsave` (asynchronous and non-blocking; it triggers a fork).
- Triggered by the configuration file: the `save` rules in the config fire a `bgsave`.
save 60 10000 means: write an RDB if at least 10000 changes occurred within 60 seconds.
save 300 100 means: if the rule above did not fire, write an RDB if at least 100 changes occurred within 300 seconds.
save 3600 1 means: write an RDB if at least 1 change occurred within 3600 seconds.
What is the RDB file called, and where is it stored? (By default it is dump.rdb, in the working directory set by `dir`.)
Disadvantages: ① No history chain is kept: the next RDB simply overwrites the previous file. ② Data written since the last snapshot window may be lost.
Advantage: an RDB file is essentially a serialized dump, so recovery is extremely fast.
By default, snapshot and log files live under /var/lib/redis/<port>/.
AOF
Redis appends write operations to a log file.
Advantage: data is less easily lost.
Disadvantages: the file is large and recovery is slow (both can be mitigated by enabling hybrid mode).
RDB and AOF can be enabled at the same time, but on recovery only the AOF is used.
Since 4.0, the AOF can contain the full RDB data plus incremental write operations.
Why this design? RDB recovery is fast; the hybrid format is the fastest overall.
Before 4.0 there was only rewrite-based compaction (which removes commands that cancel each other out), but the result was still a pure command file.
Since 4.0, the rewrite puts an RDB image of the old data at the front of the AOF file, and subsequent increments are appended to the file as commands.
Problem: Redis is an in-memory database built for speed, but every write must also be recorded in the AOF, which triggers disk I/O. What to do?
Redis offers 3 fsync levels: `no` (may lose up to one buffer's worth of data), `always` (most reliable, but most costly), `everysec` (the compromise).
`appendonly` defaults to `no`, i.e. AOF is off by default.
When AOF is enabled, the AOF+RDB hybrid mode is on by default.
To enable hybrid mode:
aof-use-rdb-preamble yes
How to tell whether an AOF file uses the post-4.0 hybrid format: check whether it starts with the REDIS magic string.
Rewrite methods:
- Run `bgrewriteaof` manually:
  bg(background) rewrite aof, i.e. rewrite the AOF file in the background.
- Automatic rewrite, via configuration:
  auto-aof-rewrite-percentage 100
  auto-aof-rewrite-min-size 64mb