
redis.conf general configuration details

2022-06-13 02:28:00 Programmer ah Hong

# Redis sample configuration file
#
# Note on units: when a memory size is needed, it can be specified in the
# usual forms such as 1k, 5GB, 4M and so on:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# Units are case insensitive, so 1GB, 1Gb and 1gB are all the same.
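The unit rules above (plain `k`/`m`/`g` suffixes are decimal, `kb`/`mb`/`gb` are binary) can be sketched in Python; `parse_memory` is a hypothetical helper for illustration, not part of Redis itself:

```python
# Redis-style memory-size parsing: suffixes without "b" are decimal
# (1k = 1000), suffixes with "b" are binary (1kb = 1024).
UNITS = {
    "": 1,
    "k": 1000, "m": 1000**2, "g": 1000**3,
    "kb": 1024, "mb": 1024**2, "gb": 1024**3,
}

def parse_memory(value: str) -> int:
    """Parse strings like '1k', '5GB', '4M' into a byte count."""
    value = value.strip().lower()
    digits = value.rstrip("kmgb")   # numeric part
    suffix = value[len(digits):]    # unit part (case insensitive)
    return int(digits) * UNITS[suffix]

print(parse_memory("1k"))    # 1000
print(parse_memory("1kb"))   # 1024
print(parse_memory("5GB"))   # 5368709120
```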
################################## INCLUDES ###################################
# Include one or more other config files here. This is useful if you have a
# standard configuration template that applies to all Redis servers but also
# need a few per-server custom settings. Included files can include other
# files, so use this feature wisely.
#
# Notice that the "include" option cannot be rewritten by an admin or by the
# Redis Sentinel "CONFIG REWRITE" command. Since Redis always uses the last
# processed line as the value of a configuration directive, you'd better put
# includes at the beginning of this file to avoid having them overwrite
# configuration changes made at runtime.
#
# If instead you want the included files to override the other settings,
# it is better to use include as the last line of the file.
#
# include /path/to/local.conf
# include /path/to/other.conf
################################ GENERAL #####################################
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that when daemonized, Redis will write its process ID to the file
# /var/run/redis.pid.
daemonize no
# When running daemonized, Redis writes its process ID to /var/run/redis.pid
# by default. You can specify a different path here.
pidfile /var/run/redis.pid
# Accept connections on the specified port, default is 6379.
# If port 0 is specified, Redis will not listen on a TCP socket.
port 6379
# TCP listen() backlog.
#
# This is the size of the SYN queue used while the server and a client are
# completing the TCP connection handshake. In high-concurrency environments
# you need a high backlog in order to avoid slow-client connection issues.
# Note that the Linux kernel will silently truncate it to the value of
# /proc/sys/net/core/somaxconn, so make sure to raise both somaxconn and
# tcp_max_syn_backlog in order to get the desired effect.
tcp-backlog 511
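The kernel truncation described above can be illustrated with a small sketch; `effective_backlog` is a made-up helper, not an actual kernel or Redis function:

```python
# Linux silently caps the listen() backlog at net.core.somaxconn, so the
# effective queue depth is the minimum of the two values.
def effective_backlog(tcp_backlog: int, somaxconn: int) -> int:
    return min(tcp_backlog, somaxconn)

# somaxconn defaults to 128 on many older kernels, so tcp-backlog 511 is
# silently reduced unless somaxconn is raised as well.
print(effective_backlog(511, 128))   # 128
print(effective_backlog(511, 1024))  # 511
```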
# By default Redis listens for connections from all the network interfaces
# available on the server. It is possible to listen to just one or multiple
# interfaces using the "bind" configuration directive, followed by one or
# more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen on a
# Unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 755
# Close the connection after a client is idle for N seconds (0 to disable,
# never close).
timeout 0
# TCP keepalive.
#
# If non-zero, set SO_KEEPALIVE to send TCP ACKs to clients with idle
# connections. This is useful for two reasons:
#
# 1) Detect dead (unresponsive) peers.
# 2) Let intermediate network equipment know that the connection is alive.
#
# On Linux, the specified value (unit: seconds) is the period used to send
# the ACKs.
# Note that to actually close the connection, double that time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 60 seconds.
tcp-keepalive 0
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful but concise messages, not as noisy as debug)
# notice (moderately verbose, probably what you want in production)
# warning (only very important/critical messages are logged)
loglevel notice
# Specify the log file name. "stdout" can also be used to force Redis to
# write log messages to the standard output.
# Note: if Redis runs as a daemon and logging is set to standard output,
# the logs will be sent to /dev/null.
logfile ""
# To enable logging to the system logger, just set "syslog-enabled" to
# "yes", then optionally update the other syslog parameters as needed.
# syslog-enabled no
# Specify the syslog identity.
# syslog-ident redis
# Specify the syslog facility. Must be USER or between LOCAL0 and LOCAL7.
# syslog-facility local0
# Set the number of databases. The default database is DB 0; you can select
# a different one on a per-connection basis using SELECT <dbid> where dbid
# is a number between 0 and 'databases'-1.
databases 16
################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given number
# of write operations against the DB occurred.
#
# In the examples below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving entirely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save points by
# adding a save directive with a single empty string argument, as in the
# following example:
# save ""
save 900 1
save 300 10
save 60 10000
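The trigger logic of the three save lines above can be sketched as follows. This is a simplified illustration (`should_snapshot` is a made-up helper; real Redis also resets the change counter after each save):

```python
# Each (seconds, changes) pair is an independent trigger: a snapshot fires
# as soon as ANY rule's time window has elapsed with enough changes.
SAVE_RULES = [(900, 1), (300, 10), (60, 10000)]

def should_snapshot(elapsed: int, changes: int) -> bool:
    return any(elapsed >= secs and changes >= chg
               for secs, chg in SAVE_RULES)

print(should_snapshot(100, 5))      # False: no rule satisfied yet
print(should_snapshot(301, 10))     # True: the 300-second rule fires
print(should_snapshot(60, 10000))   # True: the 60-second rule fires
```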
# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# to disk properly; otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process starts working again, Redis will
# automatically allow writes again.
#
# However if you have set up proper monitoring of the Redis server and of
# its persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with the disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes
# Compress string objects using LZF when dumping .rdb databases?
# The default is "yes", as it's a win in almost every case.
# If you want to save some CPU you can set this to "no", but the data file
# will be bigger if you have compressible keys or values.
rdbcompression yes
# Since version 5 of the RDB format, a CRC64 checksum is placed at the end
# of the file. This makes the format more resistant to corruption, but there
# is a performance hit (around 10%) when producing and loading RDB files, so
# you can disable it for maximum performance.
#
# RDB files created with checksumming disabled have a checksum of zero,
# which tells the loading code to skip the check.
rdbchecksum yes
# The filename where to dump the DB.
dbfilename dump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the "dbfilename" directive.
#
# The append-only file will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./
################################# REPLICATION #################################
# Master-slave replication. Use the "slaveof" directive to make a Redis
# instance a copy of another Redis server.
# Note that the configuration is local to the slave. In other words, the
# slave can have its own database file, bind to a different IP, and listen
# on a different port.
#
# slaveof <masterip> <masterport>
# If the master is password protected (using the "requirepass" option), the
# slave must authenticate before starting the replication synchronization
# process, otherwise its synchronization request will be refused.
#
# masterauth <master-password>
# When a slave loses its connection with the master, or while the
# synchronization is still in progress, the slave can act in two ways:
#
# 1) If slave-serve-stale-data is set to "yes" (the default), the slave will
#    still reply to client requests, possibly with out-of-date data, or with
#    an empty dataset if this is the first synchronization.
# 2) If slave-serve-stale-data is set to "no", the slave will reply with the
#    error "SYNC with master in progress" to every request, except for the
#    INFO and SLAVEOF commands.
#
slave-serve-stale-data yes
# You can configure a slave instance to accept writes or not. A writable
# slave may be useful to store some ephemeral data (because data written on
# a slave will be easily deleted after resync with the master), but may also
# cause problems if clients write to it because of a misconfiguration.
#
# Since Redis 2.6, slaves are read-only by default.
#
# Note: read-only slaves are not designed to be exposed to untrusted clients
# on the internet. They are just a protection layer against misuse of the
# instance. A read-only slave still exports all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# the security of read-only slaves by using 'rename-command' to shadow all
# the administrative / dangerous commands.
slave-read-only yes
# Slaves send PINGs to the master at a predefined interval. It is possible
# to change this interval with the repl_ping_slave_period option.
# The default value is 10 seconds.
#
# repl-ping-slave-period 10
# The following option sets a timeout for:
#
# 1) Bulk data transfer during SYNC, from the point of view of the slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of the master (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the specified
# repl-ping-slave-period, otherwise a timeout will be detected every time
# there is low traffic between the master and the slave.
#
# repl-timeout 60
# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes", Redis will use fewer TCP packets and less bandwidth
# to send data to slaves. But this can add a delay for the data to appear on
# the slave side, up to 40 milliseconds with Linux kernels using a default
# configuration.
#
# If you select "no", the delay for data to appear on the slave side will be
# reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic
# conditions, or when the master and slaves are many hops away, turning this
# to "yes" may be a good idea.
repl-disable-tcp-nodelay no
# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data while slaves are disconnected for some time, so that when a
# slave reconnects, a full resync is often not needed: a partial resync is
# enough, just passing the portion of data the slave missed while
# disconnected.
#
# The bigger the replication backlog, the longer a slave can be disconnected
# and later still be able to perform a partial resynchronization.
#
# The backlog is only allocated once, and only when there is at least one
# slave connected.
#
# repl-backlog-size 1mb
# After a master has had no connected slaves for some time, the backlog will
# be freed. The following option configures the number of seconds that need
# to elapse, starting from the time the last slave disconnected, for the
# backlog buffer to be freed.
#
# A value of 0 means never release the backlog.
#
# repl-backlog-ttl 3600
# The slave priority is an integer published by Redis in the INFO output. It
# is used by Redis Sentinel in order to select a slave to promote to master
# if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25, Sentinel
# will pick the one with priority 10, that being the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority 0 will never be selected by the
# Sentinels for promotion.
#
# By default the priority is 100.
slave-priority 100
# It is possible for a master to stop accepting writes if there are fewer
# than N slaves connected, each having a lag less than or equal to M
# seconds.
#
# The N slaves need to be in the "online" state.
#
# The lag in seconds, which must be <= the specified value, is calculated
# from the last ping received from the slave, which is usually sent every
# second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes, in case not enough
# slaves are available, to the specified number of seconds.
#
# For example, to require at least 3 slaves with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting either one to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.
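The acceptance check described above can be sketched like this; `writes_allowed` is an illustrative helper, not Redis source code:

```python
# A write is accepted only if at least min_slaves_to_write connected slaves
# report a replication lag <= min_slaves_max_lag seconds.
def writes_allowed(slave_lags, min_slaves_to_write=3, min_slaves_max_lag=10):
    if min_slaves_to_write == 0 or min_slaves_max_lag == 0:
        return True  # feature disabled
    good = sum(1 for lag in slave_lags if lag <= min_slaves_max_lag)
    return good >= min_slaves_to_write

print(writes_allowed([1, 2, 3]))    # True: 3 slaves within 10s of lag
print(writes_allowed([1, 2, 30]))   # False: only 2 slaves are fresh enough
print(writes_allowed([], 0, 10))    # True: feature disabled
```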
################################## SECURITY ###################################
# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility, and because
# most people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast, an outside user can try up to 150k
# passwords per second against a good box. This means that you should use a
# very strong password, otherwise it will be very easy to break.
#
# requirepass foobared
# Command renaming.
#
# In a shared environment it is possible to change the name of dangerous
# commands. For instance the CONFIG command may be renamed into something
# hard to guess, so that it will still be available for internal-use tools
# but not for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely disable a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that renaming commands that are logged into the AOF file or
# transmitted to slaves may cause problems.
################################### LIMITS ####################################
# Set the maximum number of clients connected at the same time. By default
# this limit is set to 10000 clients; however if the Redis server is not
# able to configure the process file limit to allow for the specified limit,
# the maximum number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached, Redis will close all new connections, sending
# the error 'max number of clients reached'.
#
# maxclients 10000
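The fallback described above can be sketched as follows; `effective_maxclients` is an illustrative helper mirroring the "file limit minus 32" rule, not Redis source:

```python
# If the process cannot raise its open-file limit high enough, Redis falls
# back to (file limit - 32), reserving descriptors for internal use.
RESERVED_FDS = 32

def effective_maxclients(requested: int, file_limit: int) -> int:
    if file_limit >= requested + RESERVED_FDS:
        return requested
    return file_limit - RESERVED_FDS

print(effective_maxclients(10000, 1024))   # 992
print(effective_maxclients(10000, 65536))  # 10000
```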
# Don't use more memory than the specified amount of bytes. When the memory
# limit is reached, Redis will try to remove keys according to the selected
# eviction policy (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to "noeviction", Redis will start replying with errors to commands
# that would use more memory, like SET, LPUSH, and so on, while continuing
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the "noeviction" policy).
#
# Warning: if you have slaves attached to an instance with maxmemory set,
# the size of the output buffers needed to feed the slaves is not counted as
# used memory. That way, evicting keys will not trigger a loop where network
# problems / resync events cause evictions, which in turn fill the slave
# output buffers with DELs of evicted keys, triggering the deletion of more
# keys, and so on until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a
# slightly lower maxmemory limit, so that there is some free RAM on the
# system for slave output buffers (this is not needed if the policy is
# "noeviction").
#
# maxmemory <bytes>
# Maxmemory policy: how Redis will select what to remove when maxmemory is
# reached. You can select among the following behaviors:
#
# volatile-lru -> remove keys with an expire set, using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key among those with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't evict anything, just return an error on writes
#
# Note: with any of the above policies, Redis will return an error on write
# operations when there is no suitable key to evict.
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy volatile-lru
# The LRU and minimal-TTL algorithms are not precise but approximated (in
# order to save memory), so you can tune them by choosing a sample size.
# For instance, by default Redis will check three keys and pick the one that
# was used least recently; you can change the sample size using the
# following configuration directive.
#
# maxmemory-samples 3
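The approximated eviction above (sample a few random keys, evict the least recently used among them) can be sketched as follows; `sampled_lru_victim` and the `last_access` map are made up for illustration:

```python
import random

# Approximated LRU: instead of maintaining a global LRU list, sample
# maxmemory-samples random keys and evict the one idle the longest.
def sampled_lru_victim(last_access: dict, samples: int = 3) -> str:
    candidates = random.sample(list(last_access),
                               min(samples, len(last_access)))
    return min(candidates, key=lambda k: last_access[k])  # oldest access

random.seed(0)
last_access = {"a": 100, "b": 50, "c": 75, "d": 10}
# With samples >= number of keys, the true LRU key is always found.
print(sampled_lru_victim(last_access, samples=4))  # 'd'
```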
############################## APPEND ONLY MODE ###############################
# By default Redis asynchronously dumps the dataset to disk. This mode is
# good enough in many applications, but an issue with the Redis process or a
# power outage may result in a window of lost writes (depending on the
# configured save points).
#
# The Append Only File is an alternative persistence mode that provides much
# better durability. For instance, using the default fsync policy (see
# later in this file), Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# goes wrong with the Redis process itself while the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled, on startup Redis will load the AOF file, which has
# the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
appendonly no
# The name of the append-only file (default: "appendonly.aof").
appendfilename "appendonly.aof"
# The fsync() system call tells the operating system to actually write data
# to disk instead of waiting for more data in the output buffer. Some OSes
# will really flush data to disk immediately; some others will just try to
# do it as soon as possible.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append-only log. Slow, safest.
# everysec: fsync only once every second. A compromise.
#
# The default "everysec" is usually the right compromise between speed and
# data safety. It's up to you to decide whether you can relax this to "no"
# for better performance (but if you can tolerate some data loss, consider
# the default snapshot persistence mode instead), or on the contrary use
# "always", which is slower but safer than everysec.
#
# Please check the following article for more details:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".
# appendfsync always
appendfsync everysec
# appendfsync no
# When the AOF fsync policy is set to "always" or "everysec", and a
# background saving process (a background save or AOF log rewrite) is
# performing a lot of disk I/O, in some Linux configurations Redis may block
# too long on the fsync() call. Note that there is no fix for this
# currently, as even performing fsync in a different thread will block our
# synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following
# option, which prevents fsync() from being called in the main process while
# a BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while a child is saving, Redis is in an "unsynced" state.
# In practical terms, this means it is possible to lose up to 30 seconds of
# log in the worst case (with the default Linux settings).
#
# If you have latency problems, set this to "yes". Otherwise leave it as
# "no", which is the safest choice from the point of view of durability.
no-appendfsync-on-rewrite no
# Automatic rewrite of the append-only file.
# Redis is able to automatically rewrite the AOF log file, implicitly
# calling BGREWRITEAOF, when the log grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size
# exceeds the specified percentage, the rewrite is triggered. You also need
# to specify a minimal size for the AOF file to be rewritten; this avoids
# rewriting when the percentage increase is reached but the file is still
# pretty small.
#
# Specify a percentage of zero to disable the automatic AOF rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
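The two directives above combine into a single trigger condition, which can be sketched like this (`should_rewrite_aof` is an illustrative helper, not Redis source):

```python
# BGREWRITEAOF fires once the AOF has grown past the configured percentage
# of its post-rewrite base size AND is above the minimum size threshold.
def should_rewrite_aof(current: int, base: int,
                       percentage: int = 100,
                       min_size: int = 64 * 1024**2) -> bool:
    if percentage == 0:
        return False  # automatic rewrite disabled
    growth = (current - base) * 100 // base
    return current >= min_size and growth >= percentage

MB = 1024**2
print(should_rewrite_aof(130 * MB, 64 * MB))  # True: >100% growth, >64mb
print(should_rewrite_aof(30 * MB, 16 * MB))   # False: below the 64mb floor
```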
################################ LUA SCRIPTING ###############################
# Max execution time of a Lua script in milliseconds. When a script exceeds
# the maximum time limit, Redis will log it and start returning errors.
# At that point only the SCRIPT KILL and SHUTDOWN NOSAVE commands are
# available: the first can kill a script that has not yet called any write
# command; if the script has already issued writes, the only option is to
# kill it with the second command.
lua-time-limit 5000
################################## SLOW LOG ###################################
# The slow log is a system used by Redis to record queries that exceeded a
# specified execution time. Since the slow log is kept only in memory it is
# very efficient, so you don't have to worry about it hurting Redis
# performance.
# Only queries whose execution time is greater than slowlog-log-slower-than
# are considered slow queries and recorded in the slow log.
# The unit is microseconds.
slowlog-log-slower-than 10000
# slowlog-max-len is the maximum number of slow queries retained.
slowlog-max-len 128
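The combined behaviour of the two directives can be sketched with a bounded deque; this is an illustration of the semantics, not Redis internals:

```python
from collections import deque

# Commands slower than the threshold (in microseconds) are appended to a
# fixed-size in-memory log; the oldest entries fall off once
# slowlog-max-len is exceeded.
SLOWER_THAN_US = 10000          # slowlog-log-slower-than 10000
slowlog = deque(maxlen=128)     # slowlog-max-len 128

def record(command: str, duration_us: int) -> None:
    if duration_us > SLOWER_THAN_US:
        slowlog.append((command, duration_us))

record("GET k", 120)       # fast: not logged
record("KEYS *", 250000)   # slow: logged
print(list(slowlog))       # [('KEYS *', 250000)]
```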
############################ EVENT NOTIFICATION ##############################
# This feature allows clients to subscribe, via Pub/Sub, to events about key
# changes in the dataset and about commands executed against it. The feature
# is disabled in the default configuration.
# notify-keyspace-events takes as its argument any combination of the
# following characters, selecting which classes of events the server will
# publish:
# K  Keyspace events, published with __keyspace@<db>__ prefix.
# E  Keyevent events, published with __keyevent@<db>__ prefix.
# g  Generic (non-type-specific) commands such as DEL, EXPIRE, RENAME, ...
# $  String commands
# l  List commands
# s  Set commands
# h  Hash commands
# z  Sorted set commands
# x  Expired events (generated every time an expired key is deleted)
# e  Evicted events (generated when a key is evicted for the maxmemory
#    policy)
# A  Alias for "g$lshzxe"
# The string must contain at least K or E, otherwise no events will be
# delivered regardless of the rest of the string. For details see
# http://redis.io/topics/notifications
notify-keyspace-events ""
############################### ADVANCED CONFIG ###############################
# Hashes with at most hash-max-ziplist-entries entries use the compact
# ziplist encoding; above that limit the regular hash table encoding is
# used.
hash-max-ziplist-entries 512
# Maximum length in bytes a hash field or value may have for the ziplist
# encoding; longer values switch the hash to the hash table encoding.
hash-max-ziplist-value 64
# Lists with at most list-max-ziplist-entries entries use the ziplist
# (compressed list) encoding; larger lists use the linked-list encoding.
list-max-ziplist-entries 512
# Maximum length in bytes a list element may have for the ziplist encoding.
list-max-ziplist-value 64
# Sets composed entirely of integers, with at most set-max-intset-entries
# elements, use the intset encoding; larger sets use the regular set
# encoding.
set-max-intset-entries 512
# Sorted sets with at most zset-max-ziplist-entries entries use the ziplist
# encoding; larger sorted sets use the regular zset encoding.
zset-max-ziplist-entries 128
# Maximum length in bytes a sorted-set member may have for the ziplist
# encoding.
zset-max-ziplist-value 64
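Taking hashes as an example, the encoding choice can be sketched like this; `hash_encoding` is an illustrative helper mirroring the two thresholds, not Redis internals:

```python
# A hash keeps the compact ziplist encoding only while BOTH limits hold:
# few enough entries AND no field/value longer than the byte threshold.
def hash_encoding(fields: dict,
                  max_entries: int = 512, max_value: int = 64) -> str:
    small = len(fields) <= max_entries
    short = all(len(str(x)) <= max_value
                for kv in fields.items() for x in kv)
    return "ziplist" if small and short else "hashtable"

print(hash_encoding({"name": "alice", "age": "30"}))  # ziplist
print(hash_encoding({"blob": "x" * 100}))             # hashtable: value > 64
```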
# HyperLogLog is a probabilistic cardinality estimator: a single key costs
# only about 12 KB of memory while counting the cardinality of sets with
# close to 2^64 distinct elements.
# This directive sets the byte limit for the sparse HyperLogLog
# representation: HLLs at or below this size use the sparse encoding, HLLs
# above it are converted to the dense encoding. The value is usually between
# 0 and 15000; the default is 3000, and values above 16000 are almost
# useless since at that point the dense representation is more memory
# efficient. If CPU is not a concern but space is, raising it to about
# 10000 can be a good choice.
hll-sparse-max-bytes 3000
# Active rehashing uses 1 millisecond of CPU time every 100 milliseconds to
# incrementally rehash the main Redis hash tables, which helps free memory.
# If your use case has very strict latency requirements and it is not
# acceptable for Redis to occasionally add a 2-millisecond delay to replies,
# set this to "no". If you have no such strict requirements, leave it as
# "yes" so that memory can be reclaimed as quickly as possible.
activerehashing yes
# The size of the server's output (i.e. command replies) is usually out of
# its control. A single simple command can generate a huge reply, or replies
# can be generated faster than the client is able to read them; either way
# the server accumulates messages, the output buffer grows and grows, too
# much memory is used, and the system may eventually go down.
# Client output buffer limits are used to force-disconnect clients that are
# not reading data from the server fast enough for some reason.
# For normal clients (including MONITOR clients): the first 0 disables the
# hard limit, and the second 0 together with the third 0 disables the soft
# limit. Normal clients have no limit by default, since they don't receive
# data without asking for it.
client-output-buffer-limit normal 0 0 0
# For slave clients: once the output buffer exceeds 256mb, or stays above
# 64mb for 60 continuous seconds, the server immediately disconnects the
# client.
client-output-buffer-limit slave 256mb 64mb 60
# For pubsub clients: once the output buffer exceeds 32mb, or stays above
# 8mb for 60 continuous seconds, the server immediately disconnects the
# client.
client-output-buffer-limit pubsub 32mb 8mb 60
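The hard/soft limit semantics shared by the three classes above can be sketched as follows (`should_disconnect` is an illustrative helper, not Redis source):

```python
# Hard limit: disconnect as soon as the buffer exceeds it.
# Soft limit: disconnect only if the buffer stays above it for
# soft_seconds continuous seconds. A limit of 0 means "disabled".
def should_disconnect(buffer_bytes: int, over_soft_for: int,
                      hard: int, soft: int, soft_seconds: int) -> bool:
    if hard and buffer_bytes > hard:
        return True
    if soft and buffer_bytes > soft and over_soft_for >= soft_seconds:
        return True
    return False

MB = 1024**2
# slave class: hard 256mb, soft 64mb sustained for 60 seconds
print(should_disconnect(300 * MB, 0, 256 * MB, 64 * MB, 60))   # True (hard)
print(should_disconnect(100 * MB, 10, 256 * MB, 64 * MB, 60))  # False
print(should_disconnect(100 * MB, 61, 256 * MB, 64 * MB, 60))  # True (soft)
```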
# The frequency at which Redis performs its background tasks.
hz 10
# During an AOF rewrite, whether to use an incremental "file sync" strategy.
# The default is "yes", and it should stay "yes".
# During the rewrite, the file is fsync-ed every 32 MB of data generated;
# this reduces the number of large writes to disk caused by a big AOF file.
aof-rewrite-incremental-fsync yes

Copyright notice: this article was created by [Programmer ah Hong]; when reposting, please include a link to the original. Thanks.
https://yzsam.com/2022/164/202206130223055501.html