By default Redis does not run as a daemon. Use 'yes' if you need it.
Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
Redis does not run as a daemon by default; change this directive to yes to enable daemon mode. (A daemon is a program that runs in the background on UNIX and other multitasking operating systems and is not under the direct control of the user.)
daemonize no
When running daemonized, Redis writes a pid file in /var/run/redis.pid by
default. You can specify a custom pid file location here.
When Redis runs as a daemon it writes its pid to /var/run/redis.pid by default; you can point it at another location here. When running multiple Redis instances, each one needs its own pid file and port.
pidfile /var/run/redis.pid
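For illustration, a minimal sketch of two instances on one machine, each with its own port and pid file (the paths and the second port are placeholders, not defaults):

# first instance (placeholder path)
port 6379
pidfile /var/run/redis-6379.pid

# second instance (placeholder path)
port 6380
pidfile /var/run/redis-6380.pid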
Accept connections on the specified port, default is 6379.
If port 0 is specified Redis will not listen on a TCP socket.
The port to accept connections on; 6379 is the default.
port 6379
If you want you can bind a single interface, if the bind option is not
specified all the interfaces will listen for incoming connections.
The IP address Redis accepts requests on; if left unset, Redis listens on all interfaces. Setting it explicitly is recommended in production.
bind 127.0.0.1
Close the connection after a client is idle for N seconds (0 to disable)
Close a client connection after it has been idle for this many seconds; 0 disables the timeout.
timeout 0
Specify the log file name. Also 'stdout' can be used to force
Redis to log on the standard output. Note that if you use standard
output for logging but daemonize, logs will be sent to /dev/null
Path of the log file; by default Redis logs to the terminal (standard output).
logfile stdout
Set the number of databases. The default database is DB 0, you can select
a different one on a per-connection basis using SELECT <dbid> where
dbid is a number between 0 and 'databases'-1
Number of databases. Use SELECT <dbid> to switch between them; DB 0 is used by default.
databases 16
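For illustration, a quick way to see the separate databases from the shell; redis-cli's -n option selects a database for that connection, the same effect as issuing SELECT (the key name below is hypothetical):

redis-cli SET greeting hello         # written into DB 0, the default database
redis-cli -n 1 GET greeting          # DB 1 does not see keys from DB 0: returns nil
redis-cli -n 1 SET greeting bonjour  # the same key name can hold a different value per database
redis-cli GET greeting               # DB 0 still returns "hello"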
Save the DB on disk:
save <seconds> <changes>
Will save the DB if both the given number of seconds and the given
number of write operations against the DB occurred.
In the example below the behaviour will be to save:
after 900 sec (15 min) if at least 1 key changed
after 300 sec (5 min) if at least 10 keys changed
after 60 sec if at least 10000 keys changed
Note: you can disable saving entirely by commenting out all the "save" lines.
How often Redis snapshots the dataset to disk:
after 900 seconds if at least 1 key changed
after 300 seconds if at least 10 keys changed
after 60 seconds if at least 10000 keys changed
save 900 1
save 300 10
save 60 10000
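As a sketch, a write-heavy cache might relax the schedule or drop snapshotting entirely; the thresholds below are illustrative, not recommendations:

# snapshot less aggressively (illustrative values)
save 3600 1
save 300 10000

# or comment out every save line to disable RDB snapshots altogether
# save 900 1
# save 300 10
# save 60 10000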
Compress string objects using LZF when dumping .rdb databases?
By default this is set to 'yes', as it's almost always a win.
If you want to save some CPU in the saving child set it to 'no' but
the dataset will likely be bigger if you have compressible values or keys.
Whether to compress the data when writing the snapshot.
rdbcompression yes
The filename where to dump the DB
File name of the snapshot.
dbfilename dump.rdb
The working directory.
The DB will be written inside this directory, with the filename specified
above using the 'dbfilename' configuration directive.
Also the Append Only File will be created inside this directory.
Note that you must specify a directory here, not a file name.
Directory the snapshot is written into. The path and file name are configured separately because Redis first writes the dump to a temporary file and, once the dump is complete, renames it to the file name configured above;
both the temporary file and the final dump live in this directory.
dir ./
Master-Slave replication. Use slaveof to make a Redis instance a copy of
another Redis server. Note that the configuration is local to the slave
so for example it is possible to configure the slave to save the DB with a
different interval, or to listen to another port, and so on.
Makes this instance a replica (slave) of another Redis server.
slaveof <masterip> <masterport>
If the master is password protected (using the "requirepass" configuration
directive below) it is possible to tell the slave to authenticate before
starting the replication synchronization process, otherwise the master will
refuse the slave request.
Password used to authenticate with the master when connecting.
masterauth <master-password>
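A minimal replication sketch for the slave side; the master address and password below are placeholders:

# on the slave (placeholder address and password)
slaveof 192.168.1.10 6379
masterauth my-master-password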
Require clients to issue AUTH <PASSWORD> before processing any other
commands. This might be useful in environments in which you do not trust
others with access to the host running redis-server.
This should stay commented out for backward compatibility and because most
people do not need auth (e.g. they run their own servers).
Warning: since Redis is pretty fast an outside user can try up to
150k passwords per second against a good box. This means that you should
use a very strong password otherwise it will be very easy to break.
Password clients must supply (with AUTH) before any other command is processed.
Warning: Redis is fast enough that an outside attacker can try around 150k passwords per second against a good box, so use a very strong password or it will be easy to brute-force.
requirepass foobared
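For illustration, how a client authenticates once requirepass is set; the password is just the example value above and the key name is hypothetical:

redis-cli GET mykey               # refused with an authentication error until the password is supplied
redis-cli -a foobared GET mykey   # -a sends the password, the same as issuing AUTH foobared first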
Set the max number of connected clients at the same time. By default there
is no limit, and it's up to the number of file descriptors the Redis process
is able to open. The special value '0' means no limits.
Once the limit is reached Redis will close all the new connections sending
an error 'max number of clients reached'.
Limits the number of simultaneously connected clients. Once the limit is reached, Redis stops accepting new connections and clients that try to connect receive an error.
maxclients 128
Don't use more memory than the specified amount of bytes.
When the memory limit is reached Redis will try to remove keys
according to the eviction policy selected (see maxmemory-policy).
If Redis can't remove keys according to the policy, or if the policy is
set to 'noeviction', Redis will start to reply with errors to commands
that would use more memory, like SET, LPUSH, and so on, and will continue
to reply to read-only commands like GET.
This option is usually useful when using Redis as an LRU cache, or to set
a hard memory limit for an instance (using the 'noeviction' policy).
WARNING: If you have slaves attached to an instance with maxmemory on,
the size of the output buffers needed to feed the slaves are subtracted
from the used memory count, so that network problems / resyncs will
not trigger a loop where keys are evicted, and in turn the output
buffer of slaves is full with DELs of keys evicted triggering the deletion
of more keys, and so forth until the database is completely emptied.
In short… if you have slaves attached it is suggested that you set a lower
limit for maxmemory so that there is some free RAM on the system for slave
output buffers (but this is not needed if the policy is 'noeviction').
Maximum amount of memory Redis may use. Once the limit is reached and another write (e.g. SET) arrives, Redis first tries to evict keys that have an expire set, even if their expiry has not been reached yet,
starting with the keys closest to expiring. If every key with an expire has already been removed and memory is still full, the write is answered with an error:
Redis stops accepting write requests and keeps serving reads (GET). maxmemory is mostly useful when Redis is used as a memcached-style cache.
maxmemory <bytes>
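As a sketch, capping the instance at roughly 1 GB; the value is given in bytes and is purely illustrative, and maxmemory-policy (referenced above) selects what gets evicted when the limit is hit:

maxmemory 1073741824            # 1024 * 1024 * 1024 bytes = 1 GB
maxmemory-policy allkeys-lru    # evict any key, approximating LRU, once the limit is reached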
By default Redis asynchronously dumps the dataset on disk. If you can live
with the idea that the latest records will be lost if something like a crash
happens this is the preferred way to run Redis. If instead you care a lot
about your data and don't want a single record to be lost, you should
enable the append only mode: when this mode is enabled Redis will append
every write operation received in the file appendonly.aof. This file will
be read on startup in order to rebuild the full dataset in memory.
Note that you can have both the async dumps and the append only file if you
like (you have to comment the "save" statements above to disable the dumps).
Still if append only mode is enabled Redis will load the data from the
log file at startup ignoring the dump.rdb file.
IMPORTANT: Check the BGREWRITEAOF command to see how to rewrite the append-only
log file in the background when it gets too big.
By default Redis asynchronously dumps the dataset to disk in the background, but a dump is expensive and cannot run very often, so an event like a power cut or a pulled plug can lose a fairly wide window of data.
Redis therefore offers a more granular persistence and disaster-recovery mechanism.
With append only mode enabled, Redis appends every write operation it receives to the appendonly.aof file, and on restart it replays that file to rebuild the previous state.
Because appendonly.aof keeps growing, Redis also provides the BGREWRITEAOF command to rewrite and compact it.
appendonly no
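For illustration, an AOF rewrite can be triggered on demand from redis-cli once the log has grown large (when to trigger it is up to you or a scheduled job):

redis-cli BGREWRITEAOF    # rewrite appendonly.aof in the background
redis-cli INFO            # reports whether a background rewrite is still in progress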
The fsync() call tells the Operating System to actually write data on disk
instead of waiting for more data in the output buffer. Some OS will really flush
data on disk, some other OS will just try to do it ASAP.
Redis supports three different modes:
no: don't fsync, just let the OS flush the data when it wants. Faster.
always: fsync after every write to the append only log. Slow, Safest.
everysec: fsync only if one second passed since the last fsync. Compromise.
The default is "everysec" that's usually the right compromise between
speed and data safety. It's up to you to understand if you can relax this to
"no" that will will let the operating system flush the output buffer when
it wants, for better performances (but if you can live with the idea of
some data loss consider the default persistence mode that's snapshotting),
or on the contrary, use "always" that's very slow but a bit safer than
everysec.
If unsure, use "everysec".
How often the appendonly.aof file is fsynced: always syncs after every write, everysec accumulates writes and syncs once per second, no leaves it to the operating system.
appendfsync always
appendfsync everysec
appendfsync no
Virtual Memory allows Redis to work with datasets bigger than the actual
amount of RAM needed to hold the whole dataset in memory.
In order to do so, frequently used keys are kept in memory while the other keys
are swapped into a swap file, similarly to what operating systems do
with memory pages.
To enable VM just set 'vm-enabled' to yes, and set the following three
VM parameters accordingly to your needs.
Whether to enable virtual memory support. Redis is an in-memory database and cannot accept new writes once memory is full, so Redis 2.0 added virtual memory support.
Note that all keys always stay in memory; when memory runs short, only values are swapped out to the swap file.
This keeps performance largely unaffected even with VM enabled, but make sure vm-max-memory is set high enough to hold all of your keys.
vm-enabled no
vm-enabled yes
This is the path of the Redis swap file. As you can guess, swap files
can't be shared by different Redis instances, so make sure to use a swap
file for every redis process you are running. Redis will complain if the
swap file is already in use.
The best kind of storage for the Redis swap file (that's accessed at random)
is a Solid State Disk (SSD).
*** WARNING *** if you are using a shared hosting the default of putting
the swap file under /tmp is not secure. Create a dir with access granted
only to Redis user and configure Redis to create the swap file there.
Path of the virtual memory swap file.
vm-swap-file /tmp/redis.swap
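A minimal sketch of moving the swap file out of /tmp, as the warning above suggests; the directory is a placeholder and should be accessible only to the redis user:

vm-swap-file /var/lib/redis/redis.swap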
vm-max-memory configures the VM to use at max the specified amount of
RAM. Everything that does not fit will be swapped on disk if possible, that
is, if there is still enough contiguous space in the swap file.
With vm-max-memory 0 the system will swap everything it can. Not a good
default, just specify the max amount of RAM you can in bytes, but it's
better to leave some margin. For instance specify an amount of RAM
that's more or less between 60 and 80% of your free RAM.
Maximum amount of physical memory Redis will use once virtual memory is enabled. With the default of 0, Redis swaps out everything it can so as to use as little physical memory as possible.
In production, set this according to your actual workload; it is best not to leave it at the default of 0.
vm-max-memory 0
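As an illustrative sketch for a box with about 1 GB of free RAM, following the 60-80% guideline above (the figure is an assumption, not a recommendation):

vm-max-memory 734003200    # ~700 MB, roughly 70% of 1 GB of free RAM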
The Redis swap file is split into pages. An object can be saved using multiple
contiguous pages, but pages can't be shared between different objects.
So if your page is too big, small objects swapped out on disk will waste
a lot of space. If your page is too small, there is less space in the swap
file (assuming you configured the same number of total swap file pages).
If you use a lot of small objects, use a page size of 64 or 32 bytes.
If you use a lot of big objects, use a bigger page size.
If unsure, use the default
Page size used by virtual memory. If your values are large, for example whole blog posts or news articles, use a larger page size; if you mostly store small values, use a smaller one.
vm-page-size 32
Number of total memory pages in the swap file.
Given that the page table (a bitmap of free/used pages) is taken in memory,
every 8 pages on disk will consume 1 byte of RAM.
The total swap size is vm-page-size * vm-pages
With the default of 32-byte pages and 134217728 pages Redis will
use a 4 GB swap file, that will use 16 MB of RAM for the page table.
It's better to use the smallest acceptable value for your application,
but the default is large in order to work in most conditions.
Total number of pages in the swap file. Note that the page table lives in physical memory: every 8 pages on disk cost 1 byte of RAM.
Total swap size = vm-page-size * vm-pages
vm-pages 134217728
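As a worked example under assumed sizes (not the defaults): a 1 GB swap file with 32-byte pages needs 1073741824 / 32 = 33554432 pages, and its page table costs 33554432 / 8 bytes = 4 MB of RAM:

vm-page-size 32
vm-pages 33554432    # 32 bytes * 33554432 pages = 1 GB swap, ~4 MB of RAM for the page table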
Max number of VM I/O threads running at the same time.
These threads are used to read/write data from/to the swap file; since they
also encode and decode objects from disk to memory or the reverse, a bigger
number of threads can help with big objects even if they can't help with
I/O itself, as the physical device may not be able to cope with many
read/write operations at the same time.
The special value of 0 turns off threaded I/O and enables the blocking
Virtual Memory implementation.
Number of threads used simultaneously for VM I/O.
vm-max-threads 4
Hashes are encoded in a special way (much more memory efficient) when they
have at max a given number of elements, and the biggest element does not
exceed a given threshold. You can configure these limits with the following
configuration directives.
Redis 2.0 introduced the hash data type.
As long as a hash holds no more than the configured number of entries and no entry exceeds the configured value size, it is stored as a zipmap (also called a "small hash", which greatly reduces memory usage); the two thresholds are configured here.
hash-max-zipmap-entries 512
hash-max-zipmap-value 64
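For illustration, with the limits above a small hash keeps the compact encoding, while crossing either threshold converts it to a regular hash table (the key and field names are hypothetical):

redis-cli HSET user:1000 name alice    # few short fields: stored as a zipmap
redis-cli HSET user:1000 bio "a description longer than 64 characters forces the plain hash encoding"    # value exceeds hash-max-zipmap-value, so the whole hash is converted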
Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
order to help rehashing the main Redis hash table (the one mapping top-level
keys to values). The hash table implementation redis uses (see dict.c)
performs a lazy rehashing: the more operations you run against a hash table
that is rehashing, the more rehashing "steps" are performed, so if the
server is idle the rehashing never completes and some more memory is used
by the hash table.
The default is to use this millisecond 10 times every second in order to
actively rehash the main dictionaries, freeing memory when possible.
If unsure:
use "activerehashing no" if you have hard latency requirements and it is
not a good thing in your environment that Redis can reply from time to time
to queries with a 2 millisecond delay.
use "activerehashing yes" if you don't have such hard requirements but
want to free memory asap when possible.
When enabled, Redis spends 1 millisecond out of every 100 milliseconds of CPU time rehashing its main hash table, which reduces memory usage.
If your workload has strict latency requirements and the occasional 2 millisecond delay on a request is unacceptable, set this to no.
If you have no such hard requirement, leave it at yes so memory is freed as soon as possible.
activerehashing yes