If you want to run a baseline stress test against the Redis instance you have just set up, to measure its performance and QPS (queries per second), the redis-benchmark tool that ships with Redis is the quickest and most convenient option. Of course, the tool is fairly simple and only covers basic operations and scenarios.

1. Benchmark the Redis read/write separation architecture: single-instance write QPS + single-instance read QPS

redis-3.2.8/src
./redis-benchmark -h 192.168.31.187

-c  Number of parallel connections (default 50)
-n  Total number of requests (default 100000)
-d  Data size of SET/GET value in bytes (default 2)
Set these according to your own peak traffic. If the instantaneous maximum number of users at peak reaches 100,000+, you might use -c 100000, -n 10000000, -d 50.
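Put together, a full invocation along those lines might look like the sketch below. It simply combines the host used above with the peak-traffic numbers just mentioned; -p 6379 is the default port and is shown only for completeness.

# minimal sketch, assuming the same host and the peak-traffic parameters above
./redis-benchmark -h 192.168.31.187 -p 6379 -c 100000 -n 10000000 -d 50
# add -q if you only want the requests-per-second summary for each test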
All the standard benchmarks come out directly in one run. The results below are from a 1-core, 1 GB virtual machine:
====== PING_INLINE ======
  100000 requests completed in 1.28 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
99.78% <= 1 milliseconds
99.93% <= 2 milliseconds
99.97% <= 3 milliseconds
100.00% <= 3 milliseconds
78308.54 requests per second
====== PING_BULK ======
  100000 requests completed in 1.30 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
99.87% <= 1 milliseconds
100.00% <= 1 milliseconds
76804.91 requests per second
====== SET ======
  100000 requests completed in 2.50 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
5.95% <= 1 milliseconds
99.63% <= 2 milliseconds
99.93% <= 3 milliseconds
99.99% <= 4 milliseconds
100.00% <= 4 milliseconds
40032.03 requests per second
====== GET ======
  100000 requests completed in 1.30 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
99.73% <= 1 milliseconds
100.00% <= 2 milliseconds
100.00% <= 2 milliseconds
76628.35 requests per second
====== INCR ======
  100000 requests completed in 1.90 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
80.92% <= 1 milliseconds
99.81% <= 2 milliseconds
99.95% <= 3 milliseconds
99.96% <= 4 milliseconds
99.97% <= 5 milliseconds
100.00% <= 6 milliseconds
52548.61 requests per second
====== LPUSH ======
  100000 requests completed in 2.58 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
3.76% <= 1 milliseconds
99.61% <= 2 milliseconds
99.93% <= 3 milliseconds
100.00% <= 3 milliseconds
38684.72 requests per second
====== RPUSH ======
  100000 requests completed in 2.47 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
6.87% <= 1 milliseconds
99.69% <= 2 milliseconds
99.87% <= 3 milliseconds
99.99% <= 4 milliseconds
100.00% <= 4 milliseconds
40469.45 requests per second
====== LPOP ======
  100000 requests completed in 2.26 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
28.39% <= 1 milliseconds
99.83% <= 2 milliseconds
100.00% <= 2 milliseconds
44306.60 requests per second
====== RPOP ======
  100000 requests completed in 2.18 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
36.08% <= 1 milliseconds
99.75% <= 2 milliseconds
100.00% <= 2 milliseconds
45871.56 requests per second
====== SADD ======
  100000 requests completed in 1.23 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
99.94% <= 1 milliseconds
100.00% <= 2 milliseconds
100.00% <= 2 milliseconds
81168.83 requests per second
====== SPOP ======
  100000 requests completed in 1.28 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
99.80% <= 1 milliseconds
99.96% <= 2 milliseconds
99.96% <= 3 milliseconds
99.97% <= 5 milliseconds
100.00% <= 5 milliseconds
78369.91 requests per second
====== LPUSH (needed to benchmark LRANGE) ======
  100000 requests completed in 2.47 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
15.29% <= 1 milliseconds
99.64% <= 2 milliseconds
99.94% <= 3 milliseconds
100.00% <= 3 milliseconds
40420.37 requests per second
====== LRANGE_100 (first 100 elements) ======
  100000 requests completed in 3.69 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
30.86% <= 1 milliseconds
96.99% <= 2 milliseconds
99.94% <= 3 milliseconds
99.99% <= 4 milliseconds
100.00% <= 4 milliseconds
27085.59 requests per second
====== LRANGE_300 (first 300 elements) ======
  100000 requests completed in 10.22 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
0.03% <= 1 milliseconds
5.90% <= 2 milliseconds
90.68% <= 3 milliseconds
95.46% <= 4 milliseconds
97.67% <= 5 milliseconds
99.12% <= 6 milliseconds
99.98% <= 7 milliseconds
100.00% <= 7 milliseconds
9784.74 requests per second
====== LRANGE_500 (first 450 elements) ======
  100000 requests completed in 14.71 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
0.00% <= 1 milliseconds
0.07% <= 2 milliseconds
1.59% <= 3 milliseconds
89.26% <= 4 milliseconds
97.90% <= 5 milliseconds
99.24% <= 6 milliseconds
99.73% <= 7 milliseconds
99.89% <= 8 milliseconds
99.96% <= 9 milliseconds
99.99% <= 10 milliseconds
100.00% <= 10 milliseconds
6799.48 requests per second
====== LRANGE_600 (first 600 elements) ======
  100000 requests completed in 18.56 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
0.00% <= 2 milliseconds
0.23% <= 3 milliseconds
1.75% <= 4 milliseconds
91.17% <= 5 milliseconds
98.16% <= 6 milliseconds
99.04% <= 7 milliseconds
99.83% <= 8 milliseconds
99.95% <= 9 milliseconds
99.98% <= 10 milliseconds
100.00% <= 10 milliseconds
5387.35 requests per second
====== MSET (10 keys) ======
  100000 requests completed in 4.02 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
0.01% <= 1 milliseconds
53.22% <= 2 milliseconds
99.12% <= 3 milliseconds
99.55% <= 4 milliseconds
99.70% <= 5 milliseconds
99.90% <= 6 milliseconds
99.95% <= 7 milliseconds
100.00% <= 8 milliseconds
24869.44 requests per second
That covers the first part of this read/write separation topic: the single-instance baseline.

In most cases the numbers depend on your server's hardware and configuration: the more powerful the machine and the higher the spec, the better the results. On a strong box a single instance can reach well over 100,000, even 200,000 QPS. In many companies, however, you are handed fairly low-spec servers, and the complexity of your operations matters too. Large companies (JD, Tencent, the BAT companies, Xiaomi, Meituan and others) usually provide a unified internal cloud platform, so you typically get low-spec virtual machines. For a specific project you might get a dedicated cluster, say 4 cores and 4 GB of RAM per node, running fairly complex operations on fairly large data; in that situation a few tens of thousands of QPS per instance is about what you can expect.

The high concurrency Redis provides is at least in the tens of thousands of QPS per instance without any problem, and ranges from tens of thousands up to 100,000-200,000 depending on the setup. The actual QPS differs from company to company and from server to server, so test it yourself, and keep in mind that it will also differ from production: in production there are large numbers of network calls, the network itself has overhead, and your real Redis throughput will not necessarily be that high.

There are two QPS killers. The first is complex operations such as LRANGE, of which there are quite a few. The second is large values: the benchmark default is only 2 bytes per value, but when I previously used Redis as a large-scale cache for product detail pages, we had to concatenate large chunks of data into a single JSON string, and each value could easily be several KB rather than a few bytes.
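To get a feel for how these two killers behave on your own hardware, redis-benchmark lets you change the value size with -d and restrict the run to specific tests with -t. A minimal sketch, assuming the same host as above; the 3072-byte value size is just an illustrative stand-in for a product-detail JSON of a few KB:

# benchmark SET/GET with ~3 KB values instead of the 2-byte default
./redis-benchmark -h 192.168.31.187 -t set,get -d 3072 -c 50 -n 100000 -q

# run only the LRANGE tests to see the cost of complex range reads
./redis-benchmark -h 192.168.31.187 -t lrange -c 50 -n 100000 -q

Comparing these numbers with the defaults above shows how quickly large values and range operations pull the QPS down.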
2. Horizontally scale out Redis read nodes to increase read throughput

Following what was covered in the previous lesson, set up additional Redis slave nodes on other servers. A single slave node can handle roughly 50,000 read QPS, so with two slave nodes and all read requests spread across those two machines, the cluster as a whole can sustain 100,000+ read QPS, as sketched below.
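One way to sanity-check the read capacity of each new slave is to point redis-benchmark at the slave's address and run only the read test. A minimal sketch; the two slave IPs here (192.168.31.188 and 192.168.31.189) are hypothetical placeholders, not addresses from the setup above:

# measure read-only QPS on the first slave (GET is allowed on a read-only replica)
./redis-benchmark -h 192.168.31.188 -p 6379 -t get -c 50 -n 100000 -d 50 -q

# repeat against the second slave; with all reads routed to the slaves,
# the sum of the two results approximates the read QPS the architecture can serve
./redis-benchmark -h 192.168.31.189 -p 6379 -t get -c 50 -n 100000 -d 50 -q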