- Common Redis Clustering Techniques
For a long time, Redis itself supported only single-instance deployments, typically capped at 10 to 20GB of memory. That cannot sustain large online business systems, and it also leaves resources badly underused: servers nowadays routinely ship with 100~200GB of RAM.
To overcome the capacity limits of a single node, the major internet companies rolled out their own "self-service" clustering mechanisms. These unofficial solutions physically shard the data across multiple Redis instances, usually with one Redis instance per shard.
Counting the recently released official Redis Cluster, there are three ways to implement a Redis cluster. Each is described below, in the hope that it helps with technology selection.
1.1 Client-side sharding
This approach puts the sharding work in the application itself: the code routes requests directly to multiple Redis instances according to preconfigured routing rules. The upside is that there is no dependency on third-party distributed middleware; the implementation and code stay fully under your control, can be adjusted at any time, and hold no hidden traps.
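As a minimal sketch of the idea (the key, instance addresses, and hash choice below are all hypothetical), the routing rule boils down to hashing the key and picking an instance:

# hash the key, pick a shard, talk to that Redis instance directly
KEY="user:1001"
SHARDS=("127.0.0.1:6379" "127.0.0.1:6380" "127.0.0.1:6381")
IDX=$(( $(printf '%s' "$KEY" | cksum | awk '{print $1}') % ${#SHARDS[@]} ))
redis-cli -h "${SHARDS[$IDX]%%:*}" -p "${SHARDS[$IDX]##*:}" SET "$KEY" "some-value"

The hard-coded shard list is exactly what makes this static: changing the number of instances changes the modulus and remaps keys.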
This is essentially static sharding. Whenever Redis instances are added or removed, the sharding code must be adjusted by hand. Open-source products based on this mechanism are still rare.
Performance is better than the proxy approach (one less hop in the middle). The downsides are painful upgrades and a strong dependence on individual developers: it takes solid programming skill to maintain, and if the lead developer leaves, the successor may well choose to rewrite it from scratch.
Operability therefore suffers. When a failure occurs, locating and fixing it requires developers and operations working together, which lengthens the outage.
This approach is hard to standardize operationally, so it is a poor fit for small and mid-sized companies (unless they have sufficient DevOps capability).
1.2 Proxy-based sharding
This approach delegates the sharding work to a dedicated proxy. The proxy receives data requests from the application, dispatches them to the correct Redis instances according to the routing rules, and returns the results to the application.
Under this mechanism one usually adopts a third-party proxy rather than building one in-house; since it fronts multiple Redis instances, this kind of program is also called distributed middleware.
The benefit is that the application does not need to know about the backend Redis instances, and operations become easier. There is some performance cost, but for an in-memory workload like Redis it is generally tolerable.
This is the clustering approach we recommend. Twemproxy, an open-source product built on this mechanism, is a representative example and is very widely used.
1.3 Redis Cluster
Under this mechanism there is no central node (the key difference from the proxy model), and everything good and bad about it flows from that fact.
Redis Cluster maps all keys onto 16384 slots, with each Redis instance in the cluster owning a subset. The application operates through an integrated Redis Cluster client. The client may send a request to any instance; if the data is not there, that instance redirects the client to read or write on the instance that owns it.
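For example, any node can report which slot a key hashes to, and redis-cli in cluster mode (-c) follows the MOVED/ASK redirections automatically (port 7000 is illustrative):

redis-cli -c -p 7000 CLUSTER KEYSLOT user:1001   # CRC16(key) mod 16384
redis-cli -c -p 7000 SET user:1001 some-value    # redirected to the owning node if needed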
Redis Cluster membership (node names, IPs, ports, state, roles) is maintained by pairwise communication between nodes, exchanged and refreshed periodically.
Clearly, then, this is a rather "heavy" solution; it is no longer the "simple, dependable" single-instance Redis. That may be one reason it was released only recently, after years of delay.
This recalls a bit of history. Because Memcache did not support persistence, someone wrote Membase, later renamed Couchbase, which claimed to support Auto Rebalance; years later, few companies use it.
It is a worrying design. To solve quorum and other cluster-management problems, Oracle RAC goes as far as using a region of shared storage, whereas Redis Cluster is completely decentralized...
- Codis in Practice
Codis was open-sourced by Wandoujia in November 2014. Built with Go and C, it is one of the excellent open-source projects to emerge recently from Chinese developers, and it is now used widely across Wandoujia's Redis business scenarios (confirmed by @劉奇 of Wandoujia).
Three months of assorted stress tests show stability that meets the demands of efficient operations. Performance has also improved greatly: at first Codis was about 20% slower than Twemproxy; it is now nearly 100% faster (conditions: multiple instances, typical value lengths).
3.1 Architecture
Codis introduces the concept of a Group: each Group contains one Redis master and at least one Redis slave, which is one of the differences from Twemproxy. The benefit is that when the current master fails, an operator can switch over to a slave from the Dashboard in self-service fashion, without having to gingerly edit application config files.
To support live data migration (Auto Rebalance), the authors modified the Redis server source and named the result Codis Server.
Codis uses pre-sharding: the key space is partitioned up front into 1024 slots (which means it can support at most 1024 Codis Servers on the backend), and the routing information is kept in ZooKeeper. ZooKeeper also maintains the Codis Server Group metadata and provides services such as distributed locks.
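Conceptually, a key's slot is just a hash modulo 1024. A rough sketch follows (cksum's CRC is not the exact CRC32 variant Codis uses, so treat this as an approximation of the idea, not the real mapping):

KEY="user:1001"
SLOT=$(( $(printf '%s' "$KEY" | cksum | awk '{print $1}') % 1024 ))
echo "key $KEY maps to slot $SLOT"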
Codis consists of four components:
Codis Proxy (codis-proxy)
codis-proxy is the Redis proxy service that clients connect to. It implements the Redis protocol itself and behaves just like a native Redis (as Twemproxy does). A single product can deploy multiple codis-proxy instances; codis-proxy itself is stateless.
Codis Manager (codis-config)
codis-config is Codis's management tool. It supports operations such as adding/removing Redis nodes, adding/removing proxy nodes, and initiating data migration. codis-config also ships with a built-in HTTP server that starts a dashboard, so users can watch the running state of the Codis cluster directly in a browser.
Codis Redis (codis-server)
codis-server is the Redis fork maintained by the Codis project, based on 2.8.13, with slot support and atomic data-migration commands added. The codis-proxy and codis-config layers above it can only operate correctly against this version of Redis.
ZooKeeper
Codis relies on ZooKeeper to store the data routing table and the metadata of codis-proxy nodes; commands issued by codis-config are synchronized through ZooKeeper to every live codis-proxy.
If you want a simpler access path and higher availability, you can put HAProxy in front, at the cost of additional latency.
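A minimal sketch of such a front end, assuming HAProxy is installed; the codis-proxy addresses below are illustrative, and a production haproxy.cfg also needs global/defaults sections:

cat > /etc/haproxy/haproxy.cfg <<'EOF'
# TCP-mode round-robin across two codis-proxy instances (sketch only)
listen codis
    bind 0.0.0.0:19100
    mode tcp
    balance roundrobin
    server proxy1 192.168.100.201:19000 check
    server proxy2 192.168.100.202:19000 check
EOF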
Lab environment:
one CentOS 7.3 host
Software: go1.7rc3.linux-amd64.tar.gz, codis-release3.2, zookeeper-3.4.6.tar.gz, jdk-8u131-linux-x64.tar.gz
Step 1: install Go (Codis is written in Go).
codis-proxy behaves like Redis: connecting to codis-proxy is no different from connecting to Redis itself. The proxy is stateless and does not record where data is stored; that routing information lives in ZooKeeper, which codis-proxy consults to locate a key. The proxy then forwards the request to a group for processing, where a group consists of one master and one or more slaves. Codis uses 1024 slots by default (Redis Cluster uses 16384), and different slots are assigned to different groups.
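Because the proxy speaks the Redis protocol, any standard client works unchanged; once the cluster built below is online, plain redis-cli can simply target the proxy port (19000 in this lab) instead of a Redis instance:

redis-cli -h 192.168.100.201 -p 19000 PING
redis-cli -h 192.168.100.201 -p 19000 SET foo bar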
Deployment environment: three servers (in this lab, a single host stands in for all three, running the ZooKeeper instances as a pseudo-cluster).
[root@localhost ~]#yum install -y gcc glibc gcc-c++ make git #install the required dependencies
[root@localhost ~]#tar xvf go1.7rc3.linux-amd64.tar.gz
[root@localhost ~]#mv go1.7rc3.linux-amd64.tar.gz /root/
[root@localhost ~]#mv go/ /usr/local/
[root@localhost ~]#mkdir /usr/local/go/work
[root@localhost ~]#vim /etc/profile.d/env.sh
export GOROOT=/usr/local/go
export GOPATH=/usr/local/go/work
export PATH=$PATH:$HOME/bin:$GOROOT/bin:$GOPATH/bin
[root@localhost ~]#. /etc/profile.d/env.sh #load the environment variables
[root@localhost ~]# echo $GOPATH
/usr/local/go/work
[root@localhost ~]# go version
go version go1.7rc3 linux/amd64
The Go environment for Codis is now in place.
On every server, install the Java runtime (ZooKeeper is written in Java) and ZooKeeper; a ZooKeeper ensemble needs at least three servers, with five recommended:
[root@localhost ~]#tar zxf jdk-8u131-linux-x64.tar.gz
[root@localhost ~]#mv jdk1.8.0_131/ /usr/local/
[root@localhost ~]#vim /etc/profile.d/jdk.sh
export JAVA_HOME=/usr/local/jdk1.8.0_131
export PATH=$JAVA_HOME/bin:$PATH
[root@localhost ~]#. /etc/profile.d/jdk.sh
[root@localhost ~]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
[root@localhost ~]# echo $JAVA_HOME
/usr/local/jdk1.8.0_131
[root@localhost ~]# tar zxf zookeeper-3.4.6.tar.gz
[root@localhost ~]# mv zookeeper-3.4.6 /usr/local/
[root@localhost ~]#ln -sv /usr/local/zookeeper-3.4.6/ /usr/local/zookeeper
‘/usr/local/zookeeper’ -> ‘/usr/local/zookeeper-3.4.6/’
[root@localhost ~]#cd /opt/
[root@localhost opt]#mkdir zk1 zk2 zk3
#prepare the ZooKeeper server IDs; each server's ID must be unique
[root@localhost opt]# ll
total 20
drwxr-xr-x. 3 root root 78 Sep 17 22:21 zk1
drwxr-xr-x. 3 root root 78 Sep 17 22:23 zk2
drwxr-xr-x. 3 root root 78 Sep 17 22:24 zk3
[root@localhost opt]#echo 1 > zk1/myid
[root@localhost opt]#echo 2 > zk2/myid
[root@localhost opt]#echo 3 > zk3/myid
[root@localhost opt]#cp /usr/local/zookeeper-3.4.6/conf/zoo_sample.cfg /opt/zk1/zk1.cfg
[root@localhost opt]#cp /usr/local/zookeeper-3.4.6/conf/zoo_sample.cfg /opt/zk2/zk2.cfg
[root@localhost opt]#cp /usr/local/zookeeper-3.4.6/conf/zoo_sample.cfg /opt/zk3/zk3.cfg
Step 2:
Configure the config files for the three ZooKeeper instances; node 1 is shown below:
[root@localhost opt]#cat /opt/zk1/zk1.cfg
# The number of milliseconds of each tick
tickTime=6000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=10
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zk1
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
server.1=192.168.100.201:2887:3887
server.2=192.168.100.201:2888:3888
server.3=192.168.100.201:2889:3889
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
Node 2:
[root@localhost opt]#cat /opt/zk2/zk2.cfg
# The number of milliseconds of each tick
tickTime=6000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=10
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zk2
# the port at which the clients will connect
clientPort=2182
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
server.1=192.168.100.201:2887:3887
server.2=192.168.100.201:2888:3888
server.3=192.168.100.201:2889:3889
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
Node 3:
[root@localhost opt]#cat /opt/zk3/zk3.cfg
# The number of milliseconds of each tick
tickTime=6000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=10
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zk3
# the port at which the clients will connect
clientPort=2183
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
server.1=192.168.100.201:2887:3887
server.2=192.168.100.201:2888:3888
server.3=192.168.100.201:2889:3889
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
Parameter notes:
tickTime: the heartbeat interval, in milliseconds, maintained between ZooKeeper servers and between clients and servers; one heartbeat is sent every tickTime.
dataDir: the directory where ZooKeeper stores its snapshots; by default the transaction log is written here as well.
clientPort: the port on which ZooKeeper listens for and accepts client connections.
initLimit: how many tickTime intervals ZooKeeper allows for an initial connection. The "clients" here are not end-user clients but the Follower servers in the ensemble connecting to the Leader. If no response has arrived after initLimit heartbeats, the connection attempt is considered failed. With the values above, that is 10*6000 ms = 60 seconds in total.
syncLimit: the maximum number of tickTime intervals a request/response exchange between the Leader and a Follower may take; with the values above, that is 10*6000 ms = 60 seconds in total.
server.A=B:C:D: A is a number identifying the server; B is its IP address; C is the port this server uses to exchange information with the ensemble's Leader; D is the port used to run a new leader election if the Leader dies. In a pseudo-cluster configuration B is identical for every entry, so each ZooKeeper instance must be assigned distinct port numbers.
[root@localhost ~]# /usr/local/zookeeper/bin/zkServer.sh start /opt/zk1/zk1.cfg
[root@localhost ~]# /usr/local/zookeeper/bin/zkServer.sh start /opt/zk2/zk2.cfg
[root@localhost ~]# /usr/local/zookeeper/bin/zkServer.sh start /opt/zk3/zk3.cfg
[root@localhost opt]# ss -tnlp | grep 218
LISTEN 0 50 :::2181 :::* users:(("java",pid=2267,fd=23))
LISTEN 0 50 :::2182 :::* users:(("java",pid=2552,fd=23))
LISTEN 0 50 :::2183 :::* users:(("java",pid=2594,fd=23))
Check each ZooKeeper instance's status:
[root@localhost ~]# /usr/local/zookeeper/bin/zkServer.sh status /opt/zk1/zk1.cfg
JMX enabled by default
Using config: /opt/zk1/zk1.cfg
Mode: follower
[root@localhost ~]# /usr/local/zookeeper/bin/zkServer.sh status /opt/zk2/zk2.cfg
JMX enabled by default
Using config: /opt/zk2/zk2.cfg
Mode: leader
[root@localhost ~]# /usr/local/zookeeper/bin/zkServer.sh status /opt/zk3/zk3.cfg
JMX enabled by default
Using config: /opt/zk3/zk3.cfg
Mode: follower
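Besides zkServer.sh status, ZooKeeper's four-letter commands offer a quick liveness check; a small sketch, assuming nc (netcat) is installed:

# each healthy instance should answer "imok"
for p in 2181 2182 2183; do
    echo "port $p: $(echo ruok | nc 127.0.0.1 $p)"
done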
Step 3:
Install Codis 3.2
[root@localhost ~]# mkdir -p $GOPATH/src/github.com/CodisLabs
[root@localhost ~]#cd /usr/local/go/work/src/github.com/CodisLabs
[root@localhost CodisLabs]# yum install autoconf automake libtool -y #install build dependencies, or compilation will fail
[root@localhost CodisLabs]# unzip codis-release3.2.zip
[root@localhost CodisLabs]#mv codis-release3.2 codis
[root@localhost CodisLabs]#cd codis
[root@localhost codis]#make
 
[root@localhost codis]#make gotest # run the tests to verify the build
go test ./cmd/... ./pkg/...
? github.com/CodisLabs/codis/cmd/admin [no test files]
? github.com/CodisLabs/codis/cmd/dashboard [no test files]
? github.com/CodisLabs/codis/cmd/fe [no test files]
? github.com/CodisLabs/codis/cmd/proxy [no test files]
? github.com/CodisLabs/codis/pkg/models [no test files]
? github.com/CodisLabs/codis/pkg/models/etcd [no test files]
? github.com/CodisLabs/codis/pkg/models/fs [no test files]
? github.com/CodisLabs/codis/pkg/models/zk [no test files]
ok github.com/CodisLabs/codis/pkg/proxy 2.525s
ok github.com/CodisLabs/codis/pkg/proxy/redis 0.530s
ok github.com/CodisLabs/codis/pkg/topom 6.560s
ok github.com/CodisLabs/codis/pkg/utils 0.009s
? github.com/CodisLabs/codis/pkg/utils/assert [no test files]
ok github.com/CodisLabs/codis/pkg/utils/bufio2 0.006s
ok github.com/CodisLabs/codis/pkg/utils/bytesize 0.004s
? github.com/CodisLabs/codis/pkg/utils/errors [no test files]
? github.com/CodisLabs/codis/pkg/utils/log [no test files]
ok github.com/CodisLabs/codis/pkg/utils/math2 0.002s
? github.com/CodisLabs/codis/pkg/utils/redis [no test files]
? github.com/CodisLabs/codis/pkg/utils/rpc [no test files]
? github.com/CodisLabs/codis/pkg/utils/sync2 [no test files]
? github.com/CodisLabs/codis/pkg/utils/sync2/atomic2 [no test files]
ok github.com/CodisLabs/codis/pkg/utils/timesize 0.009s
? github.com/CodisLabs/codis/pkg/utils/trace [no test files]
ok github.com/CodisLabs/codis/pkg/utils/unsafe2 0.003s
By default, startup reads the dashboard.toml file in the config directory. Edit it as follows: vim /usr/local/go/work/src/github.com/CodisLabs/codis/config/dashboard.toml
[root@localhost config]#vim dashboard.toml
##################################################
# #
# Codis-Dashboard #
# #
##################################################
# Set Coordinator, only accept "zookeeper" & "etcd" & "filesystem".
# Quick Start
coordinator_name = "zookeeper"
coordinator_addr = "192.168.100.201:2181,192.168.100.201:2182,192.168.100.201:2183"
#coordinator_name = "zookeeper"
#coordinator_addr = "127.0.0.1:2181"
# Set Codis Product Name/Auth.
product_name = "codis-demo"
product_auth = ""
# Set bind address for admin(rpc), tcp only.
admin_addr = "0.0.0.0:18080"
# Set arguments for data migration (only accept 'sync' & 'semi-async').
migration_method = "semi-async"
migration_parallel_slots = 100
migration_async_maxbulks = 200
migration_async_maxbytes = "32mb"
migration_async_numkeys = 500
migration_timeout = "30s"
# Set configs for redis sentinel.
sentinel_client_timeout = "10s"
sentinel_quorum = 2
sentinel_parallel_syncs = 1
sentinel_down_after = "30s"
sentinel_failover_timeout = "5m"
sentinel_notification_script = ""
sentinel_client_reconfig_script = ""
Start the dashboard:
[root@localhost codis]# nohup ./bin/codis-dashboard --ncpu=1 --config=config/dashboard.toml --log=dashboard.log --log-level=WARN >> /var/log/codis_dashboard.log &
To obtain the default config file: ./bin/codis-dashboard --default-config | tee dashboard.toml
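A quick sanity check that the dashboard actually came up (18080 is the admin_addr from dashboard.toml):

ss -tnlp | grep 18080   # the dashboard admin port should be listening
tail dashboard.log      # watch for startup errors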
Start the proxy (it reads config/proxy.toml, edited as shown below):
[root@localhost codis]#nohup ./bin/codis-proxy --ncpu=1 --config=config/proxy.toml --log=proxy.log --log-level=WARN >> /var/log/codis_proxy.log &
[root@localhost config]# vim proxy.toml
##################################################
# #
# Codis-Proxy #
# #
##################################################
# Set Codis Product Name/Auth.
product_name = "codis-demo" #管理界面的名稱
product_auth = "123456" #認(rèn)證密碼
# Set auth for client session
# 1. product_auth is used for auth validation among codis-dashboard,
# codis-proxy and codis-server.
# 2. session_auth is different from product_auth, it requires clients
# to issue AUTH <PASSWORD> before processing any other commands.
session_auth = ""
# Set bind address for admin(rpc), tcp only.
admin_addr = "0.0.0.0:11080"
# Set bind address for proxy, proto_type can be "tcp", "tcp4", "tcp6", "unix" or "unixpacket".
proto_type = "tcp4"
proxy_addr = "0.0.0.0:19000"
# Set jodis address & session timeout
# 1. jodis_name is short for jodis_coordinator_name, only accept "zookeeper" & "etcd".
# 2. jodis_addr is short for jodis_coordinator_addr
# 3. proxy will be registered as node:
# if jodis_compatible = true (not suggested):
# /zk/codis/db_{PRODUCT_NAME}/proxy-{HASHID} (compatible with Codis2.0)
# or else
# /jodis/{PRODUCT_NAME}/proxy-{HASHID}
jodis_name = "zookeeper"
jodis_addr = "192.168.100.201:2181,192.168.100.201:2182,192.168.100.201:2183" #zookeeper的地址
jodis_timeout = "20s"
jodis_compatible = true #changed from the default false to true
# Set datacenter of proxy.
proxy_datacenter = ""
# Set max number of alive sessions.
proxy_max_clients = 1000
# Set max offheap memory size. (0 to disable)
proxy_max_offheap_size = "1024mb"
# Set heap placeholder to reduce GC frequency.
proxy_heap_placeholder = "256mb"
# Proxy will ping backend redis (and clear 'MASTERDOWN' state) in a predefined interval. (0 to disable)
backend_ping_period = "5s"
# Set backend recv buffer size & timeout.
backend_recv_bufsize = "128kb"
backend_recv_timeout = "30s"
# Set backend send buffer & timeout.
backend_send_bufsize = "128kb"
backend_send_timeout = "30s"
# Set backend pipeline buffer size.
backend_max_pipeline = 20480
# Set backend never read replica groups, default is false
backend_primary_only = false
# Set backend parallel connections per server
backend_primary_parallel = 1
backend_replica_parallel = 1
# Set backend tcp keepalive period. (0 to disable)
backend_keepalive_period = "75s"
# Set number of databases of backend.
backend_number_databases = 16
# If there is no request from client for a long time, the connection will be closed. (0 to disable)
# Set session recv buffer size & timeout.
session_recv_bufsize = "128kb"
session_recv_timeout = "30m"
# Set session send buffer size & timeout.
session_send_bufsize = "64kb"
session_send_timeout = "30s"
# Make sure this is higher than the max number of requests for each pipeline request, or your client may be blocked.
# Set session pipeline buffer size.
session_max_pipeline = 10000
# Set session tcp keepalive period. (0 to disable)
session_keepalive_period = "75s"
# Set session to be sensitive to failures. Default is false, instead of closing socket, proxy will send an error response to client.
session_break_on_failure = false
# Set metrics server (such as http://localhost:28000), proxy will report json formatted metrics to specified server in a predefined period.
metrics_report_server = ""
metrics_report_period = "1s"
# Set influxdb server (such as http://localhost:8086), proxy will report metrics to influxdb.
metrics_report_influxdb_server = ""
metrics_report_influxdb_period = "1s"
metrics_report_influxdb_username = ""
metrics_report_influxdb_password = ""
metrics_report_influxdb_database = ""
# Set statsd server (such as localhost:8125), proxy will report metrics to statsd.
metrics_report_statsd_server = ""
metrics_report_statsd_period = "1s"
metrics_report_statsd_prefix = ""
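As with the dashboard, a default template for this file can be generated and then edited:

./bin/codis-proxy --default-config | tee proxy.toml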
Register the proxy's admin interface with the dashboard:
./bin/codis-admin --dashboard=192.168.100.201:18080 --create-proxy -x 192.168.100.201:11080
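If registration succeeds, the proxy shows up as online in the dashboard; a minimal local check (11080 and 19000 are the admin_addr and proxy_addr from proxy.toml):

ss -tnlp | grep -E '11080|19000'   # proxy admin and client ports
tail proxy.log                     # look for errors and for the proxy coming online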
Step 4: start codis-server, i.e. create the Redis instances (four here, based on the Codis-patched redis-3.2.8 rather than stock Redis).
[root@localhost ~]# mkdir -pv /var/lib/redis_638{1..4}
mkdir: created directory ‘/var/lib/redis_6381’
mkdir: created directory ‘/var/lib/redis_6382’
mkdir: created directory ‘/var/lib/redis_6383’
mkdir: created directory ‘/var/lib/redis_6384’
[root@localhost ~]# cd /usr/local/go/work/src/github.com/CodisLabs/codis/extern/redis-3.2.8
[root@localhost redis-3.2.8]# pwd
/usr/local/go/work/src/github.com/CodisLabs/codis/extern/redis-3.2.8
[root@localhost redis-3.2.8]# cp redis.conf /usr/local/go/work/src/github.com/CodisLabs/codis/
[root@localhost redis-3.2.8]# cd /usr/local/go/work/src/github.com/CodisLabs/codis/
Edit redis.conf:
pidfile /var/run/redis_6381.pid
port 6381
dbfilename dump_6381.rdb
dir /var/lib/redis_6381
logfile "/tmp/redis_6381.log"
maxmemory 1g #maxmemory must be set, otherwise Codis cannot use the instances later
[root@localhost codis]# cp redis.conf redis_6381.conf
[root@localhost codis]# cp redis_6381.conf redis_6382.conf
[root@localhost codis]# cp redis_6381.conf redis_6383.conf
[root@localhost codis]# cp redis_6381.conf redis_6384.conf
[root@localhost codis]# sed -i 's/6381/6382/g' redis_6382.conf
[root@localhost codis]# sed -i 's/6381/6383/g' redis_6383.conf
[root@localhost codis]# sed -i 's/6381/6384/g' redis_6384.conf
Start each Redis instance via codis-server with its config file. Do not start them with the stock redis command; instances started that way cannot work properly once added to the Codis cluster:
[root@localhost codis]# ./bin/codis-server ./redis_6381.conf
[root@localhost codis]# ./bin/codis-server ./redis_6382.conf
[root@localhost codis]# ./bin/codis-server ./redis_6383.conf
[root@localhost codis]# ./bin/codis-server ./redis_6384.conf
Verify the instances are listening:
[root@localhost codis]# ss -tnlp | grep 638
LISTEN 0 128 *:6381 *:* users:(("codis-server",pid=9850,fd=4))
LISTEN 0 128 *:6382 *:* users:(("codis-server",pid=9855,fd=4))
LISTEN 0 128 *:6383 *:* users:(("codis-server",pid=9861,fd=4))
LISTEN 0 128 *:6384 *:* users:(("codis-server",pid=9866,fd=4))
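Since Codis depends on maxmemory being set, it is also worth confirming it took effect on every instance:

# expect "maxmemory 1073741824" (1g) for each port
for p in 6381 6382 6383 6384; do
    echo "port $p: $(redis-cli -p $p CONFIG GET maxmemory | xargs)"
done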
Start codis-fe; do not use port 8080 for the listen address, use 8090 instead:
[root@localhost codis]#nohup ./bin/codis-fe --ncpu=1 --log=fe.log --log-level=WARN --zookeeper=192.168.100.201:2181 --listen=192.168.100.201:8090 &
Open http://192.168.100.201:8090 in a browser to reach the Codis management page.
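Once server groups have been created, the four codis-server instances added to them, and all 1024 slots assigned in the fe UI, a quick end-to-end test can go through the proxy (session_auth is empty in this lab, so no AUTH is required):

redis-cli -h 192.168.100.201 -p 19000 SET codis:test hello
redis-cli -h 192.168.100.201 -p 19000 GET codis:test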