Based on CentOS 7 x86_64.
Reference: https://blog.csdn.net/dylloveyou/article/details/79054120
I. Brief Introduction
A Ceph deployment mainly involves the following node types:
- Ceph OSDs: a Ceph OSD daemon stores data and handles replication, recovery, backfilling, and rebalancing; it also supplies monitoring information to the Ceph Monitors by checking the heartbeats of other OSD daemons. When the Ceph Storage Cluster keeps two copies of the data (the Ceph default), at least two Ceph OSD daemons must be in the active+clean state.
- Monitors: a Ceph Monitor maintains maps of the cluster state, including the monitor map, OSD map, Placement Group (PG) map, and CRUSH map. Ceph keeps a history (called an "epoch") of each state change of the Ceph Monitors, Ceph OSD daemons, and PGs.
- MDSs: a Ceph Metadata Server (MDS) stores metadata on behalf of the Ceph Filesystem (i.e., Ceph Block Devices and Ceph Object Storage do not use an MDS). Metadata Servers let users run basic POSIX filesystem commands such as ls and find.
II. Node Planning

Hostname    IP               Role                              Ceph version
server-200  192.168.100.200  admin / monitor                   jewel
server-201  192.168.100.201  OSD (/dev/sdb, /dev/sdc, /dev/sdd)  jewel
server-202  192.168.100.202  OSD (/dev/sdb, /dev/sdc, /dev/sdd)  jewel
server-203  192.168.100.203  OSD (/dev/sdb, /dev/sdc, /dev/sdd)  jewel
III. Pre-deployment
Add the cluster hosts to /etc/hosts on every node:
192.168.100.200 server-200
192.168.100.201 server-201
192.168.100.202 server-202
192.168.100.203 server-203
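The entries above can be appended in one step; a minimal sketch, run as root on each node (IPs and hostnames as planned in the table above):

```shell
# Append the cluster's name-resolution entries to /etc/hosts
cat >> /etc/hosts <<'EOF'
192.168.100.200 server-200
192.168.100.201 server-201
192.168.100.202 server-202
192.168.100.203 server-203
EOF
```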
Disable the firewall and SELinux on all nodes, and install ntpd on all nodes so their clocks stay in sync.
Switch yum to the Aliyun mirrors (faster inside China):
#rm -rf /etc/yum.repos.d/*.repo
#wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
#wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
#sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
#sed -i 's/$releasever/7/g' /etc/yum.repos.d/CentOS-Base.repo
#sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
#yum clean all
#yum makecache fast
Install the deployment tool ceph-deploy on the admin node:
#yum install http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm
Create a cephd user on every node, set up passwordless ssh from the admin node to each of them, and grant the user sudo rights:
#echo "cephd ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephd
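The passwordless-ssh part can be sketched as follows (run on the admin node as the cephd user; the ssh-copy-id loop is shown commented out because it prompts for each node's password interactively):

```shell
# Generate a passwordless RSA key pair for the cephd user
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa -q
# Then push the public key to every node, e.g.:
#   for h in server-200 server-201 server-202 server-203; do ssh-copy-id cephd@$h; done
```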
Once that tests cleanly, edit ~/.ssh/config on the ceph-deploy admin node so that ceph-deploy no longer needs --username cephd on every run; this also simplifies ssh and scp.
$vim .ssh/config
Host server-200
    Hostname server-200
    User cephd
Host server-201
    Hostname server-201
    User cephd
Host server-202
    Hostname server-202
    User cephd
Host server-203
    Hostname server-203
    User cephd
$chmod 600 .ssh/config
Make sure the network between the nodes is reliable.
With the pre-flight checks above done, we can start building the Ceph storage cluster itself; the layout is server-200 (ceph-deploy, monitor) and server-201/202/203 (OSD nodes). One thing worth mentioning first: if something goes wrong during installation and you need to start over, the cluster can be cleaned up with the following commands, run on the ceph-deploy admin node.
Wipe the configuration:
#ceph-deploy purgedata server-200 server-201 server-202 server-203
#ceph-deploy forgetkeys
Wipe the Ceph packages as well:
#ceph-deploy purge server-200 server-201 server-202 server-203
IV. Cluster Deployment
1. Install Ceph
Use ceph-deploy on the admin node to install Ceph on every node:
$ceph-deploy install server-200 server-201 server-202 server-203
If this fails, Ceph can be installed manually instead:
#vim /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-jewel/el7/$basearch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-jewel/el7/SRPMS
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
#yum install snappy leveldb gdisk python-argparse gperftools-libs
#yum install ceph
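Either way, it is worth confirming afterwards that every node ended up on the same jewel release (a 10.2.x version string); for example:

```
#ceph --version
```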
2. Create the cluster. Since we use only one monitor for now, initialize just the admin/monitor node:
#su - cephd
$mkdir ceph-cluster
$cd ceph-cluster
$ceph-deploy new server-200
3. Add the monitor:
$ceph-deploy mon create server-200
$ceph-deploy mon create-initial
When this finishes, a set of keyrings appears in the current directory; these hold the authentication credentials the components use to talk to each other.
At this point the ceph monitor is up. Next come the OSDs, where the data ultimately lives. Here we have three OSD nodes (server-201/202/203), each contributing two data disks (sdb, sdc) with /dev/sdd as a shared journal disk; the official advice is to give each OSD and its journal a dedicated disk or partition.
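If the data and journal disks have been used before (old partition tables, leftover filesystems), jewel's ceph-deploy can wipe them first. This is destructive, so double-check the device names against the planning table:

```
$ceph-deploy disk zap server-201:sdb server-201:sdc server-201:sdd \
    server-202:sdb server-202:sdc server-202:sdd \
    server-203:sdb server-203:sdc server-203:sdd
```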
$ceph-deploy osd prepare server-201:sdb:/dev/sdd server-201:sdc:/dev/sdd server-202:sdb:/dev/sdd server-202:sdc:/dev/sdd server-203:sdb:/dev/sdd server-203:sdc:/dev/sdd
$ceph-deploy osd activate server-201:sdb1:/dev/sdd1 server-201:sdc1:/dev/sdd2 server-202:sdb1:/dev/sdd1 server-202:sdc1:/dev/sdd2 server-203:sdb1:/dev/sdd1 server-203:sdc1:/dev/sdd2
You can also push the configuration and admin keyring to the other nodes, so the cluster can be queried from any of them:
$ceph-deploy admin server-200 server-201 server-202 server-203
$sudo chmod +r /etc/ceph/ceph.client.admin.keyring
$ceph -s
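Beyond the overall status, a couple of other read-only commands help verify the result; for example:

```
$ceph osd tree     # should list six OSDs, two per OSD node, all "up"
$ceph df           # capacity summary per pool
```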