Ceph has been in development as a distributed storage system for ten years and now backs many production deployments. Using it as the backend storage for OpenStack is currently the most popular and one of the more mature solutions. After running this setup for a while, I have put together a short summary of how it fits into my own system.
In my current OpenStack environment, Ceph backs Cinder and part of Nova. Depending on requirements, some VMs run on the local filesystem while others live on Ceph; data volumes are uniformly served by Ceph as block storage.
Integration with OpenStack
The official Ceph documentation already covers the basics thoroughly, so I will not repeat it here and will go straight to my commands and configuration.
- Install the Ceph client
$ yum install -y centos-release-ceph-jewel
$ yum clean all && yum makecache
$ yum install ceph python-rbd
- Configure client authentication
ceph auth get-or-create client.openstack mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=instances' -o /etc/ceph/ceph.client.openstack.keyring
Here I authorize a single openstack account for everything; the official recommendation is to create one account per service. Copy the two files ceph.conf and ceph.client.openstack.keyring into /etc/ceph/ on every nova-compute and cinder-volume node that connects to Ceph.
- Configure libvirt
For the libvirt side, see my earlier article 《Ceph in Libvirt and Kubernetes》; to keep the platform consistent, it is best to use the same secret UUID on all compute nodes.
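As a quick reminder of what the libvirt step involves, here is a minimal sketch of defining the Ceph secret. The secret XML layout is standard libvirt; the UUID is the example one used throughout this article, and the file path is arbitrary.

```shell
# Build the libvirt secret definition for the Ceph client.openstack key.
# Same UUID on every compute node keeps the platform consistent.
UUID=881e72de-961f-4556-9e8c-0b909408186b
cat > /tmp/ceph-secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$UUID</uuid>
  <usage type='ceph'>
    <name>client.openstack secret</name>
  </usage>
</secret>
EOF
# On each compute node (requires libvirtd and cluster access, not run here):
#   virsh secret-define --file /tmp/ceph-secret.xml
#   virsh secret-set-value --secret $UUID --base64 "$(ceph auth get-key client.openstack)"
grep -o "$UUID" /tmp/ceph-secret.xml
```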
Configuring Cinder
cinder
Append the following to the end of cinder.conf:
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
volume_clear_size = 100
rbd_user = openstack
rbd_secret_uuid = 881e72de-961f-4556-9e8c-0b909408186b # the libvirt secret UUID
Also set the following in cinder.conf:
enabled_backends = ceph
nova
To attach Cinder block devices (whether data volumes or boot volumes), Nova must be told which user and which secret UUID to use when attaching them; libvirt uses this user to connect to and authenticate with the Ceph cluster.
rbd_user = openstack
rbd_secret_uuid = 881e72de-961f-4556-9e8c-0b909408186b
Configuring Nova
ceph
Enable RBD caching and the admin socket in the client-side ceph.conf; this helps greatly with troubleshooting. Giving each VM that uses Ceph block devices its own socket makes it much easier to investigate performance issues and/or abnormal behavior.
[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20
Adjust the directory permissions:
mkdir -p /var/run/ceph/guests/ /var/log/qemu/
chown qemu:libvirtd /var/run/ceph/guests /var/log/qemu/
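With the configuration above, each guest gets a socket named after the `$cluster-$type.$id.$pid.$cctid` template. A small sketch of how that template expands (the pid and cctid values here are hypothetical placeholders):

```shell
# Expand the admin-socket path template from the [client] section above.
# pid/cctid are made up for illustration; real values come from QEMU.
cluster=ceph; ctype=client; id=openstack; pid=12345; cctid=0
echo "/var/run/ceph/guests/${cluster}-${ctype}.${id}.${pid}.${cctid}.asok"
# On a compute node you could then query the socket, e.g. (not run here):
#   ceph daemon /var/run/ceph/guests/ceph-client.openstack.12345.0.asok perf dump
```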
nova
Adjust the configuration under the [libvirt] section of nova.conf:
# Use the following flags to ensure live migration works smoothly
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
# Disable file injection
inject_password = false
inject_key = false
inject_partition = -2
disk_cachemodes ="network=writeback"
images_type=rbd
images_rbd_pool=instances
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user=openstack
rbd_secret_uuid=881e72de-961f-4556-9e8c-0b909408186b
Usage
- Create a Cinder volume from an image
usage: cinder create [--consisgroup-id <consistencygroup-id>]
[--snapshot-id <snapshot-id>]
[--source-volid <source-volid>]
[--source-replica <source-replica>]
[--image-id <image-id>] [--image <image>] [--name <name>]
[--description <description>]
[--volume-type <volume-type>]
[--availability-zone <availability-zone>]
[--metadata [<key=value> [<key=value> ...]]]
[--hint <key=value>] [--allow-multiattach]
[<size>]
$ cinder create --image-id a02c0829-b198-4650-a9c6-7cc6b0b94018 --name jcloud.v1.0 60
+--------------------------------------+-------------+--------------------+------+-------------+----------+--------------------------------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-------------+--------------------+------+-------------+----------+--------------------------------------+
| ed63922f-f88c-4dab-9e8d-9670db8ee7b2 | downloading | jcloud.v1.0 | 60 | - | false | |
+--------------------------------------+-------------+--------------------+------+-------------+----------+--------------------------------------+
As shown above, the newly created volume downloads the image from Glance and imports it into an RBD image; once the import completes, the volume becomes bootable, as shown below:
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
| ed63922f-f88c-4dab-9e8d-9670db8ee7b2 | available | jcloud.v1.0 | 60 | - | true | |
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
- Snapshot the image volume
usage: cinder snapshot-create [--force [<True|False>]] [--name <name>]
[--description <description>]
[--metadata [<key=value> [<key=value> ...]]]
<volume>
$ cinder snapshot-create --name jcloud.v1.0_snapshot ed63922f-f88c-4dab-9e8d-9670db8ee7b2
+-------------+--------------------------------------+
| Property | Value |
+-------------+--------------------------------------+
| created_at | 2017-03-22T07:52:28.887130 |
| description | None |
| id | 6be98fec-bfee-4550-99a6-358e2b8b6609 |
| metadata | {} |
| name | jcloud.v1.0_snapshot |
| size | 60 |
| status | creating |
| updated_at | None |
| volume_id | ed63922f-f88c-4dab-9e8d-9670db8ee7b2 |
+-------------+--------------------------------------+
- Boot a VM from the snapshot
usage: nova boot [--flavor <flavor>] [--image <image>]
[--image-with <key=value>] [--boot-volume <volume_id>]
[--snapshot <snapshot_id>] [--min-count <number>]
[--max-count <number>] [--meta <key=value>]
[--file <dst-path=src-path>] [--key-name <key-name>]
[--user-data <user-data>]
[--availability-zone <availability-zone>]
[--security-groups <security-groups>]
[--block-device-mapping <dev-name=mapping>]
[--block-device key1=value1[,key2=value2...]]
[--swap <swap_size>]
[--ephemeral size=<size>[,format=<format>]]
[--hint <key=value>]
[--nic <net-id=net-uuid,net-name=network-name,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid>]
[--config-drive <value>] [--poll] [--admin-pass <value>]
[--access-ip-v4 <value>] [--access-ip-v6 <value>]
<name>
The --block-device parameter:
source=image|snapshot|volume|blank
dest=volume|local
id=XXXXXX (a volume|image|snapshot UUID if using source=volume|snapshot|image)
format=swap|ext4|...|none (to format the image/volume/ephemeral file; defaults to 'none' if omitted)
bus=ide|usb|virtio|scsi (hypervisor driver chooses a suitable default if omitted)
device=the desired device name (e.g. /dev/vda, /dev/xda, ...)
type=disk|cdrom|floppy|mmc (defaults to 'disk' if omitted)
bootindex=N (where N is any number >= 0, controls the order in which disks are looked at for booting)
size=NN (where NN is number of GB to create type=ephemeral image, or the size to re-size to for type=glance|cinder)
shutdown=preserve|remove
Only source and id are required; everything else has a default. For example:
--block-device source=image,dest=volume,id=XXXXXXX,bus=ide,bootindex=2
--block-device source=volume,dest=volume,id=XXXXXXX,bus=ide,type=cdrom,bootindex=1
--block-device source=blank,dest=local,format=swap,size=50,bus=ide,type=floppy
dest specifies the destination of source: either local storage (local) or a Cinder volume (volume).
| dest | source | Description | Shortcut |
|---|---|---|---|
| volume | volume | Attached directly to the compute node | With boot_index=0, equivalent to --boot-volume <volume_id> |
| volume | snapshot | Cinder creates a new volume from the snapshot and attaches it to the compute node | With boot_index=0, equivalent to --snapshot <snapshot_id> |
| volume | image | Cinder creates a new volume from the image and attaches it to the compute node | With boot_index=0, equivalent to --image <image> (boot from image, creating a new volume) |
| volume | blank | Cinder creates an empty volume of the given size and attaches it to the compute node | |
| local | image | Creates an ephemeral partition on the hypervisor, copies the image into it, and boots the VM | Equivalent to a normal boot from image |
| local | blank | Creates a swap partition when format=swap, otherwise an ephemeral partition | With boot_index=-1, shutdown=remove, format=swap, equivalent to --swap <swap size in MB>; with boot_index=-1, shutdown=remove, equivalent to --ephemeral |
Boot a VM from the snapshot:
nova boot --flavor ff29c42b-754d-4230-9e1f-9bdaba800f5e --snapshot 6be98fec-bfee-4550-99a6-358e2b8b6609 --security-groups default --nic net-id=163df3b0-13f2-4f2e-8401-e82088e8dc07 test
# or
nova boot --flavor ff29c42b-754d-4230-9e1f-9bdaba800f5e --block-device source=snapshot,dest=volume,id=6be98fec-bfee-4550-99a6-358e2b8b6609,bootindex=0 --security-groups default --nic net-id=163df3b0-13f2-4f2e-8401-e82088e8dc07 test
Tracking the VM's state:
# Mapping block devices
+--------------------------------------+-------+--------+----------------------+-------------+--------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------+--------+----------------------+-------------+--------------------------+
| 52369db0-f7db-4eb0-a708-d77e250e3ecc | test | BUILD | block_device_mapping | NOSTATE | privite01=192.168.17.251 |
+--------------------------------------+-------+--------+----------------------+-------------+--------------------------+
# Spawning
+--------------------------------------+-------+--------+------------+-------------+--------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------+--------+------------+-------------+--------------------------+
| 52369db0-f7db-4eb0-a708-d77e250e3ecc | test | BUILD | spawning | NOSTATE | privite01=192.168.17.251 |
+--------------------------------------+-------+--------+------------+-------------+--------------------------+
# Boot succeeded
+--------------------------------------+-------+--------+------------+-------------+--------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------+--------+------------+-------------+--------------------------+
| 52369db0-f7db-4eb0-a708-d77e250e3ecc | test | ACTIVE | - | Running | privite01=192.168.17.251 |
+--------------------------------------+-------+--------+------------+-------------+--------------------------+
After creation succeeds, we can use the rbd command to inspect the volumes in the Ceph pool.
- First, check the cinder volume attachment
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
| 7398e12b-333b-4610-b75c-e237d164781d | in-use | | 60 | - | true | 52369db0-f7db-4eb0-a708-d77e250e3ecc |
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
7398e12b-333b-4610-b75c-e237d164781d is the volume backing the VM's system disk; with this UUID we can locate the corresponding RBD image in the underlying Ceph cluster.
- Check the system volume's details
# Find the RBD image by the volume's UUID
$ rbd ls volumes --name client.libvirt |grep 7398e12b-333b-4610-b75c-e237d164781d
volume-7398e12b-333b-4610-b75c-e237d164781d
# Show the image details
$ rbd info volumes/volume-7398e12b-333b-4610-b75c-e237d164781d --name client.libvirt
rbd image 'volume-7398e12b-333b-4610-b75c-e237d164781d':
size 61440 MB in 15360 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.872a41ece775
format: 2
features: layering, striping
flags:
parent: volumes/volume-ed63922f-f88c-4dab-9e8d-9670db8ee7b2@snapshot-6be98fec-bfee-4550-99a6-358e2b8b6609
overlap: 61440 MB
stripe unit: 4096 kB
stripe count: 1
# Snapshot details
$ rbd info volumes/volume-ed63922f-f88c-4dab-9e8d-9670db8ee7b2@snapshot-6be98fec-bfee-4550-99a6-358e2b8b6609 --name client.libvirt
rbd image 'volume-ed63922f-f88c-4dab-9e8d-9670db8ee7b2':
size 61440 MB in 15360 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.86c4238e1f29
format: 2
features: layering
flags:
protected: True
Here we can see that volume-7398e12b-333b-4610-b75c-e237d164781d was in fact cloned from the snapshot volumes/volume-ed63922f-f88c-4dab-9e8d-9670db8ee7b2@snapshot-6be98fec-bfee-4550-99a6-358e2b8b6609. The cinder-volume service marks the snapshot as protected before cloning from it, so by this mechanism subsequent VMs, including batches of VMs, can all be cloned from the same snapshot. I tested this, and it does work.
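The parent/clone relationship can also be verified from the snapshot side. Cinder's RBD naming convention (volume-<volume_uuid>@snapshot-<snapshot_uuid>, as seen in the rbd info output above) lets us build the snapshot spec from the two UUIDs; a sketch:

```shell
# Derive the RBD snapshot spec Cinder uses for this volume/snapshot pair.
VOL_ID=ed63922f-f88c-4dab-9e8d-9670db8ee7b2
SNAP_ID=6be98fec-bfee-4550-99a6-358e2b8b6609
SNAP_SPEC="volumes/volume-${VOL_ID}@snapshot-${SNAP_ID}"
echo "$SNAP_SPEC"
# With cluster access (not run here), list every volume cloned from it:
#   rbd children "$SNAP_SPEC" --name client.libvirt
```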
Batch-create VMs from the snapshot:
$ nova boot --flavor ff29c42b-754d-4230-9e1f-9bdaba800f5e --block-device source=snapshot,dest=volume,id=6be98fec-bfee-4550-99a6-358e2b8b6609,bootindex=0 --security-groups default --nic net-id=163df3b0-13f2-4f2e-8401-e82088e8dc07 --min-count 2 --max-count 3 test
The logic behind --min-count and --max-count is: when creating VMs in batch, nova first computes the required quota from the --max-count value; if that exceeds the quota, it falls back to allocating --min-count instances; if that still exceeds the quota, the request fails.
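That fallback can be sketched as a small function (a simplification for illustration; real nova checks several per-resource quotas, not just an instance count):

```shell
# pick_count <remaining_quota> <min_count> <max_count>
# Mirrors the max-then-min fallback described above.
pick_count() {
  local quota_left=$1 min=$2 max=$3
  if [ "$max" -le "$quota_left" ]; then
    echo "$max"            # quota covers max-count: create max
  elif [ "$min" -le "$quota_left" ]; then
    echo "$min"            # fall back to min-count
  else
    echo "fail"            # even min-count exceeds quota
  fi
}
pick_count 5 2 3   # quota covers max-count
pick_count 2 2 3   # falls back to min-count
pick_count 1 2 3   # still over quota
```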
After creation succeeds:
# nova VM states
+--------------------------------------+--------+--------+------------+-------------+--------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+--------+--------+------------+-------------+--------------------------+
| 66e5bf31-6ad0-496b-9dbc-77a1c2b8fa32 | test-1 | ACTIVE | - | Running | privite01=192.168.17.246 |
| 6ebad2a4-b844-4662-9b1b-862a726c1873 | test-2 | ACTIVE | - | Running | privite01=192.168.17.248 |
| 5f5d7088-cfb3-4e8a-9b38-7e128b4fc8eb | test-3 | ACTIVE | - | Running | privite01=192.168.17.249 |
+--------------------------------------+--------+--------+------------+-------------+--------------------------+
# cinder volume states
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
| 77835304-1324-4063-9d25-211065fe406f | in-use | | 60 | - | true | 5f5d7088-cfb3-4e8a-9b38-7e128b4fc8eb |
| 85f66862-1fad-4634-b96f-cdb51558c422 | in-use | | 60 | - | true | 6ebad2a4-b844-4662-9b1b-862a726c1873 |
| b457befb-490a-402e-8b6f-215945ab6248 | in-use | | 60 | - | true | 66e5bf31-6ad0-496b-9dbc-77a1c2b8fa32 |
| ed63922f-f88c-4dab-9e8d-9670db8ee7b2 | available | jcloud.v1.0 | 60 | - | true | |
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
# Ceph RBD images
$ rbd ls volumes --name client.libvirt
volume-77835304-1324-4063-9d25-211065fe406f
volume-85f66862-1fad-4634-b96f-cdb51558c422
volume-b457befb-490a-402e-8b6f-215945ab6248
volume-ed63922f-f88c-4dab-9e8d-9670db8ee7b2
References:
http://docs.ceph.org.cn/
http://www.cnblogs.com/sammyliu/p/4462718.html