In a nutshell, the relationship between qemu and KVM in memory management is this: when a virtual machine starts, qemu allocates the guest memory inside the qemu process address space, i.e. the allocation is done entirely in user space. Through the API that KVM provides, qemu then registers the address information with KVM, so that KVM maintains a set of memory slots for the VM; together these slots form the complete guest physical address space. Each slot records the corresponding HVA, the number of pages, the starting GPA and so on, which is what lets KVM translate a GPA into an HVA; this is the core of how KVM maintains the EPT. Memory virtualization therefore splits into two parts: the qemu part, which allocates the memory, and the KVM part, which manages it.
- In qemu the address space is split into two parts, represented by the two global MemoryRegion variables system_memory and system_io; system_memory is the parent object of all RAM memory_regions. These objects only describe and manage the memory layout.
- On top of them sit the two global AddressSpace variables address_space_memory and address_space_io, which correspond to the system_memory and system_io memory_regions. A region only takes effect for the guest once its HVA-to-GPA mapping has been registered into the KVM module's memslots, from which KVM builds the EPT (see the sketch after this list).
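To make that registration step concrete, below is a minimal, self-contained sketch of how a user-space process hands a (GPA, HVA, size) triple to KVM through the KVM_SET_USER_MEMORY_REGION ioctl. This is the same mechanism qemu uses for each slot, though qemu's real code path is far more involved; the example allocates a dummy 2 MiB region just for illustration.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR);
    int vm = ioctl(kvm, KVM_CREATE_VM, 0);
    if (kvm < 0 || vm < 0) {
        perror("kvm");
        return 1;
    }

    /* Allocate guest RAM inside this user-space process: the returned
     * pointer is the HVA backing the guest's "physical" memory. */
    size_t size = 2 << 20;                              /* 2 MiB */
    void *hva = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (hva == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Register the GPA<->HVA mapping as memslot 0; KVM uses this table
     * when filling the EPT (GPA -> HVA -> HPA). */
    struct kvm_userspace_memory_region slot = {
        .slot = 0,
        .guest_phys_addr = 0x0,                         /* starting GPA */
        .memory_size = size,
        .userspace_addr = (uint64_t)(uintptr_t)hva,     /* the HVA */
    };
    if (ioctl(vm, KVM_SET_USER_MEMORY_REGION, &slot) < 0) {
        perror("KVM_SET_USER_MEMORY_REGION");
        return 1;
    }
    printf("registered GPA 0x0..0x%zx at HVA %p\n", size, hva);
    return 0;
}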
In the virtqueue setup formed by the qemu 2.9 virtio frontend and the dpdk 17.05 vhost-user backend, the two sides first establish a connection over a unix socket, and qemu passes its virtio memory layout to vhost. When vhost receives a message (the mechanism has its own hand-written protocol; call each unit a msg), it parses the information it carries. The following message types are all exchanged right after the connection is established:
static const char *vhost_message_str[VHOST_USER_MAX] = {
    [VHOST_USER_NONE] = "VHOST_USER_NONE",
    [VHOST_USER_GET_FEATURES] = "VHOST_USER_GET_FEATURES",
    [VHOST_USER_SET_FEATURES] = "VHOST_USER_SET_FEATURES",
    [VHOST_USER_SET_OWNER] = "VHOST_USER_SET_OWNER",
    [VHOST_USER_RESET_OWNER] = "VHOST_USER_RESET_OWNER",
    [VHOST_USER_SET_MEM_TABLE] = "VHOST_USER_SET_MEM_TABLE",
    [VHOST_USER_SET_LOG_BASE] = "VHOST_USER_SET_LOG_BASE",
    [VHOST_USER_SET_LOG_FD] = "VHOST_USER_SET_LOG_FD",
    [VHOST_USER_SET_VRING_NUM] = "VHOST_USER_SET_VRING_NUM",
    [VHOST_USER_SET_VRING_ADDR] = "VHOST_USER_SET_VRING_ADDR",
    [VHOST_USER_SET_VRING_BASE] = "VHOST_USER_SET_VRING_BASE",
    [VHOST_USER_GET_VRING_BASE] = "VHOST_USER_GET_VRING_BASE",
    [VHOST_USER_SET_VRING_KICK] = "VHOST_USER_SET_VRING_KICK",
    [VHOST_USER_SET_VRING_CALL] = "VHOST_USER_SET_VRING_CALL",
    [VHOST_USER_SET_VRING_ERR] = "VHOST_USER_SET_VRING_ERR",
    [VHOST_USER_GET_PROTOCOL_FEATURES] = "VHOST_USER_GET_PROTOCOL_FEATURES",
    [VHOST_USER_SET_PROTOCOL_FEATURES] = "VHOST_USER_SET_PROTOCOL_FEATURES",
    [VHOST_USER_GET_QUEUE_NUM] = "VHOST_USER_GET_QUEUE_NUM",
    [VHOST_USER_SET_VRING_ENABLE] = "VHOST_USER_SET_VRING_ENABLE",
    [VHOST_USER_SEND_RARP] = "VHOST_USER_SEND_RARP",
    [VHOST_USER_NET_SET_MTU] = "VHOST_USER_NET_SET_MTU",
};
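Before looking at the handler, it helps to keep the shape of the VHOST_USER_SET_MEM_TABLE payload in mind. The sketch below follows the vhost-user protocol description: one descriptor per memory region, while the file descriptors that back the shared memory travel separately as SCM_RIGHTS ancillary data on the unix socket. Field names mirror the protocol; treat this as an illustration rather than a verbatim copy of the DPDK or qemu headers.

#define VHOST_MEMORY_MAX_NREGIONS 8

typedef struct VhostUserMemoryRegion {
    uint64_t guest_phys_addr;   /* GPA of the region as the guest sees it      */
    uint64_t memory_size;       /* length of the region in bytes               */
    uint64_t userspace_addr;    /* HVA of the region inside the qemu process   */
    uint64_t mmap_offset;       /* offset into the shared-memory fd to mmap at */
} VhostUserMemoryRegion;

typedef struct VhostUserMemory {
    uint32_t nregions;          /* number of valid entries in regions[]        */
    uint32_t padding;
    VhostUserMemoryRegion regions[VHOST_MEMORY_MAX_NREGIONS];
} VhostUserMemory;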
The message we care most about here is VHOST_USER_SET_MEM_TABLE, handled on the DPDK side by vhost_user_set_mem_table():
static int
vhost_user_set_mem_table(struct virtio_net *dev, struct VhostUserMsg *pmsg)
{
    ...
    for (i = 0; i < memory.nregions; i++) {
        fd = pmsg->fds[i];
        reg = &dev->mem->regions[i];

        reg->guest_phys_addr = memory.regions[i].guest_phys_addr;
        reg->guest_user_addr = memory.regions[i].userspace_addr;
        reg->size = memory.regions[i].memory_size;
        reg->fd = fd;

        mmap_offset = memory.regions[i].mmap_offset;
        mmap_size = reg->size + mmap_offset;

        /* mmap() without flag of MAP_ANONYMOUS, should be called
         * with length argument aligned with hugepagesz at older
         * longterm version Linux, like 2.6.32 and 3.2.72, or
         * mmap() will fail with EINVAL.
         *
         * to avoid failure, make sure in caller to keep length
         * aligned.
         */
        alignment = get_blk_size(fd);
        if (alignment == (uint64_t)-1) {
            RTE_LOG(ERR, VHOST_CONFIG,
                "couldn't get hugepage size through fstat\n");
            goto err_mmap;
        }
        mmap_size = RTE_ALIGN_CEIL(mmap_size, alignment);

        /* mmap each region's shared-memory fd into the vhost process,
         * so that vhost and qemu end up sharing the guest RAM. */
        mmap_addr = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
                 MAP_SHARED | MAP_POPULATE, fd, 0);
        if (mmap_addr == MAP_FAILED) {
            RTE_LOG(ERR, VHOST_CONFIG,
                "mmap region %u failed.\n", i);
            goto err_mmap;
        }
        ...
    }
    return 0;

err_mmap:
    free_mem_region(dev);
    rte_free(dev->mem);
    dev->mem = NULL;
    return -1;
}
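Once every region has been mmap'ed, vhost can translate any guest physical address taken from the virtqueue descriptors into an address that is valid in its own process. The sketch below shows the idea behind that lookup (DPDK implements it in helpers such as gpa_to_vva); the structure and field names here are simplified stand-ins, modelled on the fields filled in by vhost_user_set_mem_table above.

#include <stdint.h>

/* Simplified view of one mapped region (hypothetical names). */
struct guest_mem_region {
    uint64_t guest_phys_addr;   /* GPA where the region starts              */
    uint64_t size;              /* region length in bytes                   */
    uint64_t host_user_addr;    /* start of the mmap in vhost's own address
                                 * space, i.e. its local virtual address    */
};

/* Translate a guest physical address into a vhost virtual address by
 * walking the region table; returns 0 if the GPA is not covered. */
static uint64_t
gpa_to_vva_sketch(const struct guest_mem_region *regions,
                  uint32_t nregions, uint64_t gpa)
{
    uint32_t i;

    for (i = 0; i < nregions; i++) {
        const struct guest_mem_region *r = &regions[i];

        if (gpa >= r->guest_phys_addr &&
            gpa < r->guest_phys_addr + r->size)
            return r->host_user_addr + (gpa - r->guest_phys_addr);
    }
    return 0;
}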
In addition, when actually running the system we noticed that although the memory layouts on the qemu side and on the vhost side are built from the same shared memory, the mapping is neither one single contiguous block nor a patchwork of small regions mapped one by one.
On the vhost side there are only two regions, and they look as if the frontend's memory regions had been aggregated. Going back to the code, it turns out that what is passed before the message is sent is not the memory_region objects themselves but memory_region_sections, and in qemu's vhost_set_memory function there is this step:
    if (add) {
        /* Add given mapping, merging adjacent regions if any */
        vhost_dev_assign_memory(dev, start_addr, size, (uintptr_t)ram);
    } else {
        /* Remove old mapping for this memory, if any. */
        vhost_dev_unassign_memory(dev, start_addr, size);
    }
Adjacent memory regions are merged here, which explains what we observed. Because memory_regions form a tree with containment relationships, passing them one by one and having vhost loop over them to map each into its own address space would be inefficient, and vhost never touches most of that memory anyway, so there is no need for such fine-grained mapping. A sketch of the merging idea follows.
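To make the merging step concrete, here is a small sketch of the idea behind vhost_dev_assign_memory: when a new (GPA, size, HVA) range is contiguous with an existing entry both in guest-physical space and in host-virtual space, the two entries are coalesced into one. The structure and function names below are illustrative, not qemu's actual ones.

#include <stdint.h>

struct mem_entry {
    uint64_t gpa;    /* guest physical start address */
    uint64_t size;   /* length in bytes              */
    uint64_t hva;    /* host virtual start address   */
};

/* Try to merge a new range into an existing entry; returns 1 on success.
 * Two ranges can only be merged if they are adjacent in BOTH the guest
 * physical layout and the host virtual layout, so the merged entry still
 * describes one linear GPA->HVA mapping. */
static int
try_merge(struct mem_entry *e, uint64_t gpa, uint64_t size, uint64_t hva)
{
    /* new range immediately follows the existing one */
    if (gpa == e->gpa + e->size && hva == e->hva + e->size) {
        e->size += size;
        return 1;
    }
    /* new range immediately precedes the existing one */
    if (gpa + size == e->gpa && hva + size == e->hva) {
        e->gpa = gpa;
        e->hva = hva;
        e->size += size;
        return 1;
    }
    return 0;
}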