Gunicorn - How to Bind Workers to Different GPU Cards

1. Overview

When deploying an inference service on a multi-GPU server, gunicorn needs to be able to assign each worker to a different GPU card.
In addition, incoming requests should be spread evenly across the workers, so that the hardware is used as fully as possible.

The worker class used here is sync, mainly to get requests distributed evenly; a test with gevent gave unsatisfactory results.
Assigning workers to different GPU cards is handled through gunicorn's server hooks.
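
For reference, the same process count and worker class can also be written as plain settings in gunicorn_conf.py; this is only a sketch mirroring the CLI flags used in the Dockerfile below (sync is also gunicorn's default worker class):

# Sketch: the CLI flags from GUNICORN_CMD_ARGS expressed as gunicorn config-file settings.
bind = "0.0.0.0:9090"
workers = 4
worker_class = "sync"
backlog = 4
timeout = 600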

Default configuration in the Dockerfile:

ENV GUNICORN_CMD_ARGS="-b 0.0.0.0:9090 -c gunicorn_conf.py -w 4 --backlog 4 --timeout 600"
ENTRYPOINT ["gunicorn", "wsgi:app"]


2. The Source Code

In pre_fork and child_exit, maintain the worker configuration data, such as the mapping from worker to CUDA index.
In the post_fork hook, set a new key/value in os.environ so that the worker can read the value from os.environ.
I have tried this approach with the sync worker type, and it works. Note that workers are forked from the master, so the mapping must be maintained in the master process.
If you have any other questions, the source code below may help you.

  • gunicorn_conf.py
import os

USE_GPU = int(os.environ.get("USE_GPU", 1))
CUDA_NUMBER = int(os.environ.get("CUDA_NUMBER", 1))
WORKER_ENV = {}

if USE_GPU and CUDA_NUMBER > 1:
    # pick the cuda device currently assigned to the fewest live workers
    def get_worker_env():
        counter = {f'cuda:{i}': 0 for i in range(CUDA_NUMBER)}
        for k in WORKER_ENV.keys():
            assert WORKER_ENV[k] in counter
            counter[WORKER_ENV[k]] += 1
        
        min_count, min_cuda = 1024, 'cuda:0'
        for cuda, count in counter.items():
            if count < min_count:
                min_cuda = cuda
                min_count = count
        return min_cuda

    # worker.age is a unique, monotonically increasing id assigned by the master
    def get_worker_id(worker):
        return f'WORKER-{worker.age}'

    # running in master process
    def pre_fork(server, worker):
        _id = get_worker_id(worker)
        WORKER_ENV[_id] = get_worker_env()
        server.log.info(f'set master env {_id}: {WORKER_ENV[_id]}')

    # running in worker process, and environment is in process scope, not os scope
    def post_fork(server, worker):
        _id = get_worker_id(worker)
        os.environ['CUDA_INDEX'] = WORKER_ENV[_id]
        server.log.info(f'set worker (age: {worker.age}, pid {worker.pid}) env CUDA_INDEX: {WORKER_ENV[_id]}')

    # running in master process
    def child_exit(server, worker):
        _id = get_worker_id(worker)
        server.log.info(f'remove worker env {_id}: {WORKER_ENV[_id]}')
        del WORKER_ENV[_id]
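
To check that every worker really ended up on its own card, the app referenced by ENTRYPOINT ["gunicorn", "wsgi:app"] can simply report what it sees. A minimal wsgi.py sketch, assuming a Flask app (the /device route is made up for illustration); with the default non-preload setup the module is imported inside the worker after post_fork has run, so the module-level read picks up this worker's value:

import os

from flask import Flask, jsonify

# Set by the post_fork hook for this worker; fall back to cuda:0 for single-card runs.
DEVICE = os.environ.get('CUDA_INDEX', 'cuda:0')

app = Flask(__name__)

@app.route('/device')
def device():
    # Each worker reports its own pid and the cuda index it was assigned.
    return jsonify(pid=os.getpid(), device=DEVICE)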

3. Inference Code

In the inference code, read the CUDA_INDEX environment variable and call the to method of nn.Module or Tensor to copy data from host memory onto the GPU, for example (note that Tensor.to returns a new tensor, so reassign the result):

data: Tensor
data = data.to(os.environ.get('CUDA_INDEX', 'cuda:0'))
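
Putting it together on the model side, a sketch of a per-worker inference module (PyTorch assumed; the model here is only a placeholder): read CUDA_INDEX once when the worker imports the module, move the model there, and move every incoming batch to the same device before the forward pass.

import os

import torch
from torch import nn

# This worker's card, as assigned by the post_fork hook; fall back for single-card or CPU runs.
DEVICE = os.environ.get('CUDA_INDEX', 'cuda:0' if torch.cuda.is_available() else 'cpu')

# Placeholder model: load the real weights here, once per worker process.
model = nn.Linear(16, 4).to(DEVICE).eval()

def predict(batch: torch.Tensor) -> torch.Tensor:
    # Copy the input from host memory to this worker's GPU, run the forward pass,
    # and bring the result back to the CPU for serialization.
    with torch.no_grad():
        out = model(batch.to(DEVICE))
    return out.cpu()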