MXNet distributed training (1)

Possible causes of hangs at startup

When launching a distributed job, the program often hangs right at startup even though everything seems to follow the official instructions. On the launcher machine, the shell simply stalls after launcher.py starts its processes, which is the most frustrating symptom. When this happens, first verify that:

  1. Every machine has an identical environment, including the code path and the Python environment
  2. None of the processes to be launched already exist; if they do, kill them first
  3. The firewall has been disabled
  4. The machines can ssh into each other without a password
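The checklist above can be turned into a few shell one-liners per host. The sketch below is a hypothetical helper (not part of MXNet) that just builds those commands as strings; the process name, peer IP, and the firewalld service name are placeholder assumptions.

```python
# Hypothetical pre-flight helper: build the shell commands that cover
# checks 2-4 above. proc_name and peer are placeholders.

def preflight_commands(proc_name="train_mnist.py", peer="x.x.x.x"):
    """Return shell commands to run on each host before launching."""
    return [
        # check 2: kill any stale training processes
        "pkill -f {} || true".format(proc_name),
        # check 3: confirm the firewall is down (systemd-based hosts)
        "systemctl is-active firewalld || true",
        # check 4: confirm passwordless ssh (fail fast instead of prompting)
        "ssh -o BatchMode=yes -o ConnectTimeout=5 {} true".format(peer),
    ]

for cmd in preflight_commands():
    print(cmd)
```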

Launch methods

  1. Launch with the official launcher.py

    Reference: https://github.com/apache/incubator-mxnet/tree/master/example/image-classification

  2. To see what actually happens, consider an alternative way of launching by hand

First start the scheduler; the scheduler process blocks and waits. Next start the two servers, each pointed at the parameter server's IP address. Finally start the two workers; at that point the whole distributed job starts running and the workers' shells come alive.

For mnist:

sch---export DMLC_PS_ROOT_URI=x.x.x.x; export DMLC_ROLE=scheduler; export DMLC_PS_ROOT_PORT=9001; export DMLC_NUM_WORKER=2; export DMLC_NUM_SERVER=2; 
cd /path/to;
python train_mnist.py --kv-store dist_sync

ps1---export DMLC_SERVER_ID=0; export DMLC_PS_ROOT_URI=x.x.x.x; export DMLC_ROLE=server; export DMLC_PS_ROOT_PORT=9001; export DMLC_NUM_WORKER=2; export DMLC_NUM_SERVER=2; 
cd /path/to;
python train_mnist.py --kv-store dist_sync

ps2---export DMLC_SERVER_ID=1; export DMLC_PS_ROOT_URI=x.x.x.x; export DMLC_ROLE=server; export DMLC_PS_ROOT_PORT=9001; export DMLC_NUM_WORKER=2; export DMLC_NUM_SERVER=2 
cd /path/to; 
python train_mnist.py --kv-store dist_sync

wk1---export DMLC_WORKER_ID=0; export DMLC_PS_ROOT_URI=x.x.x.x; export DMLC_ROLE=worker; export DMLC_PS_ROOT_PORT=9001; export DMLC_NUM_WORKER=2; export DMLC_NUM_SERVER=2 
cd /path/to;
python train_mnist.py --kv-store dist_sync

wk2---export DMLC_WORKER_ID=1; export DMLC_PS_ROOT_URI=x.x.x.x; export DMLC_ROLE=worker; export DMLC_PS_ROOT_PORT=9001; export DMLC_NUM_WORKER=2; export DMLC_NUM_SERVER=2 
cd /path/to;
python train_mnist.py --kv-store dist_sync
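The five snippets above differ only in role and ID, so the pattern is easy to generate programmatically. A minimal sketch, assuming the same DMLC_* environment-variable scheme (the IP and port are placeholders):

```python
# Sketch: generate the per-role environment for the manual launch above.
# The DMLC_* names are the ones used in the shell snippets; x.x.x.x is a
# placeholder for the scheduler's real IP.

def role_env(role, rank=None, root="x.x.x.x", port=9001,
             n_workers=2, n_servers=2):
    env = {
        "DMLC_PS_ROOT_URI": root,
        "DMLC_PS_ROOT_PORT": str(port),
        "DMLC_ROLE": role,
        "DMLC_NUM_WORKER": str(n_workers),
        "DMLC_NUM_SERVER": str(n_servers),
    }
    if role == "server":
        env["DMLC_SERVER_ID"] = str(rank)
    elif role == "worker":
        env["DMLC_WORKER_ID"] = str(rank)
    return env

# scheduler first, then the servers, then the workers -- the order above
plan = ([role_env("scheduler")]
        + [role_env("server", i) for i in range(2)]
        + [role_env("worker", i) for i in range(2)])
for env in plan:
    print(env["DMLC_ROLE"],
          env.get("DMLC_SERVER_ID", env.get("DMLC_WORKER_ID", "-")))
```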

Launch process analysis

The experiment runs on two machines, x.x.x.x/x.

Launch script:

python ../../tools/launch.py -n 2 --launcher ssh -H hosts `which python` train_mnist.py --kv-store=dist_sync

Analysis of the processes running on the two machines after launch:

  • Launcher machine

The command /home/xxx/anaconda2/envs/ps_lite/bin/python train_mnist.py --kv-store=dist_sync is executed 3 times. The first instance is the parameter server's scheduler process, started by the tracker code pserver = PSTracker(hostIP=hostIP, cmd=pscmd, envs=envs); the scheduler process is spawned inside the PSTracker constructor. The other two are the server and worker processes started via ssh from the launcher machine. All of these processes are launched asynchronously on separate threads.

ssh -o StrictHostKeyChecking=no x.x.x.x -p 22 export LD_LIBRARY_PATH=.::/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/home/xxx/xxx-workspace/cuda-8.0-cudnn-6.0/lib64; export DMLC_ROLE=server; export DMLC_PS_ROOT_PORT=9091; export DMLC_PS_ROOT_URI=x.x.x.x; export DMLC_NUM_SERVER=2; export DMLC_NUM_WORKER=2; cd /path/to/example/image-classification/; `which python` train_mnist.py --kv-store=dist_sync

This ssh command is started four times, one invocation toward each of the two workers and two servers. The IPs are read from the hosts file; the parameter servers all live on the machine x.x.x.x.

bash -c export LD_LIBRARY_PATH=.::/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/home/xxx/xxx-workspace/cuda-8.0-cudnn-6.0/lib64; export DMLC_ROLE=server; export DMLC_PS_ROOT_PORT=9092; export DMLC_PS_ROOT_URI=x.x.x.x; export DMLC_NUM_SERVER=2; export DMLC_NUM_WORKER=2; cd /path/to/example/image-classification/; `which python` train_mnist.py --kv-store=dist_sync

This process is started twice: the machine receives two requests from the launcher machine and starts the server process and the worker process respectively.
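The asynchronous, per-role launching described above can be sketched as follows. This is a simplified stand-in for what tracker.py does, not the real implementation: the real code goes through ssh and forwards more environment variables.

```python
import os
import subprocess
import sys
import threading

def launch_async(cmd, env_extra=None):
    """Run cmd on a separate thread, as the tracker does for each role."""
    env = dict(os.environ)
    env.update(env_extra or {})
    t = threading.Thread(target=subprocess.check_call,
                         args=(cmd,), kwargs={"env": env})
    t.daemon = True  # mirrors the fire-and-forget launch in the tracker
    t.start()
    return t

# demo with a trivial command standing in for train_mnist.py
t = launch_async([sys.executable, "-c", "print('role started')"],
                 {"DMLC_ROLE": "scheduler"})
t.join()
```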

  • Worker node

The command /home/xxx/anaconda2/envs/ps_lite/bin/python train_mnist.py --kv-store=dist_sync is executed 2 times.

bash -c export LD_LIBRARY_PATH=.::/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/home/xxx/xxx-workspace/cuda-8.0-cudnn-6.0/lib64; export DMLC_ROLE=worker; export DMLC_PS_ROOT_PORT=9092; export DMLC_PS_ROOT_URI=x.x.x.x; export DMLC_NUM_SERVER=2; export DMLC_NUM_WORKER=2; cd /path/to/example/image-classification/; `which python` train_mnist.py --kv-store=dist_sync

This process is started twice: the node receives two requests from the launcher machine and starts the server process and the worker process respectively.

Parameters after launch

Namespace(archives=[], auto_file_cache=True, cluster='ssh', command=['`which', 'python`', 'train_mnist.py', '--kv-store=dist_sync'], env=[], files=[], hdfs_tempdir='/tmp', host_file='hosts', host_ip=None, jobname=None, kube_namespace='default', kube_server_image='mxnet/python', kube_server_template=None, kube_worker_image='mxnet/python', kube_worker_template=None, log_file=None, log_level='INFO', mesos_master=None, num_servers=2, num_workers=2, queue='default', server_cores=1, server_memory='1g', server_memory_mb=1024, sge_log_dir=None, ship_libcxx=None, slurm_server_nodes=None, slurm_worker_nodes=None, sync_dst_dir='None', worker_cores=1, worker_memory='1g', worker_memory_mb=1024, yarn_app_classpath=None, yarn_app_dir='/path/to/tools/../dmlc-core/tracker/dmlc_tracker/../yarn')

Of all the parameters above, only num_workers, num_servers, cluster, host_file, sync_dst_dir, and command are supplied externally; the rest come from

try:
    from dmlc_tracker import opts
except ImportError:
    print("Can't load dmlc_tracker package.  Perhaps you need to run")
    print("    git submodule update --init --recursive")
    raise
dmlc_opts = opts.get_opts(args)

whose last line loads them in; opts.py defines parsers for many parameters.
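The split between externally supplied values and defaults can be seen by mimicking what such an option parser does. This is a deliberately trimmed-down sketch, not the real opts.py, which defines far more options:

```python
import argparse

# Hypothetical, trimmed-down stand-in for dmlc_tracker/opts.py: every
# option gets a default, and only the ones passed on the command line
# differ from it. REMAINDER captures the user command verbatim.
parser = argparse.ArgumentParser()
parser.add_argument("--cluster", default="ssh")
parser.add_argument("--num-workers", type=int, default=1)
parser.add_argument("--num-servers", type=int, default=0)
parser.add_argument("--host-file", default=None)
parser.add_argument("--sync-dst-dir", default="None")
parser.add_argument("command", nargs=argparse.REMAINDER)

args = parser.parse_args(
    ["--num-workers", "2", "--num-servers", "2", "--host-file", "hosts",
     "python", "train_mnist.py", "--kv-store=dist_sync"])
print(args.num_workers, args.cluster, args.command)
```

Everything after the first positional token ("python") lands in args.command untouched, which is why flags like --kv-store=dist_sync reach train_mnist.py instead of being swallowed by the launcher.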

Call chain: ssh.py: submit(args) → tracker.py: submit() → fun_submit → ssh.py: ssh_submit()

The hosts object wraps the IP addresses and corresponding ports read from the hosts file.

In the ssh_submit() method in ssh.py, a for loop takes the IPs from the hosts file one by one and starts the servers and workers.
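That loop can be sketched roughly as below. This is a simplified approximation of ssh.py's behavior, not its actual code: host names are placeholders, and the real code also forwards LD_LIBRARY_PATH and other environment.

```python
# Rough sketch of the ssh_submit() loop: for each role instance, pick a
# host round-robin from the hosts file and build the ssh command that
# exports the DMLC_* env and runs the user command remotely.

def build_ssh_commands(hosts, n_servers, n_workers, port, root, user_cmd):
    shared = {
        "DMLC_PS_ROOT_URI": root,
        "DMLC_PS_ROOT_PORT": str(port),
        "DMLC_NUM_SERVER": str(n_servers),
        "DMLC_NUM_WORKER": str(n_workers),
    }
    cmds = []
    for i in range(n_servers + n_workers):
        role = "server" if i < n_servers else "worker"
        host = hosts[i % len(hosts)]           # round-robin over hosts
        env = dict(shared, DMLC_ROLE=role)
        exports = "; ".join("export %s=%s" % kv for kv in sorted(env.items()))
        cmds.append("ssh -o StrictHostKeyChecking=no %s '%s; %s'"
                    % (host, exports, user_cmd))
    return cmds

# x.x.x.y is a placeholder for the second host in the hosts file
for c in build_ssh_commands(["x.x.x.x", "x.x.x.y"], 2, 2, 9091, "x.x.x.x",
                            "python train_mnist.py --kv-store=dist_sync"):
    print(c)
```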

Code analysis

I have only tried the ssh launch method, and so far have only read the Python-level code, which is where the individual processes get started. There are mainly four modules involved: launch.py calls ssh.py, ssh.py calls tracker.py, and together they start the scheduler, server, and worker processes in turn.

