1c8b: Review and Overview
This article walks through installing TensorFlow on Ubuntu 16.04 and verifying that the installation works, covering both the GPU-enabled and the CPU-only install, plus the verification steps for each.
System overview:
- Ubuntu 16.04, 64-bit
- NVIDIA GTX 770M
Contents:
- Install the CUDA Toolkit following NVIDIA's official documentation
- Verify the CUDA installation
- Install TensorFlow (GPU-enabled) following the official TensorFlow documentation
- Verify the TensorFlow installation
The original documentation is long; this article condenses it for Ubuntu users.
1c8c: Recommended TensorFlow Starter Resources
My first TensorFlow introduction, which I recommend, is Mofan Zhou's (周莫煩) TensorFlow basics series on YouTube; click <a target="_blank">here</a> to watch it (a VPN may be needed). The comments are very positive, it is quite beginner-friendly, and there are around 20 episodes so far; consider subscribing, and hopefully he keeps updating.
1c8d: Installing TensorFlow
First, make sure you can reach Google-hosted sites (a VPN may be needed). Open the TensorFlow Linux installation page on the official site. There you can choose between installing the CPU-only TF and the GPU-enabled TF (TensorFlow is abbreviated TF below); here we install the GPU-enabled TF.
- Check that the system's hardware and software meet NVIDIA's requirements. The full pre-install checklist:
- Check that the system's GPU is on the CUDA-supported list.
Run lspci | grep -i nvidia to see the GPU model; if there is no output, run update-pciids first, then re-run the command.
Then look up your GPU model in the list of CUDA-supported GPUs.
- Check that your Linux release is on the CUDA-supported list.
Run uname -m && cat /etc/*release to see the architecture and distribution.
(Table: Linux distributions supported by CUDA 8; see the CUDA installation guide.)
- Check that gcc is installed.
Run gcc --version; if it errors, install the appropriate development packages.
- Check that the kernel headers and the necessary development packages are installed.
Install them with sudo apt-get install linux-headers-$(uname -r).
For other Linux distributions, see section 2.4 of the CUDA installation guide.
- If all of the above checks pass, continue with step 4; otherwise, go to step 8 and install the CPU-only TensorFlow.
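The checklist above can be rolled into one small script. A sketch, not an official tool: the command names come from the steps above, and the kernel-headers path is an assumption about where Ubuntu installs them.

```shell
#!/bin/sh
# Pre-install checklist for CUDA on Ubuntu (sketch, not exhaustive).

# 1. GPU model -- compare the result against the CUDA-supported GPU list.
if command -v lspci >/dev/null 2>&1 && lspci | grep -qi nvidia; then
    gpu="$(lspci | grep -i nvidia | head -n1)"
else
    gpu="no NVIDIA device found (try: sudo update-pciids)"
fi
echo "GPU:     $gpu"

# 2. Architecture / distribution -- must be a supported combination.
arch="$(uname -m)"
echo "Arch:    $arch"

# 3. gcc toolchain.
if command -v gcc >/dev/null 2>&1; then
    gcc_line="$(gcc --version | head -n1)"
else
    gcc_line="not installed (install build-essential)"
fi
echo "gcc:     $gcc_line"

# 4. Kernel headers for the running kernel (Ubuntu header path assumed).
if [ -d "/usr/src/linux-headers-$(uname -r)" ]; then
    headers="present"
else
    headers='missing (run: sudo apt-get install linux-headers-$(uname -r))'
fi
echo "Headers: $headers"
```

Each line prints a labeled result, so the script is safe to run on a machine that fails some checks.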
GPU support requires installing the CUDA Toolkit; the TensorFlow docs link to NVIDIA's official documentation for this.
Download the latest CUDA Toolkit from NVIDIA: at the bottom of the page, click, in order,
Linux -> x86_64 -> Ubuntu -> 16.04 -> deb (local)
then click Download to start the download.
- Once the download completes, run the following in a terminal to install CUDA:

```
sudo dpkg -i cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64.deb
sudo apt-get update
sudo apt-get install cuda
```
- Install the CUDA profiling library (CUPTI):

```
sudo apt-get install libcupti-dev
```
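A quick sanity check after apt finishes. This is a sketch; /usr/local/cuda-8.0 is the default install prefix for this toolkit release.

```shell
#!/bin/sh
# Verify the toolkit directory exists where the deb package installs it by default.
if [ -d /usr/local/cuda-8.0 ]; then
    cuda_status="found at /usr/local/cuda-8.0"
else
    cuda_status="not found -- re-check the dpkg/apt-get output"
fi
echo "CUDA toolkit: $cuda_status"
```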
- Install TF via the officially recommended virtualenv route. The full steps:
- Install pip, virtualenv, and the Python development package:

```
sudo apt-get install python-pip python-dev python-virtualenv
```
- Create the virtualenv environment:

```
virtualenv --system-site-packages <tensorflow>
```

The command above creates a virtual environment (here `<tensorflow>` is the target directory) for the system's default Python version. To pin a specific Python version, add the --python option:

```
virtualenv --system-site-packages --python=<path-to-python-executable> <tensorflow>
```
- Activate the virtual environment:

```
$ source ~/tensorflow/bin/activate      # bash, sh, ksh, or zsh
$ source ~/tensorflow/bin/activate.csh  # csh or tcsh
```
- Install TF with whichever command matches your Python version and GPU choice:

```
(tensorflow)$ pip install --upgrade tensorflow       # for Python 2.7
(tensorflow)$ pip3 install --upgrade tensorflow      # for Python 3.n
(tensorflow)$ pip install --upgrade tensorflow-gpu   # for Python 2.7 and GPU
(tensorflow)$ pip3 install --upgrade tensorflow-gpu  # for Python 3.n and GPU
```
- If the previous step fails, the pip in use may be older than 8.1. Upgrade it and retry:

```
pip install --upgrade pip   # upgrade pip, then retry the previous step
```

If you hit a permissions error, use:

```
sudo -H pip install --upgrade pip
```
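To see whether the upgrade is needed at all, here is a hedged version check; it relies on GNU `sort -V` for version ordering, and 8.1 is the threshold mentioned above.

```shell
#!/bin/sh
# Compare the active pip's version against 8.1 using version sort.
pipver="$(pip --version 2>/dev/null | awk '{print $2}')"
if [ -z "$pipver" ]; then
    pip_msg="pip not found on PATH"
elif [ "$(printf '%s\n8.1\n' "$pipver" | sort -V | tail -n1)" = "$pipver" ]; then
    pip_msg="pip $pipver is >= 8.1, no upgrade needed"
else
    pip_msg="pip $pipver is older than 8.1 -- upgrade first"
fi
echo "$pip_msg"
```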
- At this point both CUDA and TF are installed.
1c8e: Verifying the CUDA and TF Installations
- Verify the CUDA installation.
1) Reboot the machine so the NVIDIA GPU loads the newly installed driver. After rebooting, run cat /proc/driver/nvidia/version. Output like the following means the GPU driver loaded successfully:

```
NVRM version: NVIDIA UNIX x86_64 Kernel Module  375.26  Thu Dec 8 18:36:43 PST 2016
GCC version:  gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4)
```
2) Configure the environment variables:

```
# cuda env
export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64/${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
export LPATH=/usr/lib/nvidia-375:$LPATH
export LIBRARY_PATH=/usr/lib/nvidia-375:$LIBRARY_PATH
export CUDA_HOME=/usr/local/cuda-8.0
```

Be sure to set these; otherwise compiling the samples later fails with:

```
Makefile:346: recipe for target 'cudaDecodeGL' failed
```
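After adding the exports to your shell startup file and opening a new shell, a quick check that they took effect (paths as assumed in the exports above):

```shell
#!/bin/sh
# Confirm the CUDA paths are visible to this shell.
case ":$PATH:" in
    *cuda-8.0*) path_msg="PATH includes cuda-8.0" ;;
    *)          path_msg="PATH is missing /usr/local/cuda-8.0/bin" ;;
esac
echo "$path_msg"
echo "LD_LIBRARY_PATH: ${LD_LIBRARY_PATH:-unset}"
if command -v nvcc >/dev/null 2>&1; then
    nvcc_line="$(nvcc -V | tail -n1)"
else
    nvcc_line="nvcc not on PATH yet"
fi
echo "nvcc: $nvcc_line"
```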
3) Install the CUDA sample programs:

```
cuda-install-samples-8.0.sh <dir>
```

The script is already on the system PATH, so it can be run directly; `<dir>` is a directory of your choosing. On success it creates an NVIDIA_CUDA-8.0_Samples directory inside `<dir>`.
4) Build the samples to verify the CUDA installation.
Before building, confirm that the environment variables from step 2) are set correctly and that the driver version from step 1) displays as expected. Enter NVIDIA_CUDA-8.0_Samples and run make. A successful build ends with output like:

```
/usr/local/cuda-8.0/bin/nvcc -ccbin g++ -I../../common/inc -I../common/UtilNPP -I../common/FreeImage/include -m64 -gencode arch=compute_20,code=compute_20 -o jpegNPP.o -c jpegNPP.cpp
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
/usr/local/cuda-8.0/bin/nvcc -ccbin g++ -m64 -gencode arch=compute_20,code=compute_20 -o jpegNPP jpegNPP.o -L../common/FreeImage/lib -L../common/FreeImage/lib/linux -L../common/FreeImage/lib/linux/x86_64 -lnppi -lnppc -lfreeimage
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
mkdir -p ../../bin/x86_64/linux/release
cp jpegNPP ../../bin/x86_64/linux/release
make[1]: Leaving directory '/home/yuanzimiao/Downloads/NVIDIA_CUDA-8.0_Samples/7_CUDALibraries/jpegNPP'
Finished building CUDA samples
```

Note: the installed CUDA version can be checked with nvcc -V.
5) Run the samples.
Enter the bin directory and run ./deviceQuery. If CUDA is installed and configured correctly, the output looks like:

```
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 770M"
  CUDA Driver Version / Runtime Version          8.0 / 8.0
  CUDA Capability Major/Minor version number:    3.0
  Total amount of global memory:                 3017 MBytes (3163357184 bytes)
  ( 5) Multiprocessors, (192) CUDA Cores/MP:     960 CUDA Cores
  GPU Max Clock rate:                            797 MHz (0.80 GHz)
  Memory Clock rate:                             2004 Mhz
  Memory Bus Width:                              192-bit
  L2 Cache Size:                                 393216 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 770M
Result = PASS
```

Result = PASS means the check passed. Next, run ./bandwidthTest to confirm the system and the CUDA components communicate correctly; normal output looks like:

```
[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: GeForce GTX 770M
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     11534.4

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     11768.0

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     72735.8

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
```

Result = PASS means this check passed as well.
Note: if errors occur during the run, SELinux may be enabled or required NVIDIA files may be missing; see section 6.2.2.3 of the official documentation.
This completes the CUDA verification.
- Verify the TensorFlow installation.
Activate the virtual environment, then run:

```
$ python
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))
```

If it prints Hello, TensorFlow!, TF is installed correctly.
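A slightly stronger check, sketched below with the same TF 1.x session API used above (the virtualenv must be active): run a small matrix multiply and let log_device_placement report which device each op lands on.

```shell
#!/bin/sh
# Run a tiny matmul; log_device_placement makes TF print whether the
# ops were placed on /gpu:0. Requires TF to be installed in this env.
if python - <<'EOF'
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0], [6.0]])
c = tf.matmul(a, b)
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))
EOF
then
    tf_status="ok"
else
    tf_status="failed (is the virtualenv active and TF installed?)"
fi
echo "TF check: $tf_status"
```

On a working GPU install the device-placement log should show the MatMul op assigned to /gpu:0.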
Edit 1

With the GPU-enabled TF, `import tensorflow` produced this error log:

```
>>> import tensorflow as t
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:126] Couldn't open CUDA library libcudnn.so.5. LD_LIBRARY_PATH: /usr/local/cuda-8.0/lib64/
I tensorflow/stream_executor/cuda/cuda_dnn.cc:3517] Unable to load cuDNN DSO
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
Couldn't open CUDA library libcudnn.so.5. LD_LIBRARY_PATH: /usr/local/cuda-8.0/lib64/
```
Googling showed the cause: the cuDNN package was not installed.
Click here to open the cuDNN download page. Downloading cuDNN requires an NVIDIA account; registration includes a short survey you can answer however you like. Once registered, the download is available.
After downloading, run:

```
sudo tar -xvf cudnn-8.0-linux-x64-v5.1-rc.tgz -C /usr/local
```
(This assumes CUDA is installed under /usr/local.)
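The tarball unpacks into a cuda/ subdirectory, so the library should land under /usr/local/cuda/lib64. A hedged check (path is an assumption based on the extract command above):

```shell
#!/bin/sh
# Confirm the loader can find cuDNN where TF will look for it.
if ls /usr/local/cuda/lib64/libcudnn.so.5* >/dev/null 2>&1; then
    cudnn_msg="libcudnn.so.5 present under /usr/local/cuda/lib64"
else
    cudnn_msg="libcudnn.so.5 not found -- check the extract path and LD_LIBRARY_PATH"
fi
echo "$cudnn_msg"
```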
Then import again:

```
>>> import tensorflow as t
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:910] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 770M
major: 3 minor: 0 memoryClockRate (GHz) 0.797
pciBusID 0000:01:00.0
Total memory: 2.95GiB
Free memory: 2.52GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 770M, pci bus id: 0000:01:00.0)
```
> Those are simply warnings. They are just informing you if you build TensorFlow from source it can be faster on your machine. Those instructions are not enabled by default on the builds available I think to be compatible with more CPUs as possible.
All the libraries now load successfully. The lines beginning with W are warnings: this TF build was not compiled from source, so several CPU instruction-set extensions (SSE3/SSE4/AVX/AVX2/FMA) are left disabled. That only affects CPU computation speed; GPU computation is unaffected.