TensorFlow Installation Guide

1. Introduction to TensorFlow

TensorFlow™ is an open-source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) passed between them. This flexible architecture lets you deploy computation to one or more CPUs or GPUs on a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers on the Google Brain team within Google's Machine Intelligence research organization for machine learning and deep neural network research, but the system is general enough to be applied to many other domains as well.
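
As a minimal illustration of the graph model described above (a sketch using the TF 1.x API assumed throughout this guide), the following builds a tiny graph whose nodes are operations and whose edges carry tensors:

import tensorflow as tf

# Two constant nodes produce tensors; matmul is a node that consumes them.
a = tf.constant([[1.0, 2.0]])        # shape (1, 2)
b = tf.constant([[3.0], [4.0]])      # shape (2, 1)
product = tf.matmul(a, b)            # the edge carries the (1, 1) result tensor

# Nothing is computed until the graph is run in a session.
with tf.Session() as sess:
    print(sess.run(product))         # [[11.]]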

2. Test Environment

This test was performed on CentOS 7:

[root@localhost ~]# uname -a
Linux localhost.localdomain 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

3. YUM Repository Configuration

The main purpose of yum is to make adding, removing, and updating RPM packages more convenient: it resolves package dependencies automatically and simplifies keeping a large number of systems up to date. The yum repository configuration that ships with CentOS can be used as-is, or you can manually configure the yum mirrors provided by the Alibaba Cloud or NetEase open-source mirror sites.

3.1 The EPEL Repository

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more.

If you use the yum repositories that ship with CentOS 7, you can install the EPEL repository directly with the following command:

yum install epel-release

It can also be installed as follows:

  • RHEL/CentOS 7:
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
  • on RHEL 7 it is recommended to also enable the optional and extras repositories since EPEL packages may depend on packages from these repositories:
subscription-manager repos --enable "rhel-*-optional-rpms" --enable "rhel-*-extras-rpms"
  • RHEL/CentOS 6:
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm

4. GPU Support (Optional)

==Note: this test was performed in a virtual machine with no GPU hardware, so CUDA, cuDNN, and tensorflow-gpu were not tested.==

If you want to install the GPU-enabled version of TensorFlow, you must make sure the correct versions of the CUDA SDK and cuDNN are installed on your system.
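
Once the components described in the following subsections are in place, a quick way to confirm that TensorFlow can actually see the GPU is a check like the one below (a sketch using the TF 1.x test API; it is not part of the original procedure and only works after tensorflow-gpu itself has been installed):

import tensorflow as tf

# True only if TensorFlow was built with CUDA support and a usable GPU is visible.
print(tf.test.is_built_with_cuda())
print(tf.test.is_gpu_available())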

4.1 Installing CUDA

CUDA (Compute Unified Device Architecture) is a computing platform from the GPU vendor NVIDIA. CUDA™ is a general-purpose parallel computing architecture that enables GPUs to solve complex computational problems. It includes the CUDA instruction set architecture (ISA) and the parallel compute engine inside the GPU. Developers can write programs for the CUDA™ architecture in C, the most widely used high-level programming language, and the resulting programs run with very high performance on CUDA-capable processors. Since CUDA 3.0, C++ and Fortran are also supported.

Installing CUDA:

wget http://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-repo-rhel7-7.0-28.x86_64.rpm
rpm -iv cuda-repo-rhel7-7.0-28.x86_64.rpm
yum search cuda
yum install cuda

You also need to set the LD_LIBRARY_PATH and CUDA_HOME environment variables. Consider adding the commands below to your ~/.bash_profile so that they take effect automatically at every login.

Find the CUDA installation path:

[root@localhost tensorflow]# find / -name cuda
/root/caffe-master/.build_release/cuda
/usr/local/cuda-7.0/targets/x86_64-linux/include/thrust/system/cuda
/usr/local/cuda

In this test the CUDA installation directory is /usr/local/cuda, so the environment variables to add are:

export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64"
export CUDA_HOME=/usr/local/cuda
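
After logging in again (or sourcing ~/.bash_profile), you can confirm that the variables are visible from Python with a quick check like this (a hypothetical sanity check, not part of the original steps):

import os

# Expect /usr/local/cuda, and an LD_LIBRARY_PATH that contains /usr/local/cuda/lib64.
print(os.environ.get("CUDA_HOME"))
print(os.environ.get("LD_LIBRARY_PATH"))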

4.2 Downloading and Installing the GPU Driver

Log in to the NVIDIA website, download the latest driver for your operating system, and install it.

4.3 Downloading and Installing cuDNN v3

NVIDIA cuDNN is a GPU-accelerated library for deep neural networks. It emphasizes performance, ease of use, and low memory overhead. cuDNN can be integrated into higher-level machine learning frameworks such as UC Berkeley's popular Caffe software; its simple drop-in design lets developers focus on designing and implementing neural network models rather than tuning performance, while still getting high-performance modern parallel computing on the GPU.

tar -xvf cudnn-7-0.tgz
cp cuda/include/cudnn.h /usr/local/cuda/include/
cp cuda/lib64/libcudnn* /usr/local/cuda/lib64/

5. Downloading and Installing TensorFlow

5.1 Requirements

The TensorFlow Python API currently supports Python 2.7 and Python 3.3 or later.

pip should be at version 8.1 or later (or use pip3); otherwise installing tensorflow fails with an error like:

pip install --upgrade tensorflow
Could not find any downloads that satisfy the requirement tensorflow

The GPU-enabled version (Linux only) requires CUDA Toolkit 7.0 and cuDNN 6.5 v2; see the CUDA installation section above for details.
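
A quick way to confirm that your interpreter and pip meet these requirements is a check like the following (a small sketch; run it with whichever interpreter you intend to install TensorFlow into):

import sys

print(sys.version_info)        # need 2.7 or 3.3+

try:
    import pip
    print(pip.__version__)     # need 8.1 or later for the tensorflow wheels
except ImportError:
    print("pip is not installed")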

5.2 Installation Overview

TensorFlow can be installed in any of the following ways:

  • Pip install: installs TensorFlow directly on your machine. It may upgrade previously installed Python packages and can therefore affect Python programs already running on the machine.

  • Virtualenv install: installs TensorFlow in an isolated directory and does not affect Python programs already running on the machine.

  • Docker install: installs TensorFlow in a separate Docker container and does not affect any other program on your machine.
  • Anaconda install: Anaconda is a Python scientific computing distribution that bundles many third-party scientific libraries. It uses conda as its package manager and provides its own environments, similar to Virtualenv: as with Virtualenv, conda stores the dependencies required by different Python projects in separate places. Installing TensorFlow under Anaconda does not overwrite previously installed Python packages.
  • Build and install from source

5.3 Download and Installation

This document covers only the pip and Virtualenv installation methods.

5.3.1 Installing TensorFlow with pip

  • Pip install

Check the Python version; the TensorFlow Python API currently supports Python 2.7 and Python 3.3 or later.

# Check the current system Python version
[root@localhost ~]# python -V
Python 2.7.5

pip is a package management system for installing and managing Python packages. If pip is not yet installed, install it first with the following commands (install pip3 if you are using Python 3):

yum install python-pip python-devel   # for Python 2.7
yum install python34-pip python34-devel # for Python 3.4
  • Configuring a domestic (China) PyPI mirror

By default pip uses <https://pypi.python.org/simple> (hosted overseas) as the base URL of the Python Package Index, so installations may fail for network reasons. Here the PyPI index is switched to the Tsinghua University mirror. (The 163 or Alibaba Cloud mirrors can also be used, but the 163 mirror is not always up to date and may not offer the latest package versions, and the Alibaba Cloud mirror has occasionally failed to install packages in practice.)

[root@localhost ~]# mkdir ~/.pip/
[root@localhost ~]# vim ~/.pip/pip.conf 
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple/
[install]
trusted-host=pypi.tuna.tsinghua.edu.cn

If you do not configure this file and only want to use a domestic PyPI mirror temporarily, pass the -i URL option, for example:

pip install --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple/ pip
  • Upgrading pip
pip install --upgrade pip # for Python 2.7
pip3 install --upgrade pip # for Python 3.n

If the upgrade ends with the following message:

[error message screenshot omitted]

you can first install matplotlib with pip and then retry the pip upgrade:
pip install --upgrade matplotlib
  • Installing TensorFlow
# Linux 64-bit, CPU only, Python 2.7:
pip install --upgrade tensorflow

# Linux 64-bit, GPU enabled, Python 2.7. Requires CUDA toolkit 7.5 and CuDNN v4.
pip install --upgrade tensorflow-gpu

TensorFlow installation for Python 3:

# Linux 64-bit, CPU only, Python 3.4:
pip3 install --upgrade tensorflow

# Linux 64-bit, GPU enabled, Python 3.4. Requires CUDA toolkit 7.5 and CuDNN v4.
pip3 install --upgrade tensorflow-gpu

At this point you can test whether the installation succeeded.
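
A minimal smoke test (a sketch; Section 6.1 below walks through a fuller check) is simply to import the package and print its version:

import tensorflow as tf

print(tf.__version__)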

5.3.2 Installing TensorFlow with Virtualenv

It is recommended to install TensorFlow in an isolated environment created with virtualenv, which makes installation problems easier to troubleshoot. Virtualenv is a tool for storing and retrieving the dependencies of different Python projects in separate locations. Installing TensorFlow with Virtualenv does not overwrite previously installed TensorFlow Python dependencies and does not affect Python programs already running on the machine.
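
To see the isolation in action, you can compare the interpreter paths reported inside and outside the environment (a hypothetical check, assuming the ~/tensorflow environment created below):

import sys

# Inside the activated environment these point at ~/tensorflow; outside, at the system Python.
print(sys.executable)
print(sys.prefix)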

  • Install pip and Virtualenv
# On Linux, Python 2.7
yum install python-pip python-devel python-virtualenv
pip install --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple/ pip
# Python 3.4
yum install python34-pip python34-devel python-virtualenv
pip3 install --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple/ pip
  • Create a Virtualenv environment
Create a Virtualenv environment under ~/tensorflow:
# Create a Python 2 virtualenv
virtualenv --system-site-packages ~/tensorflow
# or
virtualenv --system-site-packages --python=/usr/bin/python2.7 ~/tensorflow
# Create a Python 3.4 virtualenv
virtualenv --system-site-packages --python=/usr/bin/python3.4 ~/tensorflow
  • Activate the Virtualenv environment and install TensorFlow in it
Activate the Virtualenv:
source ~/tensorflow/bin/activate  # if using bash
source ~/tensorflow/bin/activate.csh  # if using csh
(tensorflow)$  # the shell prompt should change

Inside the virtualenv, install TensorFlow:

No way was found to define a domestic PyPI mirror through a configuration file inside the virtual environment, so the temporary -i option is used here for installing and upgrading packages.

# Python 2.7
(tensorflow) # pip install --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple/ pip
# Python 3.4
(tensorflow) # pip3 install --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple/ pip
  • Install TensorFlow
# Linux 64-bit, CPU only, Python 2.7:
(tensorflow)$ pip install --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple/ tensorflow

# Linux 64-bit, GPU enabled, Python 2.7. Requires CUDA toolkit 7.5 and CuDNN v4.
(tensorflow)$ pip install --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple/ tensorflow-gpu
TensorFlow installation for Python 3:
# Linux 64-bit, CPU only, Python 3.4:
(tensorflow)$ pip3 install --upgrade  -i https://pypi.tuna.tsinghua.edu.cn/simple/ tensorflow

# Linux 64-bit, GPU enabled, Python 3.4. Requires CUDA toolkit 7.5 and CuDNN v4.
(tensorflow)$ pip3 install --upgrade  -i https://pypi.tuna.tsinghua.edu.cn/simple/ tensorflow-gpu

TensorFlow is now installed, and you can run the test programs.

After installation, you must activate the Virtualenv environment each time before using TensorFlow, and deactivate it when you no longer need TensorFlow.

Activate the Virtualenv:
source ~/tensorflow/bin/activate  # if using bash
source ~/tensorflow/bin/activate.csh  # if using csh
(tensorflow)$  # the shell prompt should change

# when you are done with TensorFlow
(tensorflow)$ deactivate  # deactivate the virtualenv
$  # your prompt returns to normal

6. Testing TensorFlow

6.1 Verify That TensorFlow Installed Successfully

[root@localhost ~]# python
Python 2.7.5 (default, Aug  4 2017, 00:39:18) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> sess.run(hello)
'Hello, TensorFlow!'
>>> a = tf.constant(10)
>>> b = tf.constant(32)
>>> sess.run(a + b)
42
>>> sess.close()
>>>
# When running a pip-installed tensorflow you may see the warnings below, which say that compiling TensorFlow with SSE4.1, SSE4.2, and AVX could speed up CPU computation. They do not affect usage; they are a consequence of how TensorFlow was installed: the pip binaries are not compiled with these optimizations, so even though your CPU supports SSE4.1 and similar instructions, they are not used to accelerate training. To eliminate the warnings, TensorFlow has to be built and installed from source.
>>> sess = tf.Session()
2018-04-09 20:34:10.185254: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2018-04-09 20:34:10.185305: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2018-04-09 20:34:10.185320: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.

6.2 TensorFlow MNIST Test

Note: for details on the TensorFlow MNIST test, see the Chinese TensorFlow tutorial referenced in Chapter 7.

TensorFlow is a very powerful library for large-scale numerical computation. One of the tasks it excels at is implementing and training deep neural networks.

When we start learning to program, the first thing we usually do is print "Hello World". Just as programming has Hello World, machine learning has MNIST. MNIST is an entry-level computer vision dataset consisting of images of handwritten digits, each with a label telling us which digit it is.
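
For orientation, the dataset used below has the following shapes (a sketch based on the same input_data helper used in the tests; the files are downloaded on first use):

from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
print(mnist.train.images.shape)   # (55000, 784): 28x28 images flattened to 784 values
print(mnist.train.labels.shape)   # (55000, 10): one-hot encoded digit labels
print(mnist.test.images.shape)    # (10000, 784)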

6.2.1 Locate the TensorFlow Installation Directory

[root@localhost kk]# find / -name tensorflow
/usr/lib/python2.7/site-packages/tensorflow
/usr/lib/python2.7/site-packages/tensorflow/include/tensorflow

6.2.2 MNIST Test 1

We will train a machine learning model to predict which digit an image contains.

[root@localhost]# cd /usr/lib/python2.7/site-packages/tensorflow/examples/tutorials/mnist/
[root@localhost mnist]# ls
__init__.py  __init__.pyc  input_data.py  input_data.pyc  mnist.py  mnist.pyc
[root@localhost mnist]# python
Python 2.7.5 (default, Aug  4 2017, 00:39:18) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow.examples.tutorials.mnist.input_data as input_data
>>> mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Extracting MNIST_data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
>>> 

After running the commands above, the test data has been downloaded into the MNIST_data folder:
[root@localhost mnist]# ls
__init__.py  __init__.pyc  input_data.py  input_data.pyc  MNIST_data  mnist.py  mnist.pyc
[root@localhost mnist]# cd MNIST_data/
[root@localhost MNIST_data]# ls
t10k-images-idx3-ubyte.gz  t10k-labels-idx1-ubyte.gz  train-images-idx3-ubyte.gz  train-labels-idx1-ubyte.gz

If the `import ... input_data` statement shown above reports the following error:

/usr/lib64/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters

The cause is a version conflict between h5py and numpy. The fix has been merged into h5py's master branch upstream, but a new release has not been published yet; until it is, you can work around the problem by downgrading numpy. The downgrade command is:
pip install -U -i  https://pypi.tuna.tsinghua.edu.cn/simple/ numpy==1.13.0
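
After downgrading, you can confirm the installed versions (a hypothetical check; only the numpy version is pinned by the command above):

import numpy
import h5py

print(numpy.__version__)   # expect 1.13.0 after the downgrade
print(h5py.__version__)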

Once the error is resolved, you can continue with the test.

The complete test steps are as follows:

[root@localhost mnist]# python
Python 2.7.5 (default, Aug  4 2017, 00:39:18) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow.examples.tutorials.mnist.input_data as input_data
>>> mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
>>> import tensorflow as tf
>>> x = tf.placeholder(tf.float32, [None, 784])
>>> W = tf.Variable(tf.zeros([784,10]))
>>> b = tf.Variable(tf.zeros([10]))
>>> y = tf.nn.softmax(tf.matmul(x,W) + b)
>>> y_ = tf.placeholder("float", [None,10])
>>> cross_entropy = -tf.reduce_sum(y_*tf.log(y))
>>> train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
>>> init = tf.initialize_all_variables()
# A warning indicates that some functions have changed in newer versions:
WARNING:tensorflow:From /usr/lib/python2.7/site-packages/tensorflow/python/util/tf_should_use.py:175: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.

>>> sess = tf.Session()
# To eliminate these warnings, TensorFlow has to be built and installed from source.
2018-04-09 20:38:14.754273: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2018-04-09 20:38:14.754338: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2018-04-09 20:38:14.754347: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
>>> sess.run(init)
>>> for i in range(1000):
...   batch_xs, batch_ys = mnist.train.next_batch(100)
...   sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
... 
>>> correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
>>> accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
>>> print (sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
0.915
>>> sess.close()

The final accuracy is 91.5%, which is not very good; in fact it is rather poor, because we used only a very simple model. With a few small improvements, however, we can reach 97% accuracy, and the best models exceed 99.7%. See MNIST test 2.

6.2.3 MNIST Test 2

In this test we will learn the basic steps for building a TensorFlow model and use them to build a deep convolutional neural network for MNIST.

Note: test 2 may produce the same error as test 1; handle it by following the steps from test 1.

Test 2 code:

Note: when running the code, do not drop the leading whitespace (indentation), otherwise execution may fail.

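# Load MNIST and set up the simple softmax-regression baseline (as in test 1).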
import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
import tensorflow as tf
sess = tf.InteractiveSession()
x = tf.placeholder("float", shape=[None, 784])
y_ = tf.placeholder("float", shape=[None, 10])
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
sess.run(tf.initialize_all_variables())
y = tf.nn.softmax(tf.matmul(x,W) + b)
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

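# Train the softmax-regression baseline for 1000 steps of mini-batch gradient descent.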
for i in range(1000):
  batch = mnist.train.next_batch(50)
  train_step.run(feed_dict={x: batch[0], y_: batch[1]})

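# Evaluate the baseline on the test set (roughly 91% accuracy).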
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

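# Helper functions for the convolutional network: weight/bias initialization, convolution, and 2x2 max-pooling.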
def weight_variable(shape):
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)

def bias_variable(shape):
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)

def conv2d(x, W):
  return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
  return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                        strides=[1, 2, 2, 1], padding='SAME')

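# First convolutional layer: 5x5 kernels, 1 input channel -> 32 feature maps; 2x2 max-pooling reduces 28x28 to 14x14.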
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])

x_image = tf.reshape(x, [-1,28,28,1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

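# Second convolutional layer: 5x5 kernels, 32 -> 64 feature maps; pooling reduces 14x14 to 7x7.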
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])

h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)

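# Fully connected layer: flatten the 7x7x64 feature maps and map them to 1024 units.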
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])

h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

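# Dropout to reduce overfitting; keep_prob is fed at run time (0.5 for training, 1.0 for evaluation).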
keep_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

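# Readout layer: softmax over the 10 digit classes.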
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])

y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

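# Train the CNN with Adam, reporting training accuracy every 100 steps, then evaluate on the test set.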
cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
sess.run(tf.initialize_all_variables())
for i in range(5000):
  batch = mnist.train.next_batch(50)
  if i%100 == 0:
    train_accuracy = accuracy.eval(feed_dict={
        x:batch[0], y_: batch[1], keep_prob: 1.0})
    print("step %d, training accuracy %g"%(i, train_accuracy))
  train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

print("test accuracy %g"%accuracy.eval(feed_dict={
    x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))

Complete execution steps for test 2:

Note: when running the code, do not drop the leading whitespace (indentation), otherwise execution may fail.

cd /usr/lib/python2.7/site-packages/tensorflow/examples/tutorials/mnist/
[root@localhost mnist]# python
Python 2.7.5 (default, Aug  4 2017, 00:39:18) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import input_data
>>> mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
>>> import tensorflow as tf
>>> sess = tf.InteractiveSession()
>>> x = tf.placeholder("float", shape=[None, 784])
>>> y_ = tf.placeholder("float", shape=[None, 10])
>>> W = tf.Variable(tf.zeros([784,10]))
>>> b = tf.Variable(tf.zeros([10]))
>>> sess.run(tf.initialize_all_variables())
WARNING:tensorflow:From /usr/local/python3.6/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py:118: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.
>>> y = tf.nn.softmax(tf.matmul(x,W) + b)
>>> cross_entropy = -tf.reduce_sum(y_*tf.log(y))
>>> train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
>>> 
>>> for i in range(1000):
...   batch = mnist.train.next_batch(50)
...   train_step.run(feed_dict={x: batch[0], y_: batch[1]})
... 
>>> correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
>>> accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
>>> print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
0.912
>>> 
>>> def weight_variable(shape):
...   initial = tf.truncated_normal(shape, stddev=0.1)
...   return tf.Variable(initial)
... 
>>> def bias_variable(shape):
...   initial = tf.constant(0.1, shape=shape)
...   return tf.Variable(initial)
... 
>>> def conv2d(x, W):
...   return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
... 
>>> def max_pool_2x2(x):
...   return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
...                         strides=[1, 2, 2, 1], padding='SAME')
... 
>>> W_conv1 = weight_variable([5, 5, 1, 32])
>>> b_conv1 = bias_variable([32])
>>> 
>>> x_image = tf.reshape(x, [-1,28,28,1])
>>> h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
>>> h_pool1 = max_pool_2x2(h_conv1)
>>> 
>>> W_conv2 = weight_variable([5, 5, 32, 64])
>>> b_conv2 = bias_variable([64])
>>> 
>>> h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
>>> h_pool2 = max_pool_2x2(h_conv2)
>>> 
>>> W_fc1 = weight_variable([7 * 7 * 64, 1024])
>>> b_fc1 = bias_variable([1024])
>>> 
>>> h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
>>> h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
>>> 
>>> keep_prob = tf.placeholder("float")
>>> h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
>>> 
>>> W_fc2 = weight_variable([1024, 10])
>>> b_fc2 = bias_variable([10])
>>> 
>>> y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
>>> 
>>> cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))
>>> train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
>>> correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
>>> accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
>>> sess.run(tf.initialize_all_variables())
# The loop below runs 10000 iterations; the larger the range, the higher the accuracy, but too large a range may cause the final print statement to fail (a virtual machine performance limitation).
>>> for i in range(10000):
...   batch = mnist.train.next_batch(50)
...   if i%100 == 0:
...     train_accuracy = accuracy.eval(feed_dict={
...         x:batch[0], y_: batch[1], keep_prob: 1.0})
...     print("step %d, training accuracy %g"%(i, train_accuracy))
...   train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
...
step 0, training accuracy 0.92
step 100, training accuracy 0.86
step 200, training accuracy 0.94
step 300, training accuracy 0.92
step 400, training accuracy 0.9
step 500, training accuracy 0.94
step 600, training accuracy 0.94
...
...
step 9400, training accuracy 0.92
step 9500, training accuracy 0.9
step 9600, training accuracy 0.98
step 9700, training accuracy 0.92
step 9800, training accuracy 0.98
step 9900, training accuracy 0.96
>>> 
>>> print ("test accuracy %g"%accuracy.eval(feed_dict={
...     x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
test accuracy 0.9195
>>>

7. References

  1. TensorFlow Chinese tutorial (TensorFlow中文教程)

  2. Common problems when installing, testing, and running TensorFlow (tensorflow安裝測試運行常見問題)