TensorFlow computation model -- the computation graph

The concept of a computation graph

TensorFlow is built on two key concepts: Tensor and Flow. A Tensor is a tensor (which can be understood as a multi-dimensional array), and Flow expresses the process by which tensors are transformed into one another through computation. TensorFlow's computation style resembles Spark's directed acyclic graph (DAG): computation only starts once a Session is created, much like a Spark action operator.

A simple example

import tensorflow as tf 
a = tf.constant([1.0,2.0],name="a")
b = tf.constant([3.0,4.0],name="b")
result = a + b
sess = tf.Session()
print(sess.run(result)) # [ 4.  6.]
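
Each node defined above is added to TensorFlow's default computation graph. As a quick check, a minimal sketch using the standard tf.get_default_graph function:

import tensorflow as tf
a = tf.constant([1.0, 2.0], name="a")
# Unless another graph is specified, new nodes join the default graph
print(a.graph is tf.get_default_graph())  # True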

TensorFlow data model -- tensors

The concept of a tensor

A tensor can be loosely understood as a multi-dimensional array. A zero-order tensor is a scalar, i.e., a single number. A first-order tensor is a vector, i.e., a one-dimensional array. An n-th order tensor is an n-dimensional array. However, a tensor in TensorFlow is only a reference to a computation result: it does not hold the numbers themselves, but records how those numbers are to be computed.

import tensorflow as tf
a = tf.constant([1.0,2.0],name="a")
b = tf.constant([3.0,4.0],name="b")
result = a + b
print(result)
# Tensor("add_1:0", shape=(2,), dtype=float32)

The output above shows three attributes: name, shape, and type (dtype).
The name is the unique identifier of a tensor, and it also reveals how the tensor was computed.
The shape describes the tensor's dimensions; shape=(2,) above means a one-dimensional array of length 2.
The dtype is the tensor's data type. TensorFlow type-checks all tensors participating in an operation and raises an error if the types do not match.
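
For example, adding an int32 tensor to a float32 tensor fails the type check. A minimal sketch (the exact error message varies by TensorFlow version):

import tensorflow as tf
a = tf.constant([1, 2], name="a")        # dtype defaults to int32
b = tf.constant([1.0, 2.0], name="b")    # dtype is float32
# result = a + b                         # raises a type-mismatch error
a = tf.constant([1, 2], name="a", dtype=tf.float32)  # specify the dtype explicitly
result = a + b                           # now both operands are float32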

TensorFlow execution model -- sessions

Two ways to create a session

# Create a session explicitly
sess = tf.Session()
sess.run(result)
sess.close()
# Created this way, the session must be closed explicitly to release its resources

# Use a Python context manager to manage the session
with tf.Session() as sess:
    sess.run(result)
# There is no need to call sess.close() explicitly;
# when the context exits, the session is closed and its resources are released automatically
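
Either way, the session can be configured when it is constructed. A minimal sketch, assuming the two most common tf.ConfigProto options:

# allow_soft_placement: fall back to a supported device when an op
# cannot run on the requested one; log_device_placement: log where each op runs
config = tf.ConfigProto(allow_soft_placement=True,
                        log_device_placement=True)
sess = tf.Session(config=config)
sess.close()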

TensorFlow maintains a default computation graph, and the value of a tensor can also be computed with the tf.Tensor.eval function:


import tensorflow as tf
a = tf.constant([1.0,2.0],name="a")
b = tf.constant([3.0,4.0],name="b")
result = a + b
with tf.Session() as sess:
    # Two ways to evaluate the tensor
    print(sess.run(result))
    print(result.eval(session=sess))
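
When a session is registered as the default session, eval() needs no explicit session argument. A minimal sketch using tf.InteractiveSession, which registers itself as the default session on construction:

import tensorflow as tf
a = tf.constant([1.0, 2.0], name="a")
b = tf.constant([3.0, 4.0], name="b")
result = a + b
sess = tf.InteractiveSession()  # becomes the default session automatically
print(result.eval())            # [ 4.  6.]
sess.close()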

Neural network parameters and TensorFlow variables

The role of a variable (tf.Variable) is to store and update the parameters of a neural network.

import tensorflow as tf
# Declare a 2x3 matrix variable filled with random values (mean 0, standard deviation 2)
weights = tf.Variable(tf.random_normal([2,3],stddev=2))
# Initialize all variables
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print(sess.run(weights))
    # [[-0.69297457  1.13187325  2.36984086]
    #  [ 1.20076609  0.77468276  2.01622796]]
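
A variable can also be updated after initialization. A minimal sketch using tf.assign (the shapes and dtypes on both sides must match):

import tensorflow as tf
w = tf.Variable(tf.zeros([2, 3]))
# Build an op that overwrites w with new values of the same shape
update = tf.assign(w, tf.ones([2, 3]))
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(update)
    print(sess.run(w))  # [[1. 1. 1.] [1. 1. 1.]]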
TensorFlow random-number generator functions

The most common ones are tf.random_normal (normal distribution), tf.truncated_normal (normal distribution with values beyond two standard deviations re-drawn), tf.random_uniform (uniform distribution), and tf.random_gamma (Gamma distribution).

TensorFlow constant generator functions

The most common ones are tf.zeros (all zeros), tf.ones (all ones), tf.fill (a tensor filled with a given value), and tf.constant (a given constant).
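
A minimal sketch exercising a few of these generators:

import tensorflow as tf
norm = tf.random_normal([2, 2], mean=0.0, stddev=1.0)   # samples from N(0, 1)
unif = tf.random_uniform([2, 2], minval=0, maxval=1)    # samples from U[0, 1)
zeros = tf.zeros([2, 3])                                # 2x3 matrix of zeros
filled = tf.fill([2, 3], 9.0)                           # 2x3 matrix of 9s
with tf.Session() as sess:
    print(sess.run([norm, unif, zeros, filled]))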

A complete neural network training program

import tensorflow as tf
from numpy.random import RandomState

# Size of one training batch
batch_size = 8

# Define the parameters of the neural network
w1 = tf.Variable(tf.random_normal([2,3],stddev=1,seed=1))
w2 = tf.Variable(tf.random_normal([3,1],stddev=1,seed=1))

# Using None for one dimension of the shape makes it easy to feed batches of different sizes
x = tf.placeholder(tf.float32,shape=(None,2),name='x-input')
y_ = tf.placeholder(tf.float32,shape=(None,1),name='y-input')

# Define the forward propagation process
a = tf.matmul(x,w1)
y = tf.matmul(a,w2)

# Define the loss function and the backpropagation algorithm
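# clip_by_value bounds y to [1e-10, 1.0] so that tf.log never receives 0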
cross_entropy = -tf.reduce_mean(y_ * tf.log(tf.clip_by_value(y,1e-10,1.0)))
train_step = tf.train.AdamOptimizer(0.001).minimize(cross_entropy)

# Generate a simulated dataset with random numbers
rdm = RandomState(1)
dataset_size = 128
X = rdm.rand(dataset_size,2)

# Labeling rule: samples with x1+x2<1 are positive, all others negative (0: negative, 1: positive)
Y = [[int(x1+x2<1)] for (x1,x2) in X]
# Create a session to run the TensorFlow program
with tf.Session() as sess:
    # Initialize the variables
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    
    print(sess.run(w1))
    print(sess.run(w2))
    
    # Set the number of training rounds
    STEPS = 5000
    
    for i in range(STEPS):
        # Pick batch_size samples for each training step
        start = (i * batch_size)% dataset_size
        end = min(start+batch_size,dataset_size)
        
        # Train the network on the selected samples and update the parameters
        sess.run(train_step,feed_dict={x:X[start:end],y_:Y[start:end]})
        if i % 1000 == 0:
            total_cross_entropy = sess.run(cross_entropy,feed_dict={x:X,y_:Y})
            print("After %d trainint step(s),cross entropy on all data is %g" % (i,total_cross_entropy))
            
    print(sess.run(w1))
    print(sess.run(w2))

Training a neural network can be divided into 3 steps:

  1. Define the structure of the neural network and the output of forward propagation
  2. Define the loss function and choose the backpropagation optimization algorithm
  3. Create a session (tf.Session) and repeatedly run the backpropagation optimization algorithm on the training data

Implementing linear regression with TensorFlow

'''
A linear regression learning algorithm example using TensorFlow library.

Author: Aymeric Damien
Project: https://github.com/aymericdamien/TensorFlow-Examples/
'''

from __future__ import print_function

import tensorflow as tf
import numpy
import matplotlib.pyplot as plt
rng = numpy.random

# Parameters
learning_rate = 0.01
training_epochs = 1000
display_step = 50

# Training Data
train_X = numpy.asarray([3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167,
                         7.042,10.791,5.313,7.997,5.654,9.27,3.1])
train_Y = numpy.asarray([1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,
                         2.827,3.465,1.65,2.904,2.42,2.94,1.3])
n_samples = train_X.shape[0]

# tf Graph Input
X = tf.placeholder("float")
Y = tf.placeholder("float")

# Set model weights
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")

# Construct a linear model
pred = tf.add(tf.multiply(X, W), b)

# Mean squared error
cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples)
# Gradient descent
#  Note, minimize() knows to modify W and b because Variable objects are trainable=True by default
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Initializing the variables
init = tf.global_variables_initializer()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)

    # Fit all training data
    for epoch in range(training_epochs):
        for (x, y) in zip(train_X, train_Y):
            sess.run(optimizer, feed_dict={X: x, Y: y})

        # Display logs per epoch step
        if (epoch+1) % display_step == 0:
            c = sess.run(cost, feed_dict={X: train_X, Y:train_Y})
            print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c), \
                "W=", sess.run(W), "b=", sess.run(b))

    print("Optimization Finished!")
    training_cost = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
    print("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b), '\n')

    # Graphic display
    plt.plot(train_X, train_Y, 'ro', label='Original data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
    plt.legend()
    plt.show()

    # Testing example, as requested (Issue #2)
    test_X = numpy.asarray([6.83, 4.668, 8.9, 7.91, 5.7, 8.7, 3.1, 2.1])
    test_Y = numpy.asarray([1.84, 2.273, 3.2, 2.831, 2.92, 3.24, 1.35, 1.03])

    print("Testing... (Mean square loss Comparison)")
    testing_cost = sess.run(
        tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * test_X.shape[0]),
        feed_dict={X: test_X, Y: test_Y})  # same function as cost above
    print("Testing cost=", testing_cost)
    print("Absolute mean square loss difference:", abs(
        training_cost - testing_cost))

    plt.plot(test_X, test_Y, 'bo', label='Testing data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
    plt.legend()
    plt.show()

Implementing logistic regression with TensorFlow

'''
A logistic regression learning algorithm example using TensorFlow library.
This example is using the MNIST database of handwritten digits
(http://yann.lecun.com/exdb/mnist/)

Author: Aymeric Damien
Project: https://github.com/aymericdamien/TensorFlow-Examples/
'''

from __future__ import print_function

import tensorflow as tf

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Parameters
learning_rate = 0.01
training_epochs = 25
batch_size = 100
display_step = 1

# tf Graph Input
x = tf.placeholder(tf.float32, [None, 784]) # mnist data image of shape 28*28=784
y = tf.placeholder(tf.float32, [None, 10]) # 0-9 digits recognition => 10 classes

# Set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Construct model
pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax

# Minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Initializing the variables
init = tf.global_variables_initializer()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)

    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples/batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_xs,
                                                          y: batch_ys})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if (epoch+1) % display_step == 0:
            print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost))

    print("Optimization Finished!")

    # Test model
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

Implementing K-nearest neighbors with TensorFlow

'''
A nearest neighbor learning algorithm example using TensorFlow library.
This example is using the MNIST database of handwritten digits
(http://yann.lecun.com/exdb/mnist/)

Author: Aymeric Damien
Project: https://github.com/aymericdamien/TensorFlow-Examples/
'''

from __future__ import print_function

import numpy as np
import tensorflow as tf

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# In this example, we limit mnist data
Xtr, Ytr = mnist.train.next_batch(5000) #5000 for training (nn candidates)
Xte, Yte = mnist.test.next_batch(200) #200 for testing

# tf Graph Input
xtr = tf.placeholder("float", [None, 784])
xte = tf.placeholder("float", [784])

# Nearest Neighbor calculation using L1 Distance
# Calculate L1 Distance
distance = tf.reduce_sum(tf.abs(tf.add(xtr, tf.negative(xte))), reduction_indices=1)
# Prediction: Get min distance index (Nearest neighbor)
pred = tf.argmin(distance, 0)

accuracy = 0.

# Initializing the variables
init = tf.global_variables_initializer()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)

    # loop over test data
    for i in range(len(Xte)):
        # Get nearest neighbor
        nn_index = sess.run(pred, feed_dict={xtr: Xtr, xte: Xte[i, :]})
        # Get nearest neighbor class label and compare it to its true label
        print("Test", i, "Prediction:", np.argmax(Ytr[nn_index]),"True Class:", np.argmax(Yte[i]))
        # Calculate accuracy
        if np.argmax(Ytr[nn_index]) == np.argmax(Yte[i]):
            accuracy += 1./len(Xte)
    print("Done!")
    print("Accuracy:", accuracy)

These notes are based on the book 《TensorFlow:實戰Google深度學習框架》.
