TensorFlow Learning Notes

1. Tensors: The central unit of data in TensorFlow is the tensor. A tensor consists of a set of primitive values shaped into an array of any number of dimensions. A tensor's rank is its number of dimensions.
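For example, here is the standard illustration of tensor ranks from the TensorFlow getting-started guide:

3  # a rank 0 tensor; a scalar with shape []
[1., 2., 3.]  # a rank 1 tensor; a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]]  # a rank 2 tensor; a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]]  # a rank 3 tensor with shape [2, 1, 3]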

import tensorflow as tf

The line above is the canonical import statement for TensorFlow programs. It gives Python access to all of TensorFlow's classes, methods, and symbols.

2. The Computational Graph: A TensorFlow Core program consists of two discrete sections:

    a. Building the computational graph

    b. Running the computational graph

A computational graph is a series of TensorFlow operations arranged into a graph of nodes.

node1 = tf.constant(3.0, dtype=tf.float32)
node2 = tf.constant(4.0)  # also tf.float32 implicitly
print(node1, node2)

The output of the print statement is:

Tensor("Const:0", shape=(), dtype=float32) Tensor("Const_1:0",shape=(), dtype=float32)

3. Session: Notice that printing the nodes does not output the values 3.0 and 4.0; to actually evaluate the nodes and print the final results, we must run the computational graph within a session. A session encapsulates the control and state of the TensorFlow runtime.

sess = tf.Session()
print(sess.run([node1, node2]))

We can build more complicated computations by combining tensor nodes with operations (an operation is itself a node), for example:

node3 = tf.add(node1, node2)
print("node3:", node3)
print("sess.run(node3):", sess.run(node3))

The output is:

node3: Tensor("Add:0", shape=(), dtype=float32)
sess.run(node3): 7.0

TensorBoard can display a picture of this computational graph.
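To generate that visualization yourself, one minimal approach (the log directory name "logs" is only an example) is to write the graph to an event file with tf.summary.FileWriter and then point TensorBoard at that directory:

writer = tf.summary.FileWriter("logs", sess.graph)  # writes an event file containing the graph
writer.close()
# then, from a shell: tensorboard --logdir logs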

4. Placeholders: A graph can be parameterized to accept external inputs, known as placeholders. A placeholder is a promise to provide a value later.

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b  # + provides a shortcut for tf.add(a, b)

This is a bit like a function or a lambda expression: we define two input parameters, a and b, and then an operation on them. We can evaluate this graph with multiple inputs by using the feed_dict argument of the run method to feed concrete values to the placeholders:

print(sess.run(adder_node, {a:3, b:4.5}))
print(sess.run(adder_node, {a: [1,3], b: [2,4]}))

The results are:

7.5
[3.  7.]

We can make the computational graph more complex by adding another operation, for example:

add_and_triple = adder_node * 3.
print(sess.run(add_and_triple, {a: 3, b: 4.5}))

The output is:

22.5

5. Variables: When training a model, we use variables to hold and update the parameters. Variables are in-memory buffers containing tensors. They must be explicitly initialized, and they can be saved to disk during and after training; the saved values can later be restored to train or test the model. Variables allow us to add trainable parameters to a graph; they are constructed with a type and an initial value. (For details see: https://blog.csdn.net/muyiyushan/article/details/65442052)

W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W*x + b

Constants are initialized when you call tf.constant, and their value can never change. By contrast, variables are not initialized when you call tf.Variable.

To initialize all the variables in a TensorFlow program, you must explicitly call a special operation, as follows:

init = tf.global_variables_initializer()
sess.run(init)

It is important to realize that init is a handle to the TensorFlow sub-graph that initializes all the global variables; the variables remain uninitialized until we call sess.run(init).
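Section 5 also mentions saving variables to disk and restoring them, which is not demonstrated in these notes; a minimal sketch uses tf.train.Saver (the checkpoint path "./model.ckpt" is only an example):

saver = tf.train.Saver()                      # by default, covers all variables in the graph
save_path = saver.save(sess, "./model.ckpt")  # writes the current values of W and b to disk
saver.restore(sess, "./model.ckpt")           # later, loads the saved values back into a session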

Since x is a placeholder, we can evaluate linear_model for several values of x simultaneously, for example:

print(sess.run(linear_model, {x: [1, 2, 3, 4]}))

Evaluating linear_model produces:

[0.  0.30000001  0.60000002  0.90000004]

We have created a model, but we do not yet know how good it is. To evaluate the model on training data, we need a y placeholder to provide the desired values, and we need to write a loss function. A loss function measures how far the current model is from the provided data. We will use the standard loss model for linear regression: the sum of the squares of the deltas between the current model and the provided data. linear_model - y creates a vector in which each element is the corresponding example's error delta. We call tf.square to square each error, and then we sum all the squared errors with tf.reduce_sum to create a single scalar that abstracts the error of all examples.

y = tf.placeholder(tf.float32)
squared_deltas = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_deltas)
print(sess.run(loss, {x: [1,2,3,4], y: [0, -1, -2, -3]}))

The output is:

23.66
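As a check by hand: with W = 0.3 and b = -0.3, the model predicts [0, 0.3, 0.6, 0.9] for x = [1, 2, 3, 4], so the loss is 0^2 + 1.3^2 + 2.6^2 + 3.9^2 = 0 + 1.69 + 6.76 + 15.21 = 23.66, matching the printed value (the complete listing at the end repeats this check as the variable ss).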

We could improve this manually by reassigning W and b to the perfect values of -1 and 1. A variable is initialized to the value passed to tf.Variable, but it can be changed later with operations such as tf.assign, for example:

fixW = tf.assign(W, [-1.])
fixb = tf.assign(b, [1.])
sess.run([fixW, fixb])
print(sess.run(loss, {x: [1,2,3,4], y: [0, -1, -2, -3]}))

The final print shows that the loss is now:

0.0

6. tf.train API: TensorFlow provides optimizers that slowly change each variable in order to minimize the loss function. The simplest optimizer is gradient descent, which modifies each variable according to the magnitude of the derivative of the loss with respect to that variable. Computing symbolic derivatives by hand is tedious and error-prone, so TensorFlow can produce the derivatives automatically from a description of the model using the function tf.gradients. For simplicity, optimizers typically do this for you. For example:

optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
sess.run(init)  # reset values to incorrect defaults.
for i in range(1000):
    sess.run(train, {x: [1,2,3,4], y: [0, -1, -2, -3]})

print(sess.run([W, b]))
The output is:
[array([-0.9999969], dtype=float32), array([ 0.99999082], dtype=float32)]
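For illustration only, here is a sketch of the manual update that tf.train.GradientDescentOptimizer automates, using tf.gradients directly (same 0.01 learning rate; the name manual_train is made up for this sketch):

grad_W, grad_b = tf.gradients(loss, [W, b])            # derivatives of the loss w.r.t. each variable
manual_train = tf.group(W.assign(W - 0.01 * grad_W),   # gradient-descent step for W
                        b.assign(b - 0.01 * grad_b))   # gradient-descent step for b
sess.run(init)
for i in range(1000):
    sess.run(manual_train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})
print(sess.run([W, b]))  # converges to roughly the same values as above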

Now you have done actual machine learning. Although this simple linear regression model does not require much TensorFlow Core code, more complicated models and methods for feeding data into a model require more code. TensorFlow therefore provides higher-level abstractions for common patterns, structures, and functionality. We will learn these abstractions in the next sections.

7. tf.estimator: tf.estimator is a higher-level TensorFlow library that simplifies the mechanics of machine learning, including:

  • running training loops
  • running evaluation loops
  • managing data sets

tf.estimator defines many common models. A condensed sketch follows; the complete runnable program appears in the listing at the end of these notes.
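A predefined linear regression model can be trained in just a few lines (abridged from the full listing below, which also adds the evaluation input functions):

import numpy as np
import tensorflow as tf

x_train = np.array([1., 2., 3., 4.])
y_train = np.array([0., -1., -2., -3.])

feature_columns = [tf.feature_column.numeric_column("x", shape=[1])]
estimator = tf.estimator.LinearRegressor(feature_columns=feature_columns)
input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x_train}, y_train, batch_size=4, num_epochs=None, shuffle=True)
estimator.train(input_fn=input_fn, steps=1000)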

8. A custom model: tf.estimator does not lock you into its predefined models. Suppose we want to create a custom model that is not built into TensorFlow. We can still retain the high-level abstractions of data sets, feeding, and training that tf.estimator provides; the custom-model version is also included in the listing at the end of these notes.

 

Now you have a basic working knowledge of TensorFlow. There are more tutorials available to learn from. If you are a beginner in machine learning, continue with MNIST for beginners; otherwise, see Deep MNIST for experts.

import tensorflow as tf
node1 = tf.constant(3.0, dtype=tf.float32)
node2 = tf.constant(4.0)  # also tf.float32 implicitly
print(node1, node2)
 
sess = tf.Session()
print(sess.run([node1, node2]))
 
# from __future__ import print_function
node3 = tf.add(node1, node2)
print("node3:", node3)
print("sess.run(node3):", sess.run(node3))
 
 
# placeholders
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b  # + provides a shortcut for tf.add(a, b)
 
print(sess.run(adder_node, {a: 3, b: 4.5}))
print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))
 
add_and_triple = adder_node * 3.
print(sess.run(add_and_triple, {a: 3, b: 4.5}))
 
 
# variables: evaluate the linear model for several inputs
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W*x + b
 
# variable initialization
init = tf.global_variables_initializer()
sess.run(init)
 
print(sess.run(linear_model, {x: [1, 2, 3, 4]}))
 
# loss function
y = tf.placeholder(tf.float32)
squared_deltas = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_deltas)
print("loss function", sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))
 
ss = (0-0)*(0-0) + (0.3+1)*(0.3+1) + (0.6+2)*(0.6+2) + (0.9+3)*(0.9+3)  # the same loss computed by hand
print("hand-computed loss ss", ss)

print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, 0.3, 0.6, 0.9]}))  # loss is 0 when y equals the model's predictions
 
# tf.assign: reassign the variables
fixW = tf.assign(W, [-1.])
fixb = tf.assign(b, [1.])
sess.run([fixW, fixb])
print(sess.run(linear_model, {x: [1, 2, 3, 4]}))
print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))
 
 
# tf.train API
optimizer = tf.train.GradientDescentOptimizer(0.01)  # gradient descent optimizer
train = optimizer.minimize(loss)    # minimize the loss function
sess.run(init)  # reset values to incorrect defaults.
for i in range(1000):
  sess.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})
 
print(sess.run([W, b]))
 
 
print("------------------------------------1")
 
# Complete program: the completed trainable linear regression model is shown here
# Model parameters
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
# Model input and output
x = tf.placeholder(tf.float32)
linear_model = W*x + b
y = tf.placeholder(tf.float32)
 
# loss
loss = tf.reduce_sum(tf.square(linear_model - y))  # sum of the squares
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
 
# training data
x_train = [1, 2, 3, 4]
y_train = [0, -1, -2, -3]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init) # reset values to wrong
for i in range(1000):
  sess.run(train, {x: x_train, y: y_train})
 
# evaluate training accuracy
curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))
 
 
print("------------------------------------2")
 
# tf.estimator: the same training implemented with tf.estimator
# Notice how much simpler the linear regression program becomes with tf.estimator:
# NumPy is often used to load, manipulate and preprocess data.
import numpy as np
import tensorflow as tf
 
# Declare list of features. We only have one numeric feature. There are many
# other types of columns that are more complicated and useful.
feature_columns = [tf.feature_column.numeric_column("x", shape=[1])]
 
# An estimator is the front end to invoke training (fitting) and evaluation
# (inference). There are many predefined types like linear regression,
# linear classification, and many neural network classifiers and regressors.
# The following code provides an estimator that does linear regression.
estimator = tf.estimator.LinearRegressor(feature_columns=feature_columns)
 
# TensorFlow provides many helper methods to read and set up data sets.
# Here we use two data sets: one for training and one for evaluation
# We have to tell the function how many batches
# of data (num_epochs) we want and how big each batch should be.
x_train = np.array([1., 2., 3., 4.])
y_train = np.array([0., -1., -2., -3.])
x_eval = np.array([2., 5., 8., 1.])
y_eval = np.array([-1.01, -4.1, -7, 0.])
input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x_train}, y_train, batch_size=4, num_epochs=None, shuffle=True)
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x_train}, y_train, batch_size=4, num_epochs=1000, shuffle=False)
eval_input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x_eval}, y_eval, batch_size=4, num_epochs=1000, shuffle=False)
 
# We can invoke 1000 training steps by invoking the train method and passing the
# training data set.
estimator.train(input_fn=input_fn, steps=1000)
 
# Here we evaluate how well our model did.
train_metrics = estimator.evaluate(input_fn=train_input_fn)
eval_metrics = estimator.evaluate(input_fn=eval_input_fn)
print("train metrics: %r"% train_metrics)
print("eval metrics: %r"% eval_metrics)
 
 
print("------------------------------------3")
 
# A custom model: training with a user-defined model_fn
# Declare list of features, we only have one real-valued feature
def model_fn(features, labels, mode):
  # Build a linear model and predict values
  W = tf.get_variable("W", [1], dtype=tf.float64)
  b = tf.get_variable("b", [1], dtype=tf.float64)
  y = W*features['x'] + b
  # Loss sub-graph
  loss = tf.reduce_sum(tf.square(y - labels))
  # Training sub-graph
  global_step = tf.train.get_global_step()
  optimizer = tf.train.GradientDescentOptimizer(0.01)
  train = tf.group(optimizer.minimize(loss),
                   tf.assign_add(global_step, 1))
  # EstimatorSpec connects subgraphs we built to the
  # appropriate functionality.
  return tf.estimator.EstimatorSpec(
      mode=mode,
      predictions=y,
      loss=loss,
      train_op=train)
 
estimator = tf.estimator.Estimator(model_fn=model_fn)
# define our data sets
x_train = np.array([1., 2., 3., 4.])
y_train = np.array([0., -1., -2., -3.])
x_eval = np.array([2., 5., 8., 1.])
y_eval = np.array([-1.01, -4.1, -7., 0.])
input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x_train}, y_train, batch_size=4, num_epochs=None, shuffle=True)
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x_train}, y_train, batch_size=4, num_epochs=1000, shuffle=False)
eval_input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x_eval}, y_eval, batch_size=4, num_epochs=1000, shuffle=False)
 
# train
estimator.train(input_fn=input_fn, steps=1000)
# Here we evaluate how well our model did.
train_metrics = estimator.evaluate(input_fn=train_input_fn)
eval_metrics = estimator.evaluate(input_fn=eval_input_fn)
print("train metrics: %r"% train_metrics)
print("eval metrics: %r"% eval_metrics)

Reference: https://blog.csdn.net/lengguoxing/article/details/78456279
