TF Basic Concepts

A Graph represents a computation task.

A Node can be an Operation or a container that stores data.

A graph is executed within the context of a Session.

Data is represented as tensors.

State is maintained through Variables.

Feed and fetch are used to supply values to, or retrieve data from, arbitrary operations (see the sketch below).
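
A minimal sketch of feed and fetch together, assuming the TF 1.x API used throughout this post (the names x and y here are illustrative):

IN:

import tensorflow as tf

x = tf.placeholder(tf.float32)  # given a value via feed at run time
y = x * 2.0

with tf.Session() as sess:
    print(sess.run(y, {x: 5.0}))  # feed x, fetch the value of y

OUT:

10.0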

A Tensor is similar to an array in numpy:

3 # a rank 0 tensor; this is a scalar with shape []
[1., 2., 3.] # a rank 1 tensor; this is a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]] # a rank 2 tensor; a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]] # a rank 3 tensor with shape [2, 1, 3]
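
The numpy analogy can be checked directly; a minimal sketch using numpy alone (not part of the original post):

import numpy as np

print(np.array(3.).shape)                                # () -- rank 0, a scalar
print(np.array([1., 2., 3.]).shape)                      # (3,) -- rank 1, a vector
print(np.array([[1., 2., 3.], [4., 5., 6.]]).shape)      # (2, 3) -- rank 2, a matrix
print(np.array([[[1., 2., 3.]], [[7., 8., 9.]]]).shape)  # (2, 1, 3) -- rank 3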


Working with a Computational Graph involves two steps:

1. Building the computational graph.

TF operations are expressed as nodes in the graph; each node takes zero or more tensors as input and produces a tensor as output. A constant node takes no inputs and outputs a value it stores internally.

E.g.:

import tensorflow as tf

node1 = tf.constant(3.0, tf.float32)
node2 = tf.constant(4.0)  # also tf.float32 implicitly
print(node1, node2)

[out]

 

Tensor("Const:0", shape=TensorShape([]), dtype=float32)Tensor("Const_1:0", shape=TensorShape([]), dtype=float32)

sess = tf.Session()
print(sess.run([node1, node2]))

 

[out]

[3.0, 4.0]

 

Adding the two nodes above produces a new node; print both the graph tensor and its computed value:

node3 = tf.add(node1, node2)
print("node3:", node3)
print("sess.run(node3):", sess.run(node3))


[out]

node3: Tensor("Add:0", shape=TensorShape([]), dtype=float32)

sess.run(node3): 7.0

 

A placeholder does not need to be given a value when it is defined; the value can be supplied later, when the graph is run.

E.g.:

IN:

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b  # + provides a shortcut for tf.add(a, b)

print(sess.run(adder_node, {a: 3, b: 4.5}))
print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))


OUT:

7.5

[ 3.  7.]

 

Building on this, add one more operation to the graph:

IN:

add_add_triple = adder_node * 3.
print(sess.run(add_add_triple, {a: 3, b: 4.5}))

OUT:

22.5

 

2. Running the computational graph.

To actually evaluate the nodes and produce their values (3.0 and 4.0), the graph must be run within a Session.
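
In TF 1.x a Session holds resources and should be released when no longer needed; the with-block below is a common idiom (a sketch, not part of the original post, which keeps using the sess created earlier):

IN:

# the session is closed automatically on exiting the block
with tf.Session() as s:
    print(s.run([node1, node2]))

OUT:

[3.0, 4.0]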

 

 

Variables add trainable parameters to the graph.

Constructing a Variable requires a type and an initial value.

 

IN:

 

W = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)
x = tf.placeholder(tf.float32)

linear_model = W * x + b

 

# initializing variables requires running a special operation

 

init = tf.initialize_all_variables()  # in newer TF 1.x: tf.global_variables_initializer()

sess.run(init)

print(sess.run(linear_model, {x: [1, 2, 3, 4]}))
 

OUT:

[ 0.          0.30000001  0.60000002  0.90000004]

(That is, 0.3 * x - 0.3 for x = 1, 2, 3, 4, up to float32 rounding.)

 

Compute the loss function:

IN:

 

# define y and calculate the loss function

y = tf.placeholder(tf.float32)
squared_deltas = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_deltas)

print(sess.run(loss, {x: [1, 2, 3, 5], y: [0, -1, -2, -3]}))


OUT:

26.09
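
The 26.09 can be checked by hand: with W = 0.3 and b = -0.3, the model outputs [0, 0.3, 0.6, 1.2] for x = [1, 2, 3, 5], so the loss against y is 0^2 + 1.3^2 + 2.6^2 + 4.2^2 = 26.09. A quick numpy check (illustrative, not part of the original post):

import numpy as np

pred = 0.3 * np.array([1., 2., 3., 5.]) - 0.3  # W * x + b
y_true = np.array([0., -1., -2., -3.])
print(np.sum(np.square(pred - y_true)))  # 26.09, up to float rounding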

 

Adjusting the parameters by hand with tf.assign:

# adjust the parameters by hand

 

IN:

fixW = tf.assign(W, [-1.])
fixb = tf.assign(b, [1.])
sess.run([fixW, fixb])
print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))
 

OUT:

0.0

(With W = -1 and b = 1 the model reproduces y exactly, so the loss drops to zero.)

 

Automatically adjusting the parameters with gradient descent:

 

IN:

 

# use gradient descent to optimize the parameters

 

optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

sess.run(init)  # reset W and b to their initial values

for i in range(1000):
    sess.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})

print(sess.run([W, b]))


 

OUT:

[array([-0.9999969], dtype=float32), array([ 0.99999082], dtype=float32)]
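
With the trained W ≈ -1 and b ≈ 1 the model reproduces y almost exactly, so the final loss should be near zero; a quick check in the same session (illustrative):

print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))  # a value very close to 0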
