tf.lin_space(start, stop, num, name=None)
Creates a sequence of num evenly-spaced values, beginning at start. If num > 1, the values in the sequence increase by (stop - start) / (num - 1), so that the last value is exactly stop.
Comparable to, but slightly different from, numpy.linspace.
lin = tf.lin_space(10.0, 13.0, 4, name="linspace")
with tf.Session() as sess:
    print(sess.run(lin))
output:
[10. 11. 12. 13.]
tf.range([start], limit=None, delta=1, dtype=None, name='range')
Creates a sequence of numbers that begins at start and extends by increments of delta, up to but not including limit.
Slightly different from Python's built-in range.
# 'start' is 3, 'limit' is 18, 'delta' is 3
tf.range(start, limit, delta) ==> [3, 6, 9, 12, 15]
# 'start' is 3, 'limit' is 1, 'delta' is -0.5
tf.range(start, limit, delta) ==> [3, 2.5, 2, 1.5]
# 'limit' is 5
tf.range(limit) ==> [0, 1, 2, 3, 4]
Unlike NumPy or other Python sequences, TensorFlow sequences are not iterable:
for _ in np.linspace(0, 10, 4): # OK
for _ in tf.linspace(0.0, 10.0, 4): # TypeError: 'Tensor' object is not iterable.
for _ in range(4): # OK
for _ in tf.range(4): # TypeError: 'Tensor' object is not iterable.
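If you do need to loop over the values in Python, a minimal workaround (a sketch, not from the original notes) is to evaluate the tensor first and then iterate over the resulting NumPy array:
import tensorflow as tf

seq = tf.range(4)
with tf.Session() as sess:
    for v in sess.run(seq):  # sess.run returns a NumPy array, which is iterable
        print(v)             # 0, 1, 2, 3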
Creating variables
To declare a variable, you instantiate tf.Variable (note the capital V).
x = tf.Variable(...)
x.initializer # init
x.value() # read op
x.assign(...) # write op
x.assign_add(...)
# and more
The old way to create a variable is to call tf.Variable(<initial-value>, name=<optional-name>):
s = tf.Variable(2, name="scalar")
m = tf.Variable([[0, 1], [2, 3]], name="matrix")
W = tf.Variable(tf.zeros([784,10]))
However, TensorFlow now discourages this style and recommends creating variables with tf.get_variable instead, because it makes variable sharing easier (a sharing sketch follows the examples below). With tf.get_variable, we provide the internal name, shape, type, and initializer for the initial value. Note that when a tf.constant is used as the initializer, the shape does not need to be provided.
tf.get_variable(
    name,
    shape=None,
    dtype=None,
    initializer=None,
    regularizer=None,
    trainable=True,
    collections=None,
    caching_device=None,
    partitioner=None,
    validate_shape=True,
    use_resource=None,
    custom_getter=None,
    constraint=None
)
s = tf.get_variable("scalar", initializer=tf.constant(2))
m = tf.get_variable("matrix", initializer=tf.constant([[0, 1], [2, 3]]))
W = tf.get_variable("big_matrix", shape=(784, 10), initializer=tf.zeros_initializer())
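To illustrate the variable-sharing point above, here is a minimal sketch (assumed usage, not from the original notes): inside a tf.variable_scope with reuse enabled, calling tf.get_variable with the same name returns the existing variable instead of creating a new one, which plain tf.Variable cannot do.
import tensorflow as tf

def dense_weights():
    # creates the variable on the first call, reuses it afterwards
    return tf.get_variable("weights", shape=(784, 10),
                           initializer=tf.zeros_initializer())

with tf.variable_scope("layer1"):
    w1 = dense_weights()
with tf.variable_scope("layer1", reuse=True):
    w2 = dense_weights()

print(w1 is w2)  # True: both calls resolve to the same underlying variable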
Initializing variables
Before using a variable you have to initialize it; otherwise you will get FailedPreconditionError: Attempting to use uninitialized value. You can print the variables that have not yet been initialized with:
print(tf.Session().run(tf.report_uninitialized_variables()))
(1) Initialize all variables at once: tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
(2) Initialize only a subset of variables: tf.variables_initializer()
with tf.Session() as sess:
    sess.run(tf.variables_initializer([a, b]))
(3) Initialize a single variable: tf.Variable.initializer
with tf.Session() as sess:
    sess.run(W.initializer)
Inspecting a variable's value
Fetch the value from a session:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run([s, m, W]))
output:
[2, array([[0, 1],
[2, 3]], dtype=int32), array([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]], dtype=float32)]
Fetch the value with tf.Variable.eval():
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(s.eval(), m.eval())
output:
2 [[0 1]
[2 3]]
Assigning values to variables
Using tf.Variable.assign():
W = tf.Variable(10)
W.assign(100)
with tf.Session() as sess:
    sess.run(W.initializer)
    print(W.eval())  # >> 10
Why is the output 10 rather than 100? **W.assign(100) does not assign 100 to W; it creates an assign op.** For the op to take effect, we have to run it in a session:
W = tf.Variable(10)
assign_op = W.assign(100)
with tf.Session() as sess:
    sess.run(assign_op)
    print(W.eval())  # >> 100
Note that we do not have to initialize W here, because assign() does that for us.
# create a variable whose original value is 2
a = tf.get_variable('scalar', initializer=tf.constant(2))
a_times_two = a.assign(a * 2)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(a_times_two)  # >> 4
    sess.run(a_times_two)  # >> 8
    sess.run(a_times_two)  # >> 16
tf.Variable.assign_add() and tf.Variable.assign_sub(): unlike assign(), these two ops require the variable to be initialized first, because they depend on its current value.
W = tf.Variable(10)
with tf.Session() as sess:
    sess.run(W.initializer)
    print(sess.run(W.assign_add(10)))  # >> 20
    print(sess.run(W.assign_sub(2)))   # >> 18
Variables are independent across sessions; each session keeps its own copy of the variable's value:
W = tf.Variable(10)
sess1 = tf.Session()
sess2 = tf.Session()
sess1.run(W.initializer)
sess2.run(W.initializer)
print(sess1.run(W.assign_add(10))) # >> 20
print(sess2.run(W.assign_sub(2))) # >> 8
print(sess1.run(W.assign_add(100))) # >> 120
print(sess2.run(W.assign_sub(50))) # >> -42
sess1.close()
sess2.close()
Importing data
placeholder and feed_dict (the old way)
A TensorFlow program typically runs in two phases:
1: assemble a graph
2: use a session to execute operations in the graph and evaluate the variables
When assembling the graph, you do not need to know the values required for the computation. This is like defining a function of x and y, f(x, y) = 2x + y, where x and y are placeholders for the actual values.
Once the graph is assembled, we define a placeholder for the values that will be supplied later:
tf.placeholder(dtype, shape=None, name=None)
Note that dtype and shape have to be specified by you; when shape=None, the placeholder accepts tensors of any shape.
a = tf.placeholder(tf.float32, shape=[3]) # a is placeholder for a vector of 3 elements
b = tf.constant([5, 5, 5], tf.float32)
c = a + b # use the placeholder as you would any tensor
with tf.Session() as sess:
    print(sess.run(c))
Running the program above raises an error, because no value has been supplied for a.
Below, feed_dict is used to pass a value for a:
with tf.Session() as sess:
    # compute the value of c given that a is [1, 2, 3]
    print(sess.run(c, feed_dict={a: [1, 2, 3]}))  # [6. 7. 8.]
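To illustrate shape=None mentioned above, here is a small sketch (not from the original notes): the same placeholder can be fed tensors of different shapes.
x = tf.placeholder(tf.float32, shape=None)  # accepts a tensor of any shape
y = x * 2

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: 3.0}))           # scalar input -> 6.0
    print(sess.run(y, feed_dict={x: [[1.0, 2.0]]}))  # 1x2 input    -> [[2. 4.]]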
tf.data (the new way)
Compared with tf.placeholder plus feed_dict, it improves computational performance.
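The original notes stop at this remark, so the following is only a minimal sketch of the tf.data idea, assuming a small in-memory NumPy array: the data pipeline becomes part of the graph, and batches are pulled from an iterator instead of being fed through feed_dict.
import numpy as np
import tensorflow as tf

features = np.arange(10, dtype=np.float32).reshape(5, 2)  # toy data: 5 samples, 2 features

dataset = tf.data.Dataset.from_tensor_slices(features).batch(2)
iterator = dataset.make_one_shot_iterator()
next_batch = iterator.get_next()

with tf.Session() as sess:
    while True:
        try:
            print(sess.run(next_batch))    # batches of up to 2 rows, no feed_dict needed
        except tf.errors.OutOfRangeError:  # raised once the dataset is exhausted
            break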
Batch normalization
Batch normalization (BN) is usually applied before the activation function, so that the pre-activation x = Wx + b has zero mean and unit variance along each dimension. Normalizing keeps the activations within the roughly linear region of the activation function, and giving every layer a stable input distribution helps the network train.
Advantages:
- Allows larger step sizes, speeding up convergence.
- Makes it easier to escape poor local minima.
- Perturbs the original data distribution, which helps prevent overfitting to some extent.
- Mitigates slow convergence and exploding gradients.
The corresponding TensorFlow APIs:
mean, variance = tf.nn.moments(x, axes, name=None, keep_dims=False)
Computes statistical moments: mean is the first moment (the mean) and variance is the second central moment (the variance); axes=[0] means the statistics are computed per column.
tf.nn.batch_normalization(x, mean, variance, offset, scale, variance_epsilon, name=None)
tf.nn.batch_norm_with_global_normalization(x, mean, variance, beta, gamma, variance_epsilon, scale_after_normalization, name=None);
The mean and variance returned by tf.nn.moments are then passed as arguments to tf.nn.batch_normalization.
TensorFlow / Python implementation:
import tensorflow as tf

W = tf.constant([[-2., 12., 6.], [3., 2., 8.]])
mean, var = tf.nn.moments(W, axes=[0])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # must be included, otherwise running the session several times raises an error
    resultMean = sess.run(mean)
    print(resultMean)
    resultVar = sess.run(var)
    print(resultVar)
[ 0.5 7. 7. ]
[ 6.25 25. 1. ]
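Building on the moments computed above, here is a minimal sketch (not from the original notes) of passing them to tf.nn.batch_normalization; offset (beta) and scale (gamma) are fixed to 0 and 1 here purely for illustration, whereas in a real network they would be learned variables.
import tensorflow as tf

x = tf.constant([[-2., 12., 6.], [3., 2., 8.]])
mean, var = tf.nn.moments(x, axes=[0])  # per-column mean and variance

x_bn = tf.nn.batch_normalization(x, mean, var,
                                 offset=tf.zeros([3]),   # beta
                                 scale=tf.ones([3]),     # gamma
                                 variance_epsilon=1e-3)

with tf.Session() as sess:
    print(sess.run(x_bn))  # each column now has roughly zero mean and unit variance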
tf.nn.embedding_lookup()
It selects the rows of a tensor that correspond to the given indices.
tf.nn.embedding_lookup(params, ids)
params can be a tensor or an array (among other things); ids are the indices to look up. The other parameters are not covered here.
When ids has a single row:
# c = np.random.random([10, 1])           # randomly generate a 10x1 array
# b = tf.nn.embedding_lookup(c, [1, 3])   # look up the entries at indices 1 and 3 of the array
p = tf.Variable(tf.random_normal([10, 1]))  # create a 10x1 tensor
b = tf.nn.embedding_lookup(p, [1, 3])       # look up the entries at indices 1 and 3 of the tensor
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(b))
    # print(c)
    print(sess.run(p))
    print(p)
    print(type(p))
[[0.15791859]
[0.6468804 ]]
[[-0.2737084 ]
[ 0.15791859]
[-0.01315552]
[ 0.6468804 ]
[-1.4090979 ]
[ 2.1583703 ]
[ 1.4137447 ]
[ 0.20688428]
[-0.32815856]
[-1.0601649 ]]
<tf.Variable 'Variable:0' shape=(10, 1) dtype=float32_ref>
<class 'tensorflow.python.ops.variables.Variable'>
If ids has multiple rows:
import tensorflow as tf
import numpy as np
a = [[0.1, 0.2, 0.3], [1.1, 1.2, 1.3], [2.1, 2.2, 2.3], [3.1, 3.2, 3.3], [4.1, 4.2, 4.3]]
a = np.asarray(a)
idx1 = tf.Variable([0, 2, 3, 1], dtype=tf.int32)
idx2 = tf.Variable([[0, 2, 3, 1], [4, 0, 2, 2]], dtype=tf.int32)
out1 = tf.nn.embedding_lookup(a, idx1)
out2 = tf.nn.embedding_lookup(a, idx2)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print(sess.run(out1))
    print(out1)
    print('==================')
    print(sess.run(out2))
    print(out2)
[[ 0.1 0.2 0.3]
[ 2.1 2.2 2.3]
[ 3.1 3.2 3.3]
[ 1.1 1.2 1.3]]
Tensor("embedding_lookup:0", shape=(4, 3), dtype=float64)
==================
[[[ 0.1 0.2 0.3]
[ 2.1 2.2 2.3]
[ 3.1 3.2 3.3]
[ 1.1 1.2 1.3]]
[[ 4.1 4.2 4.3]
[ 0.1 0.2 0.3]
[ 2.1 2.2 2.3]
[ 2.1 2.2 2.3]]]
Tensor("embedding_lookup_1:0", shape=(2, 4, 3), dtype=float64)
References:
https://www.cnblogs.com/gaofighting/p/9625868.html
https://www.jianshu.com/p/2a822b0ce042