TensorFlow Function Usage (continuously updated)
1. tf.clip_by_value
`tf.clip_by_value(v, min, max)`:
Given a tensor v, clamps every element of v into the range [min, max]: elements smaller than min are set to min, and elements larger than max are set to max.
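A minimal sketch of the clamping behavior (the sample values are illustrative):

```python
import tensorflow as tf

v = tf.constant([[-2., 0.5, 3.], [1., 4., -1.]])
clipped = tf.clip_by_value(v, 0.0, 2.0)  # clamp every element into [0, 2]

with tf.Session() as sess:
    print(sess.run(clipped))
    # [[0.  0.5 2. ]
    #  [1.  2.  0. ]]
```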
2. tf.reduce_mean
Computes the mean of the elements of a tensor along a specified axis (dimension).
```python
# Computes the mean of elements across dimensions of a tensor.
def reduce_mean(input_tensor,
                axis=None,
                keepdims=None,
                name=None,
                reduction_indices=None,
                keep_dims=None):
```
Args:
- input_tensor: The tensor to reduce. Should have numeric type.
- axis: The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)].
- keepdims: If true, retains reduced dimensions with length 1.
- name: A name for the operation (optional).
- reduction_indices: The old (deprecated) name for axis.
- keep_dims: Deprecated alias for keepdims.

Returns:
The reduced tensor.
```python
x = tf.constant([1, 0, 1, 0])
tf.reduce_mean(x)  # 0 (integer input, so the mean is truncated to an integer)
y = tf.constant([1., 0., 1., 0.])
tf.reduce_mean(y)  # 0.5

x = tf.constant([[1., 1.], [2., 2.]])
tf.reduce_mean(x)     # 1.5
tf.reduce_mean(x, 0)  # [1.5, 1.5]: collapses the rows (dimension 0), averaging down each column
tf.reduce_mean(x, 1)  # [1., 2.]: collapses the columns (dimension 1), averaging within each row
```
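The keepdims argument listed above keeps the reduced dimension with length 1; a quick sketch of the difference:

```python
import tensorflow as tf

x = tf.constant([[1., 1.], [2., 2.]])

with tf.Session() as sess:
    print(sess.run(tf.reduce_mean(x, 1)))                 # [1. 2.], shape (2,)
    print(sess.run(tf.reduce_mean(x, 1, keepdims=True)))  # [[1.] [2.]], shape (2, 1)
```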
3. cross_entropy (cross entropy)
- tf.nn.sigmoid_cross_entropy_with_logits
- tf.nn.softmax_cross_entropy_with_logits
- tf.nn.sparse_softmax_cross_entropy_with_logits
- tf.nn.weighted_cross_entropy_with_logits
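These four ops differ mainly in how the labels are interpreted: sigmoid_cross_entropy_with_logits scores each class independently (multi-label), softmax_cross_entropy_with_logits expects mutually exclusive classes given as one-hot (or soft) label distributions, sparse_softmax_cross_entropy_with_logits takes integer class indices instead of one-hot vectors, and weighted_cross_entropy_with_logits is the sigmoid variant with a pos_weight factor for re-weighting positive examples. A minimal sketch comparing the two softmax variants (the logits and labels are arbitrary illustrative values):

```python
import tensorflow as tf

logits = tf.constant([[2.0, -1.0, 3.0]])        # unnormalized scores for 3 classes
onehot_labels = tf.constant([[0.0, 0.0, 1.0]])  # class 2 as a one-hot vector
sparse_labels = tf.constant([2])                # class 2 as an integer index

softmax_loss = tf.nn.softmax_cross_entropy_with_logits(
    labels=onehot_labels, logits=logits)
sparse_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=sparse_labels, logits=logits)

with tf.Session() as sess:
    # For a hard (one-hot) label the two losses give the same value.
    print(sess.run([softmax_loss, sparse_loss]))
```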
4. tf.matmul(v1,v2):
Matrix multiplication, which is different from `*`: `*` multiplies corresponding elements position by position, while `matmul` performs true matrix multiplication.
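A minimal contrast of the two operations on 2×2 matrices:

```python
import tensorflow as tf

a = tf.constant([[1., 2.], [3., 4.]])
b = tf.constant([[5., 6.], [7., 8.]])

with tf.Session() as sess:
    print(sess.run(a * b))            # element-wise: [[ 5. 12.] [21. 32.]]
    print(sess.run(tf.matmul(a, b)))  # matrix product: [[19. 22.] [43. 50.]]
```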
5. tf.where/tf.greater
`tf.greater(v1, v2)` compares two tensors element-wise and returns a boolean tensor; `tf.where(cond, v1, v2)` then selects elements from v1 where cond is True and from v2 otherwise.

```python
import tensorflow as tf

v1 = tf.constant([1., 2., 3.])
v2 = tf.constant([3., 1., 4.])

with tf.Session() as sess:
    great = tf.greater(v1, v2)
    print(sess.run(great))
    # [False  True False]
    where = tf.where(great, v1, v2)
    print(where)
    # Tensor("Select:0", shape=(3,), dtype=float32)
    print(sess.run(where))
    # [3. 2. 4.]
```
6. tf.train.exponential_decay:
```python
# Applies exponential decay to the learning rate.
def exponential_decay(learning_rate,
                      global_step,
                      decay_steps,
                      decay_rate,
                      staircase=False,
                      name=None):
```
Returns:
The decayed learning rate. It is computed as:
decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)
Args:
- learning_rate: A scalar float32 or float64 Tensor or a Python number. The initial learning rate.
- global_step: A scalar int32 or int64 Tensor or a Python number. Global step to use for the decay computation. Must not be negative.
- decay_steps: A scalar int32 or int64 Tensor or a Python number. Must be positive. See the decay computation above.
- decay_rate: A scalar float32 or float64 Tensor or a Python number. The decay rate.
- staircase: Boolean. If True, decay the learning rate at discrete intervals.
- name: String. Optional name of the operation. Defaults to 'ExponentialDecay'.
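A minimal usage sketch; the 0.1/100/0.96 hyper-parameters and the GradientDescentOptimizer are illustrative assumptions, and loss stands for your model's loss tensor:

```python
import tensorflow as tf

global_step = tf.Variable(0, trainable=False)

# Start at 0.1 and multiply by 0.96 every 100 steps; staircase=True
# makes the decay happen at discrete 100-step intervals.
learning_rate = tf.train.exponential_decay(
    learning_rate=0.1,
    global_step=global_step,
    decay_steps=100,
    decay_rate=0.96,
    staircase=True)

optimizer = tf.train.GradientDescentOptimizer(learning_rate)
# train_op = optimizer.minimize(loss, global_step=global_step)
# Passing global_step to minimize() makes it increment on every
# training step, which is what drives the decay schedule.
```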
7. tf.argmax:
```python
# Returns the index with the largest value across dimensions of a tensor.
def argmax(input,
           axis=None,
           name=None,
           dimension=None,
           output_type=dtypes.int64):
```
Explanation: `tf.argmax(V, 1)`:
V is a tensor, and the 1 means the max is taken only along dimension 1, i.e., for each row, the index of the largest value is returned.
Example:

```python
import tensorflow as tf

V = tf.constant([[1, 2, 3], [2, 3, 4]])

Max = tf.argmax(V, 1)
print(Max.eval(session=tf.Session()))
# [2 2]  the index of the largest value in each row

Max2 = tf.argmax(V, 0)
print(Max2.eval(session=tf.Session()))
# [1 1 1]  the index of the largest value in each column
```