The learning rate is the step size of each parameter update:
w_{n+1} (updated parameter) = w_n (current parameter) - learning_rate * (derivative of the loss function w.r.t. w_n)
Example:
Find the point where the loss function is smallest.
I'm not sure why we don't just set the gradient to 0 and solve directly; probably because a closed-form solution rarely exists for real models, and a point where the gradient vanishes may only be a local minimum.
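The update rule can be checked by hand without TensorFlow. For loss = (w+1)^2 the derivative is 2(w+1), so each step is w ← w - lr·2(w+1). A minimal plain-Python sketch using the same lr = 0.2 and w0 = 5 as the TF example below:

```python
# Gradient descent on loss = (w + 1)^2 by hand: d(loss)/dw = 2 * (w + 1)
w = 5.0
learning_rate = 0.2
for step in range(10):
    grad = 2 * (w + 1)            # analytic derivative of the loss
    w = w - learning_rate * grad  # the update rule above
    print("step", step, "w =", w, "loss =", (w + 1) ** 2)
# after 10 steps w is close to the minimum at w = -1
```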
#tf_3_5.py
# Loss function loss=(w+1)^2; initialize w to 5 and find the optimal w by backpropagation
import tensorflow as tf

w = tf.Variable(tf.constant(5, dtype=tf.float32))
loss = tf.square(w + 1)
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)

with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    for i in range(10):
        sess.run(train_step)
        w_val = sess.run(w)
        loss_val = sess.run(loss)
        print("w is", w_val, "loss is", loss_val)
Setting the learning rate:
With learning_rate=1 the iterates oscillate and never converge; with 0.001 the learning rate is too small and convergence is far too slow, requiring too many iterations.
So choose a suitable learning rate for the problem at hand: large enough not to converge too slowly, small enough not to oscillate.
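The oscillation and slow-convergence behaviour can be reproduced in plain Python on the same toy loss (w+1)^2, starting from w = 5:

```python
def run(lr, steps=40, w=5.0):
    """Run gradient descent on loss = (w + 1)^2 and return the final w."""
    for _ in range(steps):
        w = w - lr * 2 * (w + 1)
    return w

print(run(1.0))    # oscillates between 5 and -7, never converges
print(run(0.001))  # barely moves toward -1: far too slow
print(run(0.2))    # converges very close to the minimum at -1
```

With lr = 1 each step maps w to -w - 2, which just bounces between 5 and -7 forever; with lr = 0.001 the step shrinks the error by only 0.2% per iteration.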
Exponentially decaying learning rate
The decay step is tied to the batch size:
learning_rate = LEARNING_RATE_BASE * LEARNING_RATE_DECAY ^ (global_step / LEARNING_RATE_STEP), where LEARNING_RATE_STEP is usually set to (total number of samples) / batch_size
global_step=tf.Variable(0,trainable=False)
learning_rate=tf.train.exponential_decay(LEARNING_RATE_BASE,global_step,LEARNING_RATE_STEP,LEARNING_RATE_DECAY,staircase=True)
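With staircase=True the exponent is truncated to an integer (global_step // LEARNING_RATE_STEP), so the rate drops in steps rather than continuously. A plain-Python sketch of the schedule, using the same constants as tf_3_6.py below:

```python
LEARNING_RATE_BASE = 0.1
LEARNING_RATE_DECAY = 0.99
LEARNING_RATE_STEP = 2

def decayed_lr(global_step, staircase=True):
    # mirrors the formula computed by tf.train.exponential_decay
    if staircase:
        exponent = global_step // LEARNING_RATE_STEP  # integer division: stepwise drops
    else:
        exponent = global_step / LEARNING_RATE_STEP   # real-valued: smooth decay
    return LEARNING_RATE_BASE * LEARNING_RATE_DECAY ** exponent

for step in range(6):
    print(step, decayed_lr(step))
# steps 0-1 use 0.1, steps 2-3 use 0.099, steps 4-5 use 0.09801, ...
```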
#tf_3_6.py
# Use an exponentially decaying learning rate: a high descent speed in the early
# iterations, reaching a well-converged value in fewer training steps
import tensorflow as tf

LEARNING_RATE_BASE = 0.1
LEARNING_RATE_DECAY = 0.99
LEARNING_RATE_STEP = 2  # update the learning rate after this many batches have been fed

# current step count
global_step = tf.Variable(0, trainable=False)
# exponentially decaying learning rate
learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE, global_step, LEARNING_RATE_STEP, LEARNING_RATE_DECAY, staircase=True)

w = tf.Variable(tf.constant(5, dtype=tf.float32))
loss = tf.square(w + 1)
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    for i in range(40):
        sess.run(train_step)
        learning_rate_val = sess.run(learning_rate)
        global_step_val = sess.run(global_step)
        w_val = sess.run(w)
        loss_val = sess.run(loss)
        print("w is", w_val, "learning_rate_val is", learning_rate_val, "global_step_val is", global_step_val, "loss_val is", loss_val)
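The whole tf_3_6.py loop can be sanity-checked without TensorFlow: apply the staircase schedule by hand and use the analytic gradient 2(w+1). A sketch with the same constants:

```python
LEARNING_RATE_BASE = 0.1
LEARNING_RATE_DECAY = 0.99
LEARNING_RATE_STEP = 2

w = 5.0
for global_step in range(40):
    # staircase schedule: the rate drops once every LEARNING_RATE_STEP batches
    lr = LEARNING_RATE_BASE * LEARNING_RATE_DECAY ** (global_step // LEARNING_RATE_STEP)
    w = w - lr * 2 * (w + 1)  # gradient descent step on loss = (w + 1)^2
print("final w:", w)  # close to the minimum at -1
```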