See the file learning_rate_decay.py in TensorFlow for more details.
Exponential decay
def exponential_decay(learning_rate, global_step, decay_steps, decay_rate,
                      staircase=False, name=None)
global_step is the current training step; decay_steps is the number of steps over which the rate decays to learning_rate * decay_rate.
When staircase is True, global_step / decay_steps uses integer division, so the rate falls in a staircase pattern:
the learning rate drops once every decay_steps steps.
The smaller decay_rate is, the faster the learning rate decays.
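Concretely, the decayed rate is learning_rate * decay_rate ^ (global_step / decay_steps). A minimal plain-Python sketch of the schedule (the helper name exponential_decay_value is illustrative, not part of TensorFlow):

import math

def exponential_decay_value(lr, global_step, decay_steps, decay_rate, staircase=False):
    # decayed_lr = lr * decay_rate ** (global_step / decay_steps)
    p = global_step / decay_steps
    if staircase:
        p = math.floor(p)  # integer division -> staircase-shaped curve
    return lr * decay_rate ** p

# With lr=0.1, decay_steps=10000, decay_rate=0.96, staircase=True:
# step 0 -> 0.1, step 10000 -> 0.096, step 20000 -> 0.09216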
# Decay every 10000 steps with a base of 0.96:
global_step = tf.Variable(0, trainable=False)
starter_learning_rate = 0.1
learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step=global_step,
                                           decay_steps=10000, decay_rate=0.96, staircase=True)
# Passing global_step to minimize() increments it on every training step.
learning_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
Polynomial decay
def polynomial_decay(learning_rate, global_step, decay_steps,
                     end_learning_rate=0.0001, power=1.0, cycle=False, name=None)
cycle controls whether the learning rate rises again after it has decayed.
Note that here global_step = min(global_step, decay_steps),
so the learning rate reaches end_learning_rate within decay_steps steps.
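The full schedule is (learning_rate - end_learning_rate) * (1 - global_step / decay_steps) ^ power + end_learning_rate. A minimal sketch, assuming the cycle handling below matches the docstring behavior (the helper name is illustrative):

import math

def polynomial_decay_value(lr, global_step, decay_steps,
                           end_lr=0.0001, power=1.0, cycle=False):
    if cycle:
        # decay_steps grows to the next multiple of itself, so the
        # rate jumps back up and decays again each cycle
        decay_steps = decay_steps * max(1, math.ceil(global_step / decay_steps))
    else:
        global_step = min(global_step, decay_steps)
    frac = 1 - global_step / decay_steps
    return (lr - end_lr) * frac ** power + end_lr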
# Decay from 0.1 to 0.01 in 10000 steps using a square root (i.e. power=0.5):
global_step = tf.Variable(0, trainable=False)
starter_learning_rate = 0.1
end_learning_rate = 0.01
learning_rate = tf.train.polynomial_decay(starter_learning_rate, global_step=global_step,
                                          decay_steps=10000, end_learning_rate=end_learning_rate,
                                          power=0.5)
learning_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
Natural exponential decay
def natural_exp_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None)
# Decay exponentially with a rate of k = 0.5:
global_step = tf.Variable(0, trainable=False)
learning_rate = 0.1
decay_steps = 1.0  # required positional argument of natural_exp_decay
k = 0.5  # decay_rate
learning_rate = tf.train.natural_exp_decay(learning_rate, global_step,
                                           decay_steps, k)
learning_step = (
    tf.train.GradientDescentOptimizer(learning_rate)
    .minimize(...my loss..., global_step=global_step)
)
Here, the larger decay_rate is, the faster the learning rate decays.
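The formula is learning_rate * exp(-decay_rate * global_step / decay_steps), which is why a larger decay_rate means a steeper drop. A minimal sketch (the helper name is illustrative):

import math

def natural_exp_decay_value(lr, global_step, decay_steps, decay_rate, staircase=False):
    p = global_step / decay_steps
    if staircase:
        p = math.floor(p)
    return lr * math.exp(-decay_rate * p)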
Inverse time decay
def inverse_time_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None)
# Decay 1/t with a rate of 0.5:
global_step = tf.Variable(0, trainable=False)
learning_rate = 0.1
decay_steps = 1.0
decay_rate = 0.5
learning_rate = tf.train.inverse_time_decay(learning_rate, global_step,
                                            decay_steps, decay_rate)
learning_step = (
    tf.train.GradientDescentOptimizer(learning_rate)
    .minimize(...my loss..., global_step=global_step)
)
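inverse_time_decay computes learning_rate / (1 + decay_rate * global_step / decay_steps), i.e. a 1/t falloff. A minimal sketch (the helper name is illustrative):

import math

def inverse_time_decay_value(lr, global_step, decay_steps, decay_rate, staircase=False):
    p = global_step / decay_steps
    if staircase:
        p = math.floor(p)
    return lr / (1 + decay_rate * p)

# With lr=0.1, decay_steps=1.0, decay_rate=0.5:
# step 0 -> 0.1, step 1 -> 0.0667, step 2 -> 0.05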
Comparison:
Network: conv + relu + maxpool + linear_fc
Below is a comparison of how different initial learning rates affect the network:
Running again with lr = 0.1, the initial loss fluctuates much more:
This shows that the larger the initial learning rate, the less stable the network's training.