Much of the formulas and basic theory below is excerpted from *Dive into Deep Learning* (TF2.0 edition).
Optimization and Deep Learning
Optimization poses many challenges for deep learning. Two of them are described below: local minima and saddle points.
Local Minima
Run the code:
import numpy as np
from IPython import display
from matplotlib import pyplot as plt

def use_svg_display():
    """Use svg format to display plot in jupyter"""
    display.set_matplotlib_formats('svg')

def set_figsize(figsize=(3.5, 2.5)):
    use_svg_display()
    # Set the figure size
    plt.rcParams['figure.figsize'] = figsize

# Local minimum
def f(x):
    return x * np.cos(np.pi * x)

set_figsize((4.5, 2.5))
x = np.arange(-1.0, 2.0, 0.1)
fig, = plt.plot(x, f(x))
fig.axes.annotate('local minimum', xy=(-0.3, -0.25), xytext=(-0.77, -1.0),
                  arrowprops=dict(arrowstyle='->'))
fig.axes.annotate('global minimum', xy=(1.1, -0.95), xytext=(0.6, 0.8),
                  arrowprops=dict(arrowstyle='->'))
plt.xlabel('x')
plt.ylabel('f(x)')
plt.show()
Result:
Saddle Points
Code:
import numpy as np
from IPython import display
from matplotlib import pyplot as plt

def use_svg_display():
    """Use svg format to display plot in jupyter"""
    display.set_matplotlib_formats('svg')

def set_figsize(figsize=(3.5, 2.5)):
    use_svg_display()
    # Set the figure size
    plt.rcParams['figure.figsize'] = figsize

# Saddle point
x = np.arange(-2.0, 2.0, 0.1)
fig, = plt.plot(x, x**3)
fig.axes.annotate('saddle point', xy=(0, -0.2), xytext=(-0.52, -5.0),
                  arrowprops=dict(arrowstyle='->'))
plt.xlabel('x')
plt.ylabel('f(x)')
plt.show()
Result:
Code:
import numpy as np
from mpl_toolkits.mplot3d import Axes3D  # registers the 3D projection
from IPython import display
from matplotlib import pyplot as plt

def use_svg_display():
    """Use svg format to display plot in jupyter"""
    display.set_matplotlib_formats('svg')

def set_figsize(figsize=(3.5, 2.5)):
    use_svg_display()
    # Set the figure size
    plt.rcParams['figure.figsize'] = figsize

# Saddle surface
x, y = np.mgrid[-1: 1: 31j, -1: 1: 31j]
z = x**2 - y**2
ax = plt.figure().add_subplot(111, projection='3d')
ax.plot_wireframe(x, y, z, **{'rstride': 2, 'cstride': 2})
ax.plot([0], [0], [0], 'rx')
ticks = [-1, 0, 1]
plt.xticks(ticks)
plt.yticks(ticks)
ax.set_zticks(ticks)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Result:
If the generated 3D figure collapses into a single line, as shown:
unchecking the option below (in the IDE's plot settings) restores the proper 3D rendering.
Summary:
- Since the objective function of an optimization algorithm is usually a loss function defined on the training set, the goal of optimization is to reduce the training error.
- Since the parameters of deep learning models are usually high-dimensional, saddle points of the objective function are typically more common than local minima.
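The second point can be made concrete with a tiny check. For the saddle surface f(x, y) = x² − y² plotted above, the Hessian at the origin has one positive and one negative eigenvalue, which is exactly what makes the critical point a saddle rather than a minimum. A minimal sketch:

```python
import numpy as np

# Hessian of f(x, y) = x**2 - y**2 (constant for a quadratic), evaluated at the origin
hessian = np.array([[2.0, 0.0],
                    [0.0, -2.0]])
eigvals = np.linalg.eigvalsh(hessian)   # ascending order
print(eigvals)                          # [-2.  2.]

# Eigenvalues of mixed sign -> the critical point at (0, 0) is a saddle, not a minimum
print(eigvals.min() < 0 < eigvals.max())  # True
```

In high dimensions a critical point is a minimum only when all eigenvalues are positive, which becomes increasingly unlikely as the dimension grows; this is why saddle points dominate.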
Now let's look at the specific optimization algorithms.
Gradient Descent
One-Dimensional Gradient Descent
The code is as follows:
import numpy as np
from matplotlib import pyplot as plt
from IPython import display

def use_svg_display():
    """Use svg format to display plot in jupyter"""
    display.set_matplotlib_formats('svg')

def set_figsize(figsize=(3.5, 2.5)):
    use_svg_display()
    # Set the figure size
    plt.rcParams['figure.figsize'] = figsize

def gd(eta):
    x = 10
    results = [x]
    for i in range(10):
        x -= eta * 2 * x  # the derivative of f(x) = x * x is f'(x) = 2 * x
        results.append(x)
    print('epoch 10, x:', x)
    return results

res = gd(0.2)

def show_trace(res, s=None):
    s = str(s)
    n = max(abs(min(res)), abs(max(res)), 10)
    f_line = np.arange(-n, n, 0.1)
    set_figsize()
    plt.plot(f_line, [x * x for x in f_line])
    plt.plot(res, [x * x for x in res], '-o')
    plt.xlabel('x' + ':' + s)
    plt.ylabel('f(x)')
    plt.show()

show_trace(gd(0.1), 0.1)
show_trace(gd(0.3), 0.3)
show_trace(gd(0.5), 0.5)
show_trace(gd(0.7), 0.7)
show_trace(gd(0.9), 0.9)
show_trace(gd(1.1), 1.1)
show_trace(gd(1.3), 1.3)
Observe the behavior under different learning rates (the outputs below correspond, in order, to η = 0.2, 0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3):
epoch 10, x: 0.06046617599999997
epoch 10, x: 1.0737418240000003
epoch 10, x: 0.0010485760000000007
epoch 10, x: 0.0
epoch 10, x: 0.0010485759999999981
epoch 10, x: 1.0737418240000007
epoch 10, x: 61.917364224000096
epoch 10, x: 1099.5116277760003
From the losses and plots above we can see that the smaller the learning rate, the more slowly the iterate approaches the minimum. At a learning rate of 0.5 the very first step lands exactly on the minimum, since for f(x) = x² each update multiplies x by 1 − 2η, which is 0 here (if this were a local minimum the iterate would be stuck there; since it is the global minimum, it is the perfect value). At 0.7 the first step overshoots the minimum, but the gradient at the new point is smaller, so the iterates shrink and slowly converge. At 0.9 the iterate oscillates across the minimum with decreasing amplitude and still converges, while at 1.1 and 1.3 each step amplifies x, so the gradient at the next point keeps growing and the loss increases without bound. For example, at a learning rate of 1.5, loss = 10240.0, with the plot shown below:
Learning Rate
Comparing the plots across learning rates, a pattern emerges. A smaller learning rate approaches the minimum more reliably, but if it is too small there are two problems: the iterate can get trapped in a local minimum, and training requires many more epochs. A larger learning rate, on the other hand, can make the loss grow. Learning to tune the learning rate in search of a good solution is essential.
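For this particular objective the pattern can be stated exactly. With f(x) = x², each update multiplies x by (1 − 2η), so the iterates converge if and only if |1 − 2η| < 1, i.e. 0 < η < 1, and η = 0.5 reaches the minimum in a single step. A quick check against the printed values above:

```python
def gd_final_x(eta, x=10.0, steps=10):
    # For f(x) = x**2 each step multiplies x by (1 - 2 * eta)
    for _ in range(steps):
        x -= eta * 2 * x
    return x

print(gd_final_x(0.5))  # 0.0: one step lands on the minimum
print(gd_final_x(0.2))  # ~0.0605, matching the printed 'epoch 10' value
print(gd_final_x(1.3))  # ~1099.5: |1 - 2*eta| > 1, so the iterates diverge
```

The same multiplier explains every run: 0.6¹⁰·10 ≈ 0.06 for η = 0.2, and (−1.6)¹⁰·10 ≈ 1099.5 for η = 1.3.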
Multi-Dimensional Gradient Descent
The code is as follows:
import numpy as np
from matplotlib import pyplot as plt

# 2D plotting
def train_2d(trainer, eta):
    x1, x2, s1, s2 = -5, -2, 0, 0  # s1 and s2 are state variables, used later in this chapter
    results = [(x1, x2)]
    for i in range(20):
        x1, x2, s1, s2 = trainer(x1, x2, s1, s2, eta)
        results.append((x1, x2))
    print('epoch %d, x1 %f, x2 %f' % (i + 1, x1, x2))
    return results

def show_trace_2d(f, results):
    plt.plot(*zip(*results), '-o', color='#ff7f0e')
    x1, x2 = np.meshgrid(np.arange(-5.5, 1.0, 0.1), np.arange(-3.0, 1.0, 0.1))
    plt.contour(x1, x2, f(x1, x2), colors='#1f77b4')
    plt.xlabel('x1')
    plt.ylabel('x2')
    plt.show()

def f_2d(x1, x2):  # objective function
    return x1 ** 2 + 2 * x2 ** 2

def gd_2d(x1, x2, s1, s2, eta):
    return (x1 - eta * 2 * x1, x2 - eta * 4 * x2, 0, 0)

show_trace_2d(f_2d, train_2d(gd_2d, 0.1))
show_trace_2d(f_2d, train_2d(gd_2d, 0.3))
show_trace_2d(f_2d, train_2d(gd_2d, 0.5))
show_trace_2d(f_2d, train_2d(gd_2d, 0.7))
The figures below show the trajectories for learning rates 0.1, 0.3, 0.5, and 0.7 in turn.
With learning rates 0.1 and 0.3 the loss keeps falling and both x1 and x2 converge. At 0.5, x1 lands exactly on 0 in one step, but x2 is multiplied by −1 each step, so it only flips sign without shrinking; at 0.7, x2 diverges. The conclusions mirror the one-dimensional case, and the same pattern carries over to higher dimensions.
Stochastic Gradient Descent
The iteration trajectory of the independent variables under stochastic gradient descent is noticeably more tortuous than under gradient descent. This is due to the noise added in the experiment, which lowers the accuracy of the simulated stochastic gradients. In practice, such noise usually comes from meaningless perturbations in individual training examples.
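The noisy experiment referred to above can be sketched as follows, assuming the same objective f(x1, x2) = x1² + 2·x2² as before, with Gaussian noise added to each gradient component to mimic single-sample gradients (the helper names follow the earlier code; the noise parameters are illustrative):

```python
import numpy as np

eta = 0.1

def sgd_2d(x1, x2, s1, s2):
    # true gradient (2*x1, 4*x2) plus noise, simulating a stochastic gradient
    return (x1 - eta * (2 * x1 + np.random.normal(0.1)),
            x2 - eta * (4 * x2 + np.random.normal(0.1)), 0, 0)

def train_2d(trainer):
    x1, x2, s1, s2 = -5, -2, 0, 0
    results = [(x1, x2)]
    for i in range(20):
        x1, x2, s1, s2 = trainer(x1, x2, s1, s2)
        results.append((x1, x2))
    print('epoch %d, x1 %f, x2 %f' % (i + 1, x1, x2))
    return results

results = train_2d(sgd_2d)  # a jagged path that still drifts toward (0, 0)
```

Plotting `results` with the earlier `show_trace_2d` produces the tortuous trajectory described above.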
Summary:
- With a suitable learning rate, updating the parameters in the direction opposite the gradient tends to lower the objective value. Gradient descent repeats this update until a satisfactory solution is reached.
- A learning rate that is too large or too small causes problems; a suitable one usually has to be found through repeated experiments.
- When the training set is large, each iteration of gradient descent is computationally expensive, so stochastic gradient descent is usually preferred.
Mini-Batch Stochastic Gradient Descent
In this chapter we use a NASA dataset of noise measurements on different airfoils to compare the optimization algorithms.
Implementation from scratch:
import numpy as np
import time
import tensorflow as tf
from matplotlib import pyplot as plt
from IPython import display

def use_svg_display():
    """Use svg format to display plot in jupyter"""
    display.set_matplotlib_formats('svg')

def set_figsize(figsize=(3.5, 2.5)):
    use_svg_display()
    # Set the figure size
    plt.rcParams['figure.figsize'] = figsize

# Load the first 1,500 examples of the dataset
def get_data_ch7():
    data = np.genfromtxt('/Users/ren/PycharmProjects/入組學習/動手學深度學習/data/airfoil_self_noise.dat', delimiter='\t')
    # Standardize
    data = (data - data.mean(axis=0)) / data.std(axis=0)
    return (tf.convert_to_tensor(data[:1500, :-1], dtype=tf.float32),
            tf.convert_to_tensor(data[:1500, -1], dtype=tf.float32))

features, labels = get_data_ch7()
print(features.shape)

def sgd(params, states, hyperparams, grads):
    for i, p in enumerate(params):
        p.assign_sub(hyperparams['lr'] * grads[i])

# Initialize a linear regression model
def linreg(X, w, b):
    return tf.matmul(X, w) + b

def squared_loss(y_hat, y):
    # Note that a vector is returned here; also, MSELoss in pytorch does not divide by 2
    return (y_hat - tf.reshape(y, y_hat.shape)) ** 2 / 2

def train_ch7(optimizer_fn, states, hyperparams, features, labels,
              batch_size=10, num_epochs=2):
    # Initialize the model
    net, loss = linreg, squared_loss
    w = tf.Variable(np.random.normal(0, 0.01, size=(features.shape[1], 1)), dtype=tf.float32)
    b = tf.Variable(tf.zeros(1, dtype=tf.float32))

    def eval_loss():
        return np.array(tf.reduce_mean(loss(net(features, w, b), labels)))

    ls = [eval_loss()]
    data_iter = tf.data.Dataset.from_tensor_slices((features, labels)).batch(batch_size)
    data_iter = data_iter.shuffle(100)
    for _ in range(num_epochs):
        start = time.time()
        for batch_i, (X, y) in enumerate(data_iter):
            with tf.GradientTape() as tape:
                l = tf.reduce_mean(loss(net(X, w, b), y))  # use the mean loss
            grads = tape.gradient(l, [w, b])
            optimizer_fn([w, b], states, hyperparams, grads)  # update the model parameters
            if (batch_i + 1) * batch_size % 100 == 0:
                ls.append(eval_loss())  # record the training loss every 100 examples
    # Print the result and plot
    print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
    set_figsize()
    plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.show()

def train_sgd(lr, batch_size, num_epochs=2):
    train_ch7(sgd, None, {'lr': lr}, features, labels, batch_size, num_epochs)

# Learning rate 1, gradient descent, 6 epochs
train_sgd(1, 1500, 6)
# Learning rate 0.005, stochastic gradient descent, 6 epochs
train_sgd(0.005, 1, 6)
# Learning rate 0.005, mini-batch stochastic gradient descent, 6 epochs
train_sgd(0.005, 10, 6)
Result plots:
With gradient descent, one epoch performs only a single parameter update. We can see that after six epochs the decline in the objective value (training loss) has leveled off.
When the batch size is 1, the optimization is stochastic gradient descent. Here the learning rate is not decayed over time; a small constant learning rate is used instead. In stochastic gradient descent, the model parameters are updated once per example, so one epoch performs 1,500 updates. We can see that the objective value decreases rather noisily, as shown earlier.
Although stochastic gradient descent and gradient descent both process 1,500 examples per epoch, stochastic gradient descent takes longer per epoch in the experiment. That is because it performs many more parameter updates per epoch, and single-example gradient computation makes poor use of vectorized operations.
When the batch size is 10, the optimization is mini-batch stochastic gradient descent. Its per-epoch runtime falls between that of gradient descent and that of stochastic gradient descent.
Concise implementation:
from tensorflow.keras import optimizers
from matplotlib import pyplot as plt
from IPython import display
import tensorflow as tf
import numpy as np
import time

def use_svg_display():
    """Use svg format to display plot in jupyter"""
    display.set_matplotlib_formats('svg')

def set_figsize(figsize=(3.5, 2.5)):
    use_svg_display()
    # Set the figure size
    plt.rcParams['figure.figsize'] = figsize

# Load the first 1,500 examples of the dataset
def get_data_ch7():
    data = np.genfromtxt('/Users/ren/PycharmProjects/入組學習/動手學深度學習/data/airfoil_self_noise.dat', delimiter='\t')
    # Standardize
    data = (data - data.mean(axis=0)) / data.std(axis=0)
    return (tf.convert_to_tensor(data[:1500, :-1], dtype=tf.float32),
            tf.convert_to_tensor(data[:1500, -1], dtype=tf.float32))

features, labels = get_data_ch7()
trainer = optimizers.SGD(learning_rate=0.05)

def train_tensorflow2_ch7(trainer, trainer_hyperparams, features, labels,
                          batch_size=10, num_epochs=2):
    # Initialize the model
    net = tf.keras.Sequential()
    net.add(tf.keras.layers.Dense(1))
    loss = tf.losses.MeanSquaredError()

    def eval_loss():
        return np.array(tf.reduce_mean(loss(net(features), labels)))

    ls = [eval_loss()]
    data_iter = tf.data.Dataset.from_tensor_slices((features, labels)).batch(batch_size)
    data_iter = data_iter.shuffle(100)
    # The trainer instance updates the model parameters
    for _ in range(num_epochs):
        start = time.time()
        for batch_i, (X, y) in enumerate(data_iter):
            with tf.GradientTape() as tape:
                l = tf.reduce_mean(loss(net(X), y))  # use the mean loss
            grads = tape.gradient(l, net.trainable_variables)
            trainer.apply_gradients(zip(grads, net.trainable_variables))  # update the model parameters
            if (batch_i + 1) * batch_size % 100 == 0:
                ls.append(eval_loss())  # record the training loss every 100 examples
    # Print the result and plot
    print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
    set_figsize()
    plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.show()

# Usage
train_tensorflow2_ch7(trainer, {"lr": 0.05}, features, labels, 10)
Summary:
- Mini-batch stochastic gradient descent samples a mini-batch of training examples uniformly at random in each iteration to compute the gradient.
- In practice, the learning rate of (mini-batch) stochastic gradient descent can be decayed over the course of training.
- Typically, the per-epoch runtime of mini-batch stochastic gradient descent falls between that of gradient descent and that of stochastic gradient descent.
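One way to see why the three runs above behave so differently is to count parameter updates per epoch: with 1,500 examples, the number of updates in one epoch equals the number of batches. A trivial sketch:

```python
n_samples = 1500
for batch_size in [1500, 1, 10]:
    updates_per_epoch = n_samples // batch_size
    print('batch size %4d -> %4d updates per epoch' % (batch_size, updates_per_epoch))
# batch size 1500 ->    1 updates per epoch  (gradient descent)
# batch size    1 -> 1500 updates per epoch  (stochastic gradient descent)
# batch size   10 ->  150 updates per epoch  (mini-batch SGD)
```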
Momentum
The Problem with Gradient Descent
import numpy as np
from matplotlib import pyplot as plt

eta = 0.4  # learning rate

def show_trace_2d(f, results):
    plt.plot(*zip(*results), '-o', color='#ff7f0e')
    x1, x2 = np.meshgrid(np.arange(-5.5, 1.0, 0.1), np.arange(-3.0, 1.0, 0.1))
    plt.contour(x1, x2, f(x1, x2), colors='#1f77b4')
    plt.xlabel('x1')
    plt.ylabel('x2')
    plt.show()

def train_2d(trainer):
    x1, x2, s1, s2 = -5, -2, 0, 0  # s1 and s2 are state variables, used later in this chapter
    results = [(x1, x2)]
    for i in range(20):
        x1, x2, s1, s2 = trainer(x1, x2, s1, s2)
        results.append((x1, x2))
    print('epoch %d, x1 %f, x2 %f' % (i + 1, x1, x2))
    return results

def f_2d(x1, x2):
    return 0.1 * x1 ** 2 + 2 * x2 ** 2

def gd_2d(x1, x2, s1, s2):
    return (x1 - eta * 0.2 * x1, x2 - eta * 4 * x2, 0, 0)

show_trace_2d(f_2d, train_2d(gd_2d))
Result:
We can see that at the same position, the slope of the objective function has a larger absolute value in the vertical direction (the x2 axis) than in the horizontal direction (the x1 axis). So for a given learning rate, gradient descent moves the iterate further vertically than horizontally at each step. We therefore need a small learning rate to keep the iterate from overshooting the optimum in the vertical direction, but that slows its progress toward the optimum in the horizontal direction.
If we instead raise the learning rate slightly (to 0.6), the iterate repeatedly overshoots the optimum in the vertical direction and gradually diverges, as shown below:
The code is as follows:
import numpy as np
from matplotlib import pyplot as plt

eta = 0.6  # learning rate

def show_trace_2d(f, results):
    plt.plot(*zip(*results), '-o', color='#ff7f0e')
    x1, x2 = np.meshgrid(np.arange(-5.5, 1.0, 0.1), np.arange(-3.0, 1.0, 0.1))
    plt.contour(x1, x2, f(x1, x2), colors='#1f77b4')
    plt.xlabel('x1')
    plt.ylabel('x2')
    plt.show()

def train_2d(trainer):
    x1, x2, s1, s2 = -5, -2, 0, 0  # s1 and s2 are state variables, used later in this chapter
    results = [(x1, x2)]
    for i in range(20):
        x1, x2, s1, s2 = trainer(x1, x2, s1, s2)
        results.append((x1, x2))
    print('epoch %d, x1 %f, x2 %f' % (i + 1, x1, x2))
    return results

def f_2d(x1, x2):
    return 0.1 * x1 ** 2 + 2 * x2 ** 2

def gd_2d(x1, x2, s1, s2):
    return (x1 - eta * 0.2 * x1, x2 - eta * 4 * x2, 0, 0)

def momentum_2d(x1, x2, v1, v2):
    v1 = gamma * v1 + eta * 0.2 * x1
    v2 = gamma * v2 + eta * 4 * x2
    return x1 - v1, x2 - v2, v1, v2

eta, gamma = 0.4, 0.5
show_trace_2d(f_2d, train_2d(momentum_2d))
eta, gamma = 0.6, 0.5
show_trace_2d(f_2d, train_2d(momentum_2d))
Results (learning rates 0.4 and 0.6, with gamma = 0.5):
We can see that with the smaller learning rate η = 0.4 and momentum hyperparameter γ = 0.5, momentum moves much more smoothly in the vertical direction and approaches the optimum faster in the horizontal direction. With the larger learning rate η = 0.6, the iterate no longer diverges.
Exponentially Weighted Moving Average (How Momentum Works)
Put plainly, momentum folds recent history into each update. Without momentum, the iterate swings widely along the x2 axis. With momentum, from the second step onward the large first swing damps the second one; by the third step the already-damped second swing matters more than the first, so the trajectory settles down. Along the x1 axis the successive gradients point the same way, so momentum changes little there; but it lets us use a larger learning rate, so x1 converges faster while x2 still does not diverge, which is exactly the fast convergence we want.
The parameter gamma can be read as a weighted average over roughly 1/(1 − gamma) recent steps (e.g. 0.95 → 20 steps, 0.9 → 10 steps, 0.5 → 2 steps).
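That 1/(1 − gamma) reading can be checked numerically: under exponential weighting, the gradient from k steps ago carries weight (1 − γ)γᵏ, the weights sum to (almost) 1, and the most recent 1/(1 − γ) steps carry the bulk of the total weight (about 1 − e⁻¹ ≈ 63%). A small sketch:

```python
import numpy as np

def ewma_weights(gamma, n):
    # weight placed on the gradient from k steps ago, k = 0 .. n-1
    return (1 - gamma) * gamma ** np.arange(n)

for gamma in [0.5, 0.9, 0.95]:
    w = ewma_weights(gamma, 1000)
    window = round(1 / (1 - gamma))
    # mass carried by the most recent `window` steps: 0.75, ~0.65, ~0.64
    print(gamma, window, w[:window].sum())
```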
Implementation by hand:
import numpy as np
import time
import tensorflow as tf
from matplotlib import pyplot as plt
from IPython import display

def use_svg_display():
    """Use svg format to display plot in jupyter"""
    display.set_matplotlib_formats('svg')

def set_figsize(figsize=(3.5, 2.5)):
    use_svg_display()
    # Set the figure size
    plt.rcParams['figure.figsize'] = figsize

def get_data_ch7():
    data = np.genfromtxt('/Users/ren/PycharmProjects/入組學習/動手學深度學習/data/airfoil_self_noise.dat', delimiter='\t')
    # Standardize
    data = (data - data.mean(axis=0)) / data.std(axis=0)
    return (tf.convert_to_tensor(data[:1500, :-1], dtype=tf.float32),
            tf.convert_to_tensor(data[:1500, -1], dtype=tf.float32))

features, labels = get_data_ch7()

def init_momentum_states():
    # tf.Variables, so the velocity state persists across update calls
    v_w = tf.Variable(tf.zeros((features.shape[1], 1)))
    v_b = tf.Variable(tf.zeros(1))
    return (v_w, v_b)

def sgd_momentum(params, states, hyperparams, grads):
    for i, (p, v) in enumerate(zip(params, states)):
        v.assign(hyperparams['momentum'] * v + hyperparams['lr'] * grads[i])
        p.assign_sub(v)

# Initialize a linear regression model
def linreg(X, w, b):
    return tf.matmul(X, w) + b

def squared_loss(y_hat, y):
    # Note that a vector is returned here; also, MSELoss in pytorch does not divide by 2
    return (y_hat - tf.reshape(y, y_hat.shape)) ** 2 / 2

def train_ch7(optimizer_fn, states, hyperparams, features, labels,
              batch_size=10, num_epochs=2):
    # Initialize the model
    net, loss = linreg, squared_loss
    w = tf.Variable(np.random.normal(0, 0.01, size=(features.shape[1], 1)), dtype=tf.float32)
    b = tf.Variable(tf.zeros(1, dtype=tf.float32))

    def eval_loss():
        return np.array(tf.reduce_mean(loss(net(features, w, b), labels)))

    ls = [eval_loss()]
    data_iter = tf.data.Dataset.from_tensor_slices((features, labels)).batch(batch_size)
    data_iter = data_iter.shuffle(100)
    for _ in range(num_epochs):
        start = time.time()
        for batch_i, (X, y) in enumerate(data_iter):
            with tf.GradientTape() as tape:
                l = tf.reduce_mean(loss(net(X, w, b), y))  # use the mean loss
            grads = tape.gradient(l, [w, b])
            optimizer_fn([w, b], states, hyperparams, grads)  # update the model parameters
            if (batch_i + 1) * batch_size % 100 == 0:
                ls.append(eval_loss())  # record the training loss every 100 examples
    # Print the result and plot
    print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
    set_figsize()
    plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.show()

train_ch7(sgd_momentum, init_momentum_states(),
          {'lr': 0.02, 'momentum': 0.5}, features, labels)
Concise implementation (in TensorFlow, we can use momentum simply by specifying the momentum argument of the optimizer):
import numpy as np
import time
from IPython import display
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow.keras import optimizers

trainer = optimizers.SGD(learning_rate=0.004, momentum=0.9)

def use_svg_display():
    """Use svg format to display plot in jupyter"""
    display.set_matplotlib_formats('svg')

def set_figsize(figsize=(3.5, 2.5)):
    use_svg_display()
    # Set the figure size
    plt.rcParams['figure.figsize'] = figsize

def get_data_ch7():
    data = np.genfromtxt('/Users/ren/PycharmProjects/入組學習/動手學深度學習/data/airfoil_self_noise.dat', delimiter='\t')
    # Standardize
    data = (data - data.mean(axis=0)) / data.std(axis=0)
    return (tf.convert_to_tensor(data[:1500, :-1], dtype=tf.float32),
            tf.convert_to_tensor(data[:1500, -1], dtype=tf.float32))

def train_tensorflow2_ch7(trainer, trainer_hyperparams, features, labels,
                          batch_size=10, num_epochs=2):
    # Initialize the model
    net = tf.keras.Sequential()
    net.add(tf.keras.layers.Dense(1))
    loss = tf.losses.MeanSquaredError()

    def eval_loss():
        return np.array(tf.reduce_mean(loss(net(features), labels)))

    ls = [eval_loss()]
    data_iter = tf.data.Dataset.from_tensor_slices((features, labels)).batch(batch_size)
    data_iter = data_iter.shuffle(100)
    # The trainer instance updates the model parameters
    for _ in range(num_epochs):
        start = time.time()
        for batch_i, (X, y) in enumerate(data_iter):
            with tf.GradientTape() as tape:
                l = tf.reduce_mean(loss(net(X), y))  # use the mean loss
            grads = tape.gradient(l, net.trainable_variables)
            trainer.apply_gradients(zip(grads, net.trainable_variables))  # update the model parameters
            if (batch_i + 1) * batch_size % 100 == 0:
                ls.append(eval_loss())  # record the training loss every 100 examples
    # Print the result and plot
    print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
    set_figsize()
    plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.show()

features, labels = get_data_ch7()
train_tensorflow2_ch7(trainer, {'lr': 0.004, 'momentum': 0.9},
                      features, labels)
Summary:
- Momentum uses the idea of an exponentially weighted moving average: it takes a weighted average of the gradients from past time steps, with weights that decay exponentially over time.
- Momentum makes the parameter updates at adjacent time steps more consistent in direction.
The AdaGrad Algorithm
import numpy as np
import math
from matplotlib import pyplot as plt

def show_trace_2d(f, results):
    plt.plot(*zip(*results), '-o', color='#ff7f0e')
    x1, x2 = np.meshgrid(np.arange(-5.5, 1.0, 0.1), np.arange(-3.0, 1.0, 0.1))
    plt.contour(x1, x2, f(x1, x2), colors='#1f77b4')
    plt.xlabel('x1')
    plt.ylabel('x2')
    plt.show()

def train_2d(trainer):
    x1, x2, s1, s2 = -5, -2, 0, 0  # s1 and s2 are state variables, used later in this chapter
    results = [(x1, x2)]
    for i in range(20):
        x1, x2, s1, s2 = trainer(x1, x2, s1, s2)
        results.append((x1, x2))
    print('epoch %d, x1 %f, x2 %f' % (i + 1, x1, x2))
    return results

def adagrad_2d(x1, x2, s1, s2):
    g1, g2, eps = 0.2 * x1, 4 * x2, 1e-6  # the first two terms are the gradients
    s1 += g1 ** 2
    s2 += g2 ** 2
    x1 -= eta / math.sqrt(s1 + eps) * g1
    x2 -= eta / math.sqrt(s2 + eps) * g2
    return x1, x2, s1, s2

def f_2d(x1, x2):
    return 0.1 * x1 ** 2 + 2 * x2 ** 2

eta = 0.4
show_trace_2d(f_2d, train_2d(adagrad_2d))
eta = 2
show_trace_2d(f_2d, train_2d(adagrad_2d))
Result:
Put bluntly, AdaGrad's effective learning rate keeps falling, at a pace set by the gradients seen so far; but because it only ever falls, late in training the steps can become so small that the iterate gets stuck near a poor (locally optimal) solution and cannot escape.
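This can be seen directly from the update rule: under a constant gradient g, the accumulator s equals t·g² after t steps, so the effective step size is roughly η/√t and shrinks forever. A minimal sketch:

```python
import math

eta, g = 0.4, 1.0  # constant gradient
s = 0.0
steps = []
for t in range(1, 6):
    s += g ** 2                                 # s = t after t steps
    steps.append(eta * g / math.sqrt(s + 1e-6))
print(steps)  # roughly eta / sqrt(t): 0.4, 0.283, 0.231, 0.2, 0.179
```

The step size decays even though the gradient never changed, which is exactly the late-training stall described above.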
Implementation from scratch:
import numpy as np
import time
import tensorflow as tf
from matplotlib import pyplot as plt
from IPython import display

def get_data_ch7():
    data = np.genfromtxt('/Users/ren/PycharmProjects/入組學習/動手學深度學習/data/airfoil_self_noise.dat', delimiter='\t')
    # Standardize
    data = (data - data.mean(axis=0)) / data.std(axis=0)
    return (tf.convert_to_tensor(data[:1500, :-1], dtype=tf.float32),
            tf.convert_to_tensor(data[:1500, -1], dtype=tf.float32))

features, labels = get_data_ch7()

def use_svg_display():
    """Use svg format to display plot in jupyter"""
    display.set_matplotlib_formats('svg')

def set_figsize(figsize=(3.5, 2.5)):
    use_svg_display()
    # Set the figure size
    plt.rcParams['figure.figsize'] = figsize

def init_adagrad_states():
    # tf.Variables, so the accumulator state persists across update calls
    s_w = tf.Variable(tf.zeros((features.shape[1], 1), dtype=tf.float32))
    s_b = tf.Variable(tf.zeros(1, dtype=tf.float32))
    return (s_w, s_b)

def adagrad(params, states, hyperparams, grads):
    eps = 1e-6
    for i, (p, s) in enumerate(zip(params, states)):
        s.assign_add(grads[i] ** 2)
        p.assign_sub(hyperparams['lr'] * grads[i] / tf.sqrt(s + eps))

# Initialize a linear regression model
def linreg(X, w, b):
    return tf.matmul(X, w) + b

def squared_loss(y_hat, y):
    # Note that a vector is returned here; also, MSELoss in pytorch does not divide by 2
    return (y_hat - tf.reshape(y, y_hat.shape)) ** 2 / 2

def train_ch7(optimizer_fn, states, hyperparams, features, labels,
              batch_size=10, num_epochs=2):
    # Initialize the model
    net, loss = linreg, squared_loss
    w = tf.Variable(np.random.normal(0, 0.01, size=(features.shape[1], 1)), dtype=tf.float32)
    b = tf.Variable(tf.zeros(1, dtype=tf.float32))

    def eval_loss():
        return np.array(tf.reduce_mean(loss(net(features, w, b), labels)))

    ls = [eval_loss()]
    data_iter = tf.data.Dataset.from_tensor_slices((features, labels)).batch(batch_size)
    data_iter = data_iter.shuffle(100)
    for _ in range(num_epochs):
        start = time.time()
        for batch_i, (X, y) in enumerate(data_iter):
            with tf.GradientTape() as tape:
                l = tf.reduce_mean(loss(net(X, w, b), y))  # use the mean loss
            grads = tape.gradient(l, [w, b])
            optimizer_fn([w, b], states, hyperparams, grads)  # update the model parameters
            if (batch_i + 1) * batch_size % 100 == 0:
                ls.append(eval_loss())  # record the training loss every 100 examples
    # Print the result and plot
    print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
    set_figsize()
    plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.show()

train_ch7(adagrad, init_adagrad_states(), {'lr': 0.1}, features, labels)
Concise implementation (with the optimizer named Adagrad, we can train the model with the AdaGrad algorithm provided by TensorFlow 2):
import numpy as np
import time
from IPython import display
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow.keras import optimizers

trainer = optimizers.Adagrad(learning_rate=0.01)

def use_svg_display():
    """Use svg format to display plot in jupyter"""
    display.set_matplotlib_formats('svg')

def set_figsize(figsize=(3.5, 2.5)):
    use_svg_display()
    # Set the figure size
    plt.rcParams['figure.figsize'] = figsize

def get_data_ch7():
    data = np.genfromtxt('/Users/ren/PycharmProjects/入組學習/動手學深度學習/data/airfoil_self_noise.dat', delimiter='\t')
    # Standardize
    data = (data - data.mean(axis=0)) / data.std(axis=0)
    return (tf.convert_to_tensor(data[:1500, :-1], dtype=tf.float32),
            tf.convert_to_tensor(data[:1500, -1], dtype=tf.float32))

def train_tensorflow2_ch7(trainer, trainer_hyperparams, features, labels,
                          batch_size=10, num_epochs=2):
    # Initialize the model
    net = tf.keras.Sequential()
    net.add(tf.keras.layers.Dense(1))
    loss = tf.losses.MeanSquaredError()

    def eval_loss():
        return np.array(tf.reduce_mean(loss(net(features), labels)))

    ls = [eval_loss()]
    data_iter = tf.data.Dataset.from_tensor_slices((features, labels)).batch(batch_size)
    data_iter = data_iter.shuffle(100)
    # The trainer instance updates the model parameters
    for _ in range(num_epochs):
        start = time.time()
        for batch_i, (X, y) in enumerate(data_iter):
            with tf.GradientTape() as tape:
                l = tf.reduce_mean(loss(net(X), y))  # use the mean loss
            grads = tape.gradient(l, net.trainable_variables)
            trainer.apply_gradients(zip(grads, net.trainable_variables))  # update the model parameters
            if (batch_i + 1) * batch_size % 100 == 0:
                ls.append(eval_loss())  # record the training loss every 100 examples
    # Print the result and plot
    print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
    set_figsize()
    plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.show()

features, labels = get_data_ch7()
train_tensorflow2_ch7(trainer, {'lr': 0.01}, features, labels)
Summary:
- AdaGrad keeps adjusting the learning rate during training, giving every element of the parameter vector its own learning rate.
- With AdaGrad, each element's learning rate only ever decreases (or stays the same) over the course of training.
The RMSProp Algorithm
Put plainly: AdaGrad's per-element learning rate only ever shrinks, whereas RMSProp combines AdaGrad with exponentially weighted averaging, so gradients from the distant past have less influence on the current step and the iterate can approach the optimum faster.
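The difference shows up clearly if we feed both state updates the same constant gradient: AdaGrad's accumulator grows without bound (so its step size decays toward zero), while RMSProp's converges to g² (so its step size levels off). A minimal sketch:

```python
g, gamma = 1.0, 0.9
s_ada = s_rms = 0.0
for _ in range(100):
    s_ada += g ** 2                                # AdaGrad: grows linearly, here to 100
    s_rms = gamma * s_rms + (1 - gamma) * g ** 2   # RMSProp: converges to g**2 = 1
print(s_ada)   # 100.0
print(s_rms)   # ~0.99997 (= 1 - 0.9**100)
```

Since the step size divides by the square root of this state, RMSProp keeps making meaningful progress long after AdaGrad's steps have vanished.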
import numpy as np
import math
from matplotlib import pyplot as plt

def train_2d(trainer):  # this function is saved in the d2lzh_tensorflow2 package for later use
    x1, x2, s1, s2 = -5, -2, 0, 0  # s1 and s2 are state variables, used later in this chapter
    results = [(x1, x2)]
    for i in range(20):
        x1, x2, s1, s2 = trainer(x1, x2, s1, s2)
        results.append((x1, x2))
    print('epoch %d, x1 %f, x2 %f' % (i + 1, x1, x2))
    return results

def show_trace_2d(f, results):  # this function is saved in the d2lzh_tensorflow2 package for later use
    plt.plot(*zip(*results), '-o', color='#ff7f0e')
    x1, x2 = np.meshgrid(np.arange(-5.5, 1.0, 0.1), np.arange(-3.0, 1.0, 0.1))
    plt.contour(x1, x2, f(x1, x2), colors='#1f77b4')
    plt.xlabel('x1')
    plt.ylabel('x2')
    plt.show()

def rmsprop_2d(x1, x2, s1, s2):
    g1, g2, eps = 0.2 * x1, 4 * x2, 1e-6
    s1 = gamma * s1 + (1 - gamma) * g1 ** 2
    s2 = gamma * s2 + (1 - gamma) * g2 ** 2
    x1 -= eta / math.sqrt(s1 + eps) * g1
    x2 -= eta / math.sqrt(s2 + eps) * g2
    return x1, x2, s1, s2

def f_2d(x1, x2):
    return 0.1 * x1 ** 2 + 2 * x2 ** 2

eta, gamma = 0.4, 0.9
show_trace_2d(f_2d, train_2d(rmsprop_2d))
Implementation by hand:
import numpy as np
import time
import tensorflow as tf
from matplotlib import pyplot as plt
from IPython import display

def get_data_ch7():
    data = np.genfromtxt('/Users/ren/PycharmProjects/入組學習/動手學深度學習/data/airfoil_self_noise.dat', delimiter='\t')
    # Standardize
    data = (data - data.mean(axis=0)) / data.std(axis=0)
    return (tf.convert_to_tensor(data[:1500, :-1], dtype=tf.float32),
            tf.convert_to_tensor(data[:1500, -1], dtype=tf.float32))

features, labels = get_data_ch7()

def init_rmsprop_states():
    # tf.Variables, so the state persists across update calls
    s_w = tf.Variable(tf.zeros((features.shape[1], 1), dtype=tf.float32))
    s_b = tf.Variable(tf.zeros(1, dtype=tf.float32))
    return (s_w, s_b)

def rmsprop(params, states, hyperparams, grads):
    gamma, eps = hyperparams['gamma'], 1e-6
    for i, (p, s) in enumerate(zip(params, states)):
        s.assign(gamma * s + (1 - gamma) * grads[i] ** 2)
        p.assign_sub(hyperparams['lr'] * grads[i] / tf.sqrt(s + eps))

def use_svg_display():
    """Use svg format to display plot in jupyter"""
    display.set_matplotlib_formats('svg')

def set_figsize(figsize=(3.5, 2.5)):
    use_svg_display()
    # Set the figure size
    plt.rcParams['figure.figsize'] = figsize

# Initialize a linear regression model
def linreg(X, w, b):
    return tf.matmul(X, w) + b

def squared_loss(y_hat, y):
    # Note that a vector is returned here; also, MSELoss in pytorch does not divide by 2
    return (y_hat - tf.reshape(y, y_hat.shape)) ** 2 / 2

def train_ch7(optimizer_fn, states, hyperparams, features, labels,
              batch_size=10, num_epochs=2):
    # Initialize the model
    net, loss = linreg, squared_loss
    w = tf.Variable(np.random.normal(0, 0.01, size=(features.shape[1], 1)), dtype=tf.float32)
    b = tf.Variable(tf.zeros(1, dtype=tf.float32))

    def eval_loss():
        return np.array(tf.reduce_mean(loss(net(features, w, b), labels)))

    ls = [eval_loss()]
    data_iter = tf.data.Dataset.from_tensor_slices((features, labels)).batch(batch_size)
    data_iter = data_iter.shuffle(100)
    for _ in range(num_epochs):
        start = time.time()
        for batch_i, (X, y) in enumerate(data_iter):
            with tf.GradientTape() as tape:
                l = tf.reduce_mean(loss(net(X, w, b), y))  # use the mean loss
            grads = tape.gradient(l, [w, b])
            optimizer_fn([w, b], states, hyperparams, grads)  # update the model parameters
            if (batch_i + 1) * batch_size % 100 == 0:
                ls.append(eval_loss())  # record the training loss every 100 examples
    # Print the result and plot
    print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
    set_figsize()
    plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.show()

train_ch7(rmsprop, init_rmsprop_states(), {'lr': 0.01, 'gamma': 0.9},
          features, labels)
Concise implementation (with the optimizer named RMSprop, we can train the model with the RMSProp algorithm provided by TensorFlow 2. Note that the hyperparameter γ is specified via the rho argument):
import numpy as np
import time
from IPython import display
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow.keras import optimizers

trainer = optimizers.RMSprop(learning_rate=0.01, rho=0.9)

def use_svg_display():
    """Use svg format to display plot in jupyter"""
    display.set_matplotlib_formats('svg')

def set_figsize(figsize=(3.5, 2.5)):
    use_svg_display()
    # Set the figure size
    plt.rcParams['figure.figsize'] = figsize

def get_data_ch7():
    data = np.genfromtxt('/Users/ren/PycharmProjects/入組學習/動手學深度學習/data/airfoil_self_noise.dat', delimiter='\t')
    # Standardize
    data = (data - data.mean(axis=0)) / data.std(axis=0)
    return (tf.convert_to_tensor(data[:1500, :-1], dtype=tf.float32),
            tf.convert_to_tensor(data[:1500, -1], dtype=tf.float32))

def train_tensorflow2_ch7(trainer, trainer_hyperparams, features, labels,
                          batch_size=10, num_epochs=2):
    # Initialize the model
    net = tf.keras.Sequential()
    net.add(tf.keras.layers.Dense(1))
    loss = tf.losses.MeanSquaredError()

    def eval_loss():
        return np.array(tf.reduce_mean(loss(net(features), labels)))

    ls = [eval_loss()]
    data_iter = tf.data.Dataset.from_tensor_slices((features, labels)).batch(batch_size)
    data_iter = data_iter.shuffle(100)
    # The trainer instance updates the model parameters
    for _ in range(num_epochs):
        start = time.time()
        for batch_i, (X, y) in enumerate(data_iter):
            with tf.GradientTape() as tape:
                l = tf.reduce_mean(loss(net(X), y))  # use the mean loss
            grads = tape.gradient(l, net.trainable_variables)
            trainer.apply_gradients(zip(grads, net.trainable_variables))  # update the model parameters
            if (batch_i + 1) * batch_size % 100 == 0:
                ls.append(eval_loss())  # record the training loss every 100 examples
    # Print the result and plot
    print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
    set_figsize()
    plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.show()

features, labels = get_data_ch7()
train_tensorflow2_ch7(trainer, {'lr': 0.01, 'rho': 0.9},
                      features, labels)
Summary:
- RMSProp differs from AdaGrad in that it adjusts the learning rate using an exponentially weighted moving average of the element-wise squared mini-batch stochastic gradients.
The AdaDelta Algorithm
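AdaDelta also builds on AdaGrad, but it has no learning-rate hyperparameter at all: besides the EWMA s of squared gradients, it keeps a second EWMA Δ of squared parameter updates, and each step uses sqrt((Δ + ε) / (s + ε)) · g in place of η · g / sqrt(s + ε). A hedged one-dimensional sketch (the helper name and the toy objective f(x) = x² are my own for illustration, not from the source):

```python
import math

def adadelta_1d(grad_fn, x=10.0, rho=0.9, eps=1e-5, steps=200):
    s = delta = 0.0
    for _ in range(steps):
        g = grad_fn(x)
        s = rho * s + (1 - rho) * g * g
        step = math.sqrt((delta + eps) / (s + eps)) * g  # no learning rate needed
        x -= step
        delta = rho * delta + (1 - rho) * step * step
    return x

print(adadelta_1d(lambda x: 2 * x))  # drifts toward the minimum of f(x) = x**2 at 0
```

Early steps are tiny (Δ starts at 0, so the ratio is near sqrt(ε/s)) and then grow as Δ accumulates, which is AdaDelta's characteristic slow start.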
Implementation from scratch:
import numpy as np
import time
import tensorflow as tf
from matplotlib import pyplot as plt
from IPython import display

def get_data_ch7():
    data = np.genfromtxt('/Users/ren/PycharmProjects/入組學習/動手學深度學習/data/airfoil_self_noise.dat', delimiter='\t')
    # Standardize
    data = (data - data.mean(axis=0)) / data.std(axis=0)
    return (tf.convert_to_tensor(data[:1500, :-1], dtype=tf.float32),
            tf.convert_to_tensor(data[:1500, -1], dtype=tf.float32))

features, labels = get_data_ch7()

def init_adadelta_states():
    # NumPy arrays, updated in place below so the state persists across calls
    s_w = np.zeros((features.shape[1], 1), dtype=np.float32)
    s_b = np.zeros(1, dtype=np.float32)
    delta_w = np.zeros((features.shape[1], 1), dtype=np.float32)
    delta_b = np.zeros(1, dtype=np.float32)
    return ((s_w, delta_w), (s_b, delta_b))

def adadelta(params, states, hyperparams, grads):
    rho, eps = hyperparams['rho'], 1e-5
    for i, (p, (s, delta)) in enumerate(zip(params, states)):
        g = grads[i].numpy()
        s[:] = rho * s + (1 - rho) * g ** 2
        g_prime = np.sqrt((delta + eps) / (s + eps)) * g  # rescaled update, no learning rate
        p.assign_sub(g_prime)
        delta[:] = rho * delta + (1 - rho) * g_prime ** 2

def use_svg_display():
    """Use svg format to display plot in jupyter"""
    display.set_matplotlib_formats('svg')

def set_figsize(figsize=(3.5, 2.5)):
    use_svg_display()
    # Set the figure size
    plt.rcParams['figure.figsize'] = figsize

# Initialize a linear regression model
def linreg(X, w, b):
    return tf.matmul(X, w) + b

def squared_loss(y_hat, y):
    # Note that a vector is returned here; also, MSELoss in pytorch does not divide by 2
    return (y_hat - tf.reshape(y, y_hat.shape)) ** 2 / 2

def train_ch7(optimizer_fn, states, hyperparams, features, labels,
              batch_size=10, num_epochs=2):
    # Initialize the model
    net, loss = linreg, squared_loss
    w = tf.Variable(np.random.normal(0, 0.01, size=(features.shape[1], 1)), dtype=tf.float32)
    b = tf.Variable(tf.zeros(1, dtype=tf.float32))

    def eval_loss():
        return np.array(tf.reduce_mean(loss(net(features, w, b), labels)))

    ls = [eval_loss()]
    data_iter = tf.data.Dataset.from_tensor_slices((features, labels)).batch(batch_size)
    data_iter = data_iter.shuffle(100)
    for _ in range(num_epochs):
        start = time.time()
        for batch_i, (X, y) in enumerate(data_iter):
            with tf.GradientTape() as tape:
                l = tf.reduce_mean(loss(net(X, w, b), y))  # use the mean loss
            grads = tape.gradient(l, [w, b])
            optimizer_fn([w, b], states, hyperparams, grads)  # update the model parameters
            if (batch_i + 1) * batch_size % 100 == 0:
                ls.append(eval_loss())  # record the training loss every 100 examples
    # Print the result and plot
    print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
    set_figsize()
    plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.show()

train_ch7(adadelta, init_adadelta_states(), {'rho': 0.9}, features, labels)
Concise implementation (with the optimizer named Adadelta, we can use the AdaDelta algorithm provided by TensorFlow 2):
import numpy as np
import time
import sys
from IPython import display
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow.keras import optimizers
trainer = optimizers.Adadelta(learning_rate=0.01,rho=0.9)
eta = 0.6  # learning rate (unused below; the Keras optimizer carries its own)
def use_svg_display():
    """Use svg format to display plot in jupyter"""
    display.set_matplotlib_formats('svg')
def set_figsize(figsize=(3.5, 2.5)):
    use_svg_display()
    # set the figure size
    plt.rcParams['figure.figsize'] = figsize
# initialize a linear regression model
def linreg(X, w, b):
    return tf.matmul(X, w) + b
def squared_loss(y_hat, y):
    # note: this returns a vector; also, PyTorch's MSELoss does not divide by 2
    return (y_hat - tf.reshape(y, y_hat.shape)) ** 2 / 2
def get_data_ch7():
    data = np.genfromtxt('/Users/ren/PycharmProjects/入組學習/動手學深度學習/data/airfoil_self_noise.dat', delimiter='\t')
    # standardize
    data = (data - data.mean(axis=0)) / data.std(axis=0)
    return tf.convert_to_tensor(data[:1500, :-1], dtype=tf.float32), tf.convert_to_tensor(data[:1500, -1], dtype=tf.float32)
def train_tensorflow2_ch7(trainer_name, trainer_hyperparams, features, labels,
                          batch_size=10, num_epochs=2):
    # initialize the model
    net = tf.keras.Sequential()
    net.add(tf.keras.layers.Dense(1))
    loss = tf.losses.MeanSquaredError()
    def eval_loss():
        return np.array(tf.reduce_mean(loss(net(features), labels)))
    ls = [eval_loss()]
    data_iter = tf.data.Dataset.from_tensor_slices((features, labels)).batch(batch_size)
    data_iter = data_iter.shuffle(100)
    # use the Trainer instance to update the model parameters
    for _ in range(num_epochs):
        start = time.time()
        for batch_i, (X, y) in enumerate(data_iter):
            with tf.GradientTape() as tape:
                l = tf.reduce_mean(loss(net(X), y))  # use the mean loss
            grads = tape.gradient(l, net.trainable_variables)
            trainer.apply_gradients(zip(grads, net.trainable_variables))  # update the model parameters
            if (batch_i + 1) * batch_size % 100 == 0:
                ls.append(eval_loss())  # record the current training loss every 100 examples
    # print results and plot
    print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
    set_figsize()
    plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.show()
features, labels = get_data_ch7()
train_tensorflow2_ch7(trainer, {'learning_rate': 0.01, 'rho': 0.9},
                      features, labels)
The Adam algorithm
As the formulas above show, Adam replaces the gradient used in RMSProp's update of x with the momentum variable v from the momentum method (an exponentially weighted average over roughly the last 1/(1 − γ) gradients).
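Spelled out in full, this is the standard Adam update (g_t is the mini-batch gradient at timestep t, η the learning rate, ⊙ the elementwise product), which the from-scratch code that follows implements:

```latex
\begin{aligned}
v_t &= \beta_1 v_{t-1} + (1-\beta_1)\,g_t \\
s_t &= \beta_2 s_{t-1} + (1-\beta_2)\,g_t \odot g_t \\
\hat{v}_t &= \frac{v_t}{1-\beta_1^{\,t}}, \qquad
\hat{s}_t = \frac{s_t}{1-\beta_2^{\,t}} \\
x_t &= x_{t-1} - \frac{\eta\,\hat{v}_t}{\sqrt{\hat{s}_t}+\epsilon}
\end{aligned}
```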
Implementation from scratch:
import numpy as np
import math
import sys
from matplotlib import pyplot as plt
import tensorflow as tf
from IPython import display
import time
def get_data_ch7():
    data = np.genfromtxt('/Users/ren/PycharmProjects/入組學習/動手學深度學習/data/airfoil_self_noise.dat', delimiter='\t')
    # standardize
    data = (data - data.mean(axis=0)) / data.std(axis=0)
    return tf.convert_to_tensor(data[:1500, :-1], dtype=tf.float32), tf.convert_to_tensor(data[:1500, -1], dtype=tf.float32)
features, labels = get_data_ch7()
def init_adam_states():
    v_w, v_b = np.zeros((features.shape[1], 1), dtype=float), np.zeros(1, dtype=float)
    s_w, s_b = np.zeros((features.shape[1], 1), dtype=float), np.zeros(1, dtype=float)
    return ((v_w, s_w), (v_b, s_b))
def adam(params, states, hyperparams, grads):
    beta1, beta2, eps, i = 0.9, 0.999, 1e-6, 0
    for p, (v, s) in zip(params, states):
        v[:] = beta1 * v + (1 - beta1) * grads[i]        # EWMA of gradients (momentum)
        s[:] = beta2 * s + (1 - beta2) * grads[i] ** 2   # EWMA of squared gradients
        v_bias_corr = v / (1 - beta1 ** hyperparams['t'])  # bias correction
        s_bias_corr = s / (1 - beta2 ** hyperparams['t'])
        p.assign_sub(hyperparams['lr'] * v_bias_corr / (np.sqrt(s_bias_corr) + eps))
        i += 1
    hyperparams['t'] += 1  # advance the timestep shared by all parameters
def use_svg_display():
    """Use svg format to display plot in jupyter"""
    display.set_matplotlib_formats('svg')
def set_figsize(figsize=(3.5, 2.5)):
    use_svg_display()
    # set the figure size
    plt.rcParams['figure.figsize'] = figsize
# initialize a linear regression model
def linreg(X, w, b):
    return tf.matmul(X, w) + b
def squared_loss(y_hat, y):
    # note: this returns a vector; also, PyTorch's MSELoss does not divide by 2
    return (y_hat - tf.reshape(y, y_hat.shape)) ** 2 / 2
def train_ch7(optimizer_fn, states, hyperparams, features, labels,
              batch_size=10, num_epochs=2):
    # initialize the model
    net, loss = linreg, squared_loss
    w = tf.Variable(np.random.normal(0, 0.01, size=(features.shape[1], 1)), dtype=tf.float32)
    b = tf.Variable(tf.zeros(1, dtype=tf.float32))
    def eval_loss():
        return np.array(tf.reduce_mean(loss(net(features, w, b), labels)))
    ls = [eval_loss()]
    data_iter = tf.data.Dataset.from_tensor_slices((features, labels)).batch(batch_size)
    data_iter = data_iter.shuffle(100)
    for _ in range(num_epochs):
        start = time.time()
        for batch_i, (X, y) in enumerate(data_iter):
            with tf.GradientTape() as tape:
                l = tf.reduce_mean(loss(net(X, w, b), y))  # use the mean loss
            grads = tape.gradient(l, [w, b])
            optimizer_fn([w, b], states, hyperparams, grads)  # update the model parameters
            if (batch_i + 1) * batch_size % 100 == 0:
                ls.append(eval_loss())  # record the current training loss every 100 examples
    # print results and plot
    print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
    set_figsize()
    plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.show()
train_ch7(adam, init_adam_states(), {'lr': 0.01, 't': 1}, features, labels)
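Stripped of the training loop, the same Adam step can be sketched with NumPy alone on a toy quadratic f(x) = x². The function name `adam_step` and the toy objective are illustrative, not from the book; the hyperparameters mirror the from-scratch code above.

```python
import numpy as np

def adam_step(p, g, v, s, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-6):
    v[:] = beta1 * v + (1 - beta1) * g       # EWMA of gradients (momentum)
    s[:] = beta2 * s + (1 - beta2) * g ** 2  # EWMA of squared gradients
    v_hat = v / (1 - beta1 ** t)             # bias-corrected estimates
    s_hat = s / (1 - beta2 ** t)
    p -= lr * v_hat / (np.sqrt(s_hat) + eps)

p = np.array([1.0])
v, s = np.zeros(1), np.zeros(1)
for t in range(1, 201):  # t starts at 1 so the bias correction is well-defined
    g = 2 * p            # gradient of the toy objective f(x) = x**2
    adam_step(p, g, v, s, t)
```

Because the update divides by the square root of the corrected second moment, the effective step size is roughly `lr` regardless of the raw gradient scale.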
Concise implementation (with a Trainer instance named "adam", we can use the Adam algorithm provided by TensorFlow 2):
import numpy as np
import time
import sys
from IPython import display
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow.keras import optimizers
trainer = optimizers.Adam(learning_rate=0.01)
eta = 0.6  # learning rate (unused below; the Keras optimizer carries its own)
def use_svg_display():
    """Use svg format to display plot in jupyter"""
    display.set_matplotlib_formats('svg')
def set_figsize(figsize=(3.5, 2.5)):
    use_svg_display()
    # set the figure size
    plt.rcParams['figure.figsize'] = figsize
# initialize a linear regression model
def linreg(X, w, b):
    return tf.matmul(X, w) + b
def squared_loss(y_hat, y):
    # note: this returns a vector; also, PyTorch's MSELoss does not divide by 2
    return (y_hat - tf.reshape(y, y_hat.shape)) ** 2 / 2
def get_data_ch7():
    data = np.genfromtxt('/Users/ren/PycharmProjects/入組學習/動手學深度學習/data/airfoil_self_noise.dat', delimiter='\t')
    # standardize
    data = (data - data.mean(axis=0)) / data.std(axis=0)
    return tf.convert_to_tensor(data[:1500, :-1], dtype=tf.float32), tf.convert_to_tensor(data[:1500, -1], dtype=tf.float32)
def train_tensorflow2_ch7(trainer_name, trainer_hyperparams, features, labels,
                          batch_size=10, num_epochs=2):
    # initialize the model
    net = tf.keras.Sequential()
    net.add(tf.keras.layers.Dense(1))
    loss = tf.losses.MeanSquaredError()
    def eval_loss():
        return np.array(tf.reduce_mean(loss(net(features), labels)))
    ls = [eval_loss()]
    data_iter = tf.data.Dataset.from_tensor_slices((features, labels)).batch(batch_size)
    data_iter = data_iter.shuffle(100)
    # use the Trainer instance to update the model parameters
    for _ in range(num_epochs):
        start = time.time()
        for batch_i, (X, y) in enumerate(data_iter):
            with tf.GradientTape() as tape:
                l = tf.reduce_mean(loss(net(X), y))  # use the mean loss
            grads = tape.gradient(l, net.trainable_variables)
            trainer.apply_gradients(zip(grads, net.trainable_variables))  # update the model parameters
            if (batch_i + 1) * batch_size % 100 == 0:
                ls.append(eval_loss())  # record the current training loss every 100 examples
    # print results and plot
    print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
    set_figsize()
    plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.show()
features, labels = get_data_ch7()
train_tensorflow2_ch7(trainer, {'learning_rate': 0.01},
                      features, labels)
Summary:
- On top of RMSProp, Adam additionally keeps an exponentially weighted moving average of the mini-batch stochastic gradient itself.
- Adam applies bias correction to both moving averages.
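The effect of bias correction is easy to see numerically. Early in training the EWMA is biased toward its zero initialization; with a constant squared gradient, the corrected estimate recovers the true value exactly (the constant 4.0 below is an illustrative choice):

```python
beta2 = 0.999
s = 0.0
g2 = 4.0  # constant squared gradient (illustrative value)
for t in range(1, 4):
    s = beta2 * s + (1 - beta2) * g2  # raw EWMA, biased toward 0 for small t
    s_hat = s / (1 - beta2 ** t)      # bias-corrected estimate
    # s stays tiny (about 0.004-0.012), while s_hat equals 4.0 (up to rounding)
```

Without the correction, Adam's early steps would divide by an artificially small √s and overshoot.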
Conclusion:
Each of these optimization algorithms has its own strengths; the ones introduced later are not necessarily better. When choosing one, start with the algorithm you understand best, since knowing its principles makes hyperparameter tuning easier. If the results are still unsatisfactory, try another algorithm.