Constructor signature:
torch.optim.SGD(params, lr=<required parameter>, momentum=0, dampening=0, weight_decay=0, nesterov=False)
momentum: momentum factor
dampening: dampening factor applied to the gradient
weight_decay: L2 regularization coefficient
nesterov: whether to use Nesterov momentum
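A minimal usage sketch (the model, data, and hyperparameter values here are made up for illustration):

import torch

model = torch.nn.Linear(10, 1)  # a toy model, just for illustration
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)

x, y = torch.randn(4, 10), torch.randn(4, 1)
loss = torch.nn.functional.mse_loss(model(x), y)

optimizer.zero_grad()  # clear stale gradients
loss.backward()        # fill in p.grad for every parameter
optimizer.step()       # apply the update implemented below

The core of the update logic lives in the step method: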
def step(self, closure=None):
    """Performs a single optimization step.

    Arguments:
        closure (callable, optional): A closure that reevaluates the model
            and returns the loss.
    """
    loss = None
    if closure is not None:
        loss = closure()

    for group in self.param_groups:
        weight_decay = group['weight_decay']  # weight-decay coefficient
        momentum = group['momentum']          # momentum factor, e.g. 0.9 or 0.8
        dampening = group['dampening']        # dampening factor for the gradient
        nesterov = group['nesterov']          # whether to use Nesterov momentum

        for p in group['params']:
            if p.grad is None:
                continue
            d_p = p.grad.data
            if weight_decay != 0:  # apply regularization
                # add_ modifies in place: d_p = d_p + weight_decay * p.data
                d_p.add_(weight_decay, p.data)
            if momentum != 0:
                param_state = self.state[p]  # previously accumulated state, v(t-1)
                # accumulate the momentum buffer
                if 'momentum_buffer' not in param_state:
                    buf = param_state['momentum_buffer'] = torch.clone(d_p).detach()
                else:
                    # the previous momentum
                    buf = param_state['momentum_buffer']
                    # buf = buf * momentum + (1 - dampening) * d_p
                    buf.mul_(momentum).add_(1 - dampening, d_p)
                if nesterov:  # use Nesterov momentum
                    # d_p = d_p + momentum * buf
                    d_p = d_p.add(momentum, buf)
                else:
                    d_p = buf
            # p = p - lr * d_p
            p.data.add_(-group['lr'], d_p)
    return loss
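For completeness, the closure form of step can be used like this (a sketch reusing the illustrative model, data, and optimizer from above):

def closure():
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    return loss

loss = optimizer.step(closure)  # step re-evaluates the loss via the closure

For plain SGD the closure is optional; it mainly matters for optimizers such as LBFGS that need to re-evaluate the loss several times per step.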
PyTorch's SGD therefore implements the following update:

buf = momentum * buf + (1 - dampening) * d_p
p = p - lr * buf

That is, in the code:

d_p = d_p + weight_decay * p.data  # weight decay; in effect this is L2 regularization
buf = buf * momentum + (1 - dampening) * d_p  # momentum accumulation, i.e. v

If Nesterov momentum is used:

d_p = d_p + momentum * buf

otherwise d_p = buf.

Finally, the parameters are updated:

p = p - lr * d_p
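The first step of this update is easy to verify by hand. A minimal sketch, assuming a single scalar parameter and momentum = 0.9 (all names are illustrative): on the first call the momentum buffer is initialized to a clone of the gradient, so buf = g and the update reduces to p = p - lr * g.

import torch

p = torch.nn.Parameter(torch.tensor([1.0]))
opt = torch.optim.SGD([p], lr=0.1, momentum=0.9)

g = torch.tensor([0.5])
p.grad = g.clone()

expected = p.data - 0.1 * g  # first step: buf = g, so p = p - lr * g
opt.step()
print(torch.allclose(p.data, expected))  # True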
A note on where the L2 regularization happens: with weight decay, the objective becomes

loss = loss_0 + (weight_decay / 2) * ||p||^2

Differentiating gives

d(loss)/dp = d(loss_0)/dp + weight_decay * p

The first term on the right-hand side has already been computed by loss.backward(); the second term is exactly the regularization step in the code, and once it has been added the parameter update proceeds. weight_decay is the regularization coefficient λ.
In plain SGD without momentum, weight decay and L2 regularization are equivalent.
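This equivalence can be checked numerically. A sketch assuming momentum = 0, comparing an explicit L2 penalty against the optimizer's weight_decay (the values are illustrative):

import torch

wd, lr = 0.1, 0.01
w1 = torch.nn.Parameter(torch.tensor([2.0]))
w2 = torch.nn.Parameter(torch.tensor([2.0]))

# route 1: add the L2 penalty to the loss explicitly
loss = 0.5 * wd * (w2 ** 2).sum()
loss.backward()                      # grad = wd * w2
torch.optim.SGD([w2], lr=lr).step()

# route 2: let the optimizer apply weight_decay; base gradient is zero
w1.grad = torch.zeros_like(w1)
torch.optim.SGD([w1], lr=lr, weight_decay=wd).step()

print(torch.allclose(w1, w2))  # True: both compute w - lr * wd * w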
One more thing to note here: d_p = p.grad.data does not make a copy, so in-place modifications of d_p directly change the data of p.grad.
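A small sketch of that aliasing side effect:

import torch

p = torch.nn.Parameter(torch.tensor([1.0]))
p.grad = torch.tensor([0.5])

d_p = p.grad.data  # no copy: d_p shares storage with p.grad
d_p.add_(1.0)      # in-place addition

print(p.grad)      # tensor([1.5]) -- p.grad was modified as well

This is why the weight-decay branch, which calls d_p.add_ in place, also mutates p.grad, while the Nesterov branch uses the out-of-place d_p.add(...) and leaves p.grad untouched.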