Applying Constraints or Regularization Terms to Learnable Parameters in PyTorch

Depending on the task, you sometimes need to apply custom constraints or regularization terms to a model's learnable parameters in PyTorch. This post walks through how to do that. First, recall what a loss function with a regularization term looks like; with an L2 term it takes the following form:

Loss = L(w) + \lambda \sum_{i} w_{i}^{2}

Here L(w) is the training loss as a function of the learnable parameters w, and the second term on the right is the L2 regularizer. PyTorch's optimizers implement L2 regularization out of the box; the `weight_decay` argument controls the strength of the penalty, playing the role of the \lambda hyperparameter above. For example:

optimizer = optim.SGD(net.parameters(), lr=0.01, weight_decay=0.01)
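Note that `torch.optim.SGD` applies `weight_decay` by adding `weight_decay * w` directly to the gradient, which is equivalent to adding 0.5 * weight_decay * ||w||^2 to the loss. The following minimal sketch verifies this equivalence (the tensor `w`, the coefficient `lam`, and the quadratic toy loss are illustrative choices, not part of any standard API):

import torch

w = torch.randn(3, requires_grad=True)
lam = 0.01

# Manual penalty: the gradient of 0.5 * lam * ||w||^2 is lam * w.
loss = (w ** 2).sum() + 0.5 * lam * (w ** 2).sum()
loss.backward()
manual_grad = w.grad.clone()

# What SGD(weight_decay=lam) does internally: add lam * w to the bare gradient.
w.grad = None
bare_loss = (w ** 2).sum()
bare_loss.backward()
decay_grad = w.grad + lam * w.detach()

print(torch.allclose(manual_grad, decay_grad))  # True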

But what if you need a regularization term of your own? A complete example follows:

import torch

torch.manual_seed(1)

N, D_in, H, D_out = 10, 5, 5, 1
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)

criterion = torch.nn.MSELoss()
lr = 1e-4
weight_decay = 0  # disable the optimizer's built-in L2 term
lmbd = 0.9  # strength of the custom L2 regularization below

optimizer = torch.optim.SGD(model.parameters(), lr=lr, weight_decay=weight_decay)

for t in range(100):
    y_pred = model(x)

    # Compute the data loss.
    loss = criterion(y_pred, y)

    optimizer.zero_grad()

    # Custom L2 regularization: 0.5 * ||w||^2 summed over all parameters.
    # (torch.sum(param ** 2) and param.norm(2) ** 2 are equivalent here.)
    reg_loss = 0.0
    for param in model.parameters():
        reg_loss = reg_loss + 0.5 * param.norm(2) ** 2

    loss = loss + lmbd * reg_loss

    loss.backward()

    optimizer.step()

for name, param in model.named_parameters():
    print(name, param)
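In practice, the penalty is often applied only to weight matrices and not to biases. A minimal sketch of that variant, reusing `model` and `lmbd` from the example above (the `name.endswith('weight')` test relies on the parameter naming of this particular Sequential model):

reg_loss = 0.0
for name, param in model.named_parameters():
    if name.endswith('weight'):  # regularize weights only, skip biases
        reg_loss = reg_loss + 0.5 * param.norm(2) ** 2
loss = loss + lmbd * reg_loss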

In the code above, the following section can be adapted to implement any custom regularization term:

reg_loss = 0.0
for param in model.parameters():
    reg_loss = reg_loss + 0.5 * param.norm(2) ** 2
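For instance, swapping the squared norm for a sum of absolute values yields an L1 (sparsity-inducing) penalty:

# Custom L1 regularization: sum of absolute parameter values.
reg_loss = 0.0
for param in model.parameters():
    reg_loss = reg_loss + param.abs().sum()
loss = loss + lmbd * reg_loss

Hard constraints, as opposed to soft penalties, can be enforced by modifying the parameters in place after each optimizer step. A minimal sketch that clamps every parameter to [-1, 1] (the bounds are arbitrary, chosen only for illustration):

optimizer.step()
with torch.no_grad():  # bypass autograd for the in-place projection
    for param in model.parameters():
        param.clamp_(-1.0, 1.0)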

References:

1. How does one implement Weight regularization (l1 or l2) manually without optimum?

2. torch.norm

3. How to add a L2 regularization term in my loss function?
