Deep Learning from Scratch 0616 — Getting Started with PyTorch: GAN + dynamic torch + GPU (CUDA) + dropout + BN


#-------------------------------conditional GAN-----------------
###################################################################################################################################

 

Reference: Baidu Baike

https://baike.baidu.com/item/Gan/22181905?fr=aladdin

 

A brief introduction to GANs

The framework produces remarkably good output through the adversarial learning of (at least) two modules: a generative model (Generative Model) and a discriminative model (Discriminative Model). The original GAN theory does not require G and D to be neural networks; they only need to be functions that can fit the corresponding generation and discrimination mappings. In practice, however, deep neural networks are generally used for both G and D. A good GAN application also needs a sound training procedure, otherwise the freedom of the neural network models can lead to unsatisfactory output.

 

Machine learning models can be roughly divided into two classes: generative models (Generative Model) and discriminative models (Discriminative Model). A discriminative model takes an input variable and uses some model to predict a target. A generative model, given some latent information, randomly produces observed data. A simple example:

  • Discriminative model: given an image, decide whether the animal in it is a cat or a dog
  • Generative model: given a collection of cat images, generate a new cat image (one that is not in the dataset)

For a discriminative model the loss function is easy to define, because the output target is relatively simple. For a generative model, defining the loss is much harder: our expectation of the generated result is usually vague and hard to formalise mathematically. So we might as well hand the feedback for the generative model over to a discriminative model. This is how Goodfellow tightly coupled the two major families of machine learning models, Generative and Discriminative.

The basic principle of a GAN is actually very simple; image generation is used here as the example. Suppose we have two networks, G (Generator) and D (Discriminator). As their names suggest, their roles are:

  • G is a network that generates images: it receives a random noise vector z and generates an image from it, denoted G(z).
  • D is a discriminator network that judges whether an image is real. Its input is x, an image, and its output D(x) is the probability that x is a real image: 1 means it is certainly real, 0 means it cannot possibly be real.

During training, the goal of the generator G is to produce images realistic enough to fool the discriminator D, while the goal of D is to tell G's generated images apart from the real ones. G and D therefore form a dynamic game.

What is the final outcome of this game? In the ideal case, G can produce images G(z) that pass for real ones. D then cannot tell whether an image generated by G is real or not, so D(G(z)) = 0.5.

At that point our goal is achieved: we have obtained a generative model G that can be used to generate images.

 

 

Main applications:

Image generation, e.g. super-resolution and semantic segmentation; image-to-image translation; cartoon character generation; human pose generation; age transformation; style transfer

Image enhancement

 

…….

 

Understanding the GAN loss function

https://www.jianshu.com/p/588318e69eae

https://www.cnblogs.com/walter-xh/p/10051634.html

 

The main objective is to solve min_G max_D.

The two networks play against each other, and this adversarial competition drives both toward good results; the value function is written out below.
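For reference (this is the standard formulation from Goodfellow et al., not something taken from the two posts above), the value function of the minimax game can be written as:

min_G max_D V(D, G) = E_{x ~ p_data(x)}[ log D(x) ] + E_{z ~ p_z(z)}[ log(1 - D(G(z))) ]

In the code below, D_loss is the negative of a mini-batch estimate of this value function, so minimising D_loss with respect to D maximises V; G_loss keeps only the second term and is minimised with respect to G.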

 

 

Complete code:

The training loop runs for 10000 iterations; from random input points, the generator learns to draw curves that fall between the upper (blue) and lower (red) bound curves.

 

 


import torch

import torch.nn as nn

import numpy as np

import matplotlib.pyplot as plt



# torch.manual_seed(1)    # reproducible

# np.random.seed(1)



# Hyper Parameters

BATCH_SIZE = 64

LR_G = 0.0001           # learning rate for generator

LR_D = 0.0001           # learning rate for discriminator

N_IDEAS = 5             # think of this as number of ideas for generating an art work (Generator)

ART_COMPONENTS = 15     # it could be total point G can draw in the canvas

PAINT_POINTS = np.vstack([np.linspace(-1, 1, ART_COMPONENTS) for _ in range(BATCH_SIZE)])



# show our beautiful painting range

plt.plot(PAINT_POINTS[0], 2 * np.power(PAINT_POINTS[0], 2) + 1, c='#74BCFF', lw=3, label='upper bound')

plt.plot(PAINT_POINTS[0], 1 * np.power(PAINT_POINTS[0], 2) + 0, c='#FF9359', lw=3, label='lower bound')

plt.legend(loc='upper right')

plt.show()





def artist_works_with_labels():     # painting from the famous artist (real target)

    a = np.random.uniform(1, 2, size=BATCH_SIZE)[:, np.newaxis]

    paintings = a * np.power(PAINT_POINTS, 2) + (a-1)

    labels = (a-1) > 0.5            # upper paintings (1), lower paintings (0), two classes

    paintings = torch.from_numpy(paintings).float()

    labels = torch.from_numpy(labels.astype(np.float32))

    return paintings, labels





G = nn.Sequential(                      # Generator

    nn.Linear(N_IDEAS+1, 128),          # random ideas (could from normal distribution) + class label

    nn.ReLU(),

    nn.Linear(128, ART_COMPONENTS),     # making a painting from these random ideas

)



D = nn.Sequential(                      # Discriminator

    nn.Linear(ART_COMPONENTS+1, 128),   # receive art work either from the famous artist or a newbie like G with label

    nn.ReLU(),

    nn.Linear(128, 1),

    nn.Sigmoid(),                       # tell the probability that the art work is made by artist

)



opt_D = torch.optim.Adam(D.parameters(), lr=LR_D)

opt_G = torch.optim.Adam(G.parameters(), lr=LR_G)



plt.ion()   # something about continuous plotting



for step in range(10000):

    artist_paintings, labels = artist_works_with_labels()           # real paintings and their labels from the artist



    G_ideas = torch.randn(BATCH_SIZE, N_IDEAS)                      # random ideas: a (BATCH_SIZE, N_IDEAS) batch of random noise

    G_inputs = torch.cat((G_ideas, labels), 1)                      # ideas with labels: concatenate the noise and the real labels along dim 1

    G_paintings = G(G_inputs)                                       # fake paintings conditioned on the labels, produced by G



    D_inputs0 = torch.cat((artist_paintings, labels), 1)            # real paintings, each with its label

    D_inputs1 = torch.cat((G_paintings, labels), 1)                 # generated paintings, each with its label

    prob_artist0 = D(D_inputs0)                 # D tries to increase this prob (real data + labels fed to the discriminator)

    prob_artist1 = D(D_inputs1)                 # D tries to reduce this prob (generated data + labels fed to the discriminator)



    D_score0 = torch.log(prob_artist0)          # maximise this for D: discriminator score on real data

    D_score1 = torch.log(1. - prob_artist1)     # maximise this for D: discriminator score on generated data

    D_loss = - torch.mean(D_score0 + D_score1)  # minimise the negative of the two scores above: a small D_loss forces prob_artist0 up and prob_artist1 down

    G_loss = torch.mean(D_score1)               # generator loss: minimise D's score on the generated data only



    opt_D.zero_grad()

    D_loss.backward(retain_graph=True)      # retain the computation graph so G_loss.backward() can reuse it below

    opt_D.step()



    opt_G.zero_grad()

    G_loss.backward()

    opt_G.step()



    if step % 200 == 0:  # plotting

        plt.cla()

        plt.plot(PAINT_POINTS[0], G_paintings.data.numpy()[0], c='#4AD631', lw=3, label='Generated painting',)

        bound = [0, 0.5] if labels.data[0, 0] == 0 else [0.5, 1]

        plt.plot(PAINT_POINTS[0], 2 * np.power(PAINT_POINTS[0], 2) + bound[1], c='#74BCFF', lw=3, label='upper bound')

        plt.plot(PAINT_POINTS[0], 1 * np.power(PAINT_POINTS[0], 2) + bound[0], c='#FF9359', lw=3, label='lower bound')

        plt.text(-.5, 2.3, 'D accuracy=%.2f (0.5 for D to converge)' % prob_artist0.data.numpy().mean(), fontdict={'size': 13})

        plt.text(-.5, 2, 'D score= %.2f (-1.38 for G to converge)' % -D_loss.data.numpy(), fontdict={'size': 13})

        plt.text(-.5, 1.7, 'Class = %i' % int(labels.data[0, 0]), fontdict={'size': 13})

        plt.ylim((0, 3));plt.legend(loc='upper right', fontsize=10);plt.draw();plt.pause(0.1)



plt.ioff()

plt.show()



# plot a generated painting for upper class

z = torch.randn(1, N_IDEAS)

label = torch.FloatTensor([[1.]])     # for upper class

G_inputs = torch.cat((z, label), 1)

G_paintings = G(G_inputs)

plt.plot(PAINT_POINTS[0], G_paintings.data.numpy()[0], c='#4AD631', lw=3, label='G painting for upper class',)

plt.plot(PAINT_POINTS[0], 2 * np.power(PAINT_POINTS[0], 2) + bound[1], c='#74BCFF', lw=3, label='upper bound (class 1)')

plt.plot(PAINT_POINTS[0], 1 * np.power(PAINT_POINTS[0], 2) + bound[0], c='#FF9359', lw=3, label='lower bound (class 1)')

plt.ylim((0, 3));plt.legend(loc='upper right', fontsize=10);plt.show()

 

 

Run result:

 

 

 

--------------------------------------Why torch is dynamic--------------------------------------------------

##############################################################################################################################################

 

 

tensorflow 不同  

TensorFlow is a static framework: it first builds a static computation graph, then feeds the data into that graph to run the computation, and then back-propagates to update the parameters.

 

In some real-world cases, batch_size or time_step may not be fixed in advance,

but in a static graph the two cannot both be left undetermined at the same time, or the program will throw an error.
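A minimal sketch of what "dynamic" means in practice (my own illustration, not part of the original example): the same nn.RNN module accepts a different number of time steps on every call, because the computation graph is rebuilt on each forward pass.

import torch
import torch.nn as nn

rnn = nn.RNN(input_size=1, hidden_size=32, batch_first=True)

for t in (5, 13, 8):              # a different time_step on every iteration
    x = torch.randn(1, t, 1)      # shape (batch, time_step, input_size)
    out, h = rnn(x)               # no static graph needs to be rebuilt or re-compiled
    print(out.shape)              # torch.Size([1, t, 32])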

 

 

The example below uses an RNN with a random time_step to demonstrate this.

 

 

Complete code:

 


import torch

from torch import nn

import numpy as np

import matplotlib.pyplot as plt



# torch.manual_seed(1)    # reproducible



# Hyper Parameters

INPUT_SIZE = 1          # rnn input size / image width

LR = 0.02               # learning rate





class RNN(nn.Module):

    def __init__(self):

        super(RNN, self).__init__()



        self.rnn = nn.RNN(

            input_size=1,

            hidden_size=32,     # rnn hidden unit

            num_layers=1,       # number of rnn layer

            batch_first=True,   # input & output will has batch size as 1s dimension. e.g. (batch, time_step, input_size)

        )

        self.out = nn.Linear(32, 1)



    def forward(self, x, h_state):

        # x (batch, time_step, input_size)

        # h_state (n_layers, batch, hidden_size)

        # r_out (batch, time_step, output_size)

        r_out, h_state = self.rnn(x, h_state)



        outs = []                                   # this is where you can find torch is dynamic

        for time_step in range(r_out.size(1)):      # calculate output for each time step

            outs.append(self.out(r_out[:, time_step, :]))

        return torch.stack(outs, dim=1), h_state





rnn = RNN()

print(rnn)



optimizer = torch.optim.Adam(rnn.parameters(), lr=LR)   # optimize all cnn parameters

loss_func = nn.MSELoss()                                # the target label is not one-hotted



h_state = None   # for initial hidden state



plt.figure(1, figsize=(12, 5))

plt.ion()   # continuously plot



########################  Below is different #########################



################ static time steps ##########

# for step in range(60):

#     start, end = step * np.pi, (step+1)*np.pi   # time steps

#     # use sin predicts cos

#     steps = np.linspace(start, end, 10, dtype=np.float32)



################ dynamic time steps #########

step = 0

for i in range(60):

    dynamic_steps = np.random.randint(1, 4)  # random number of time steps, so each iteration feeds a sequence of a different length

    start, end = step * np.pi, (step + dynamic_steps) * np.pi  # different time steps length

    step += dynamic_steps



    # use sin predicts cos

    steps = np.linspace(start, end, 10 * dynamic_steps, dtype=np.float32)



#######################  Above is different ###########################



    print(len(steps))       # print how many time step feed to RNN



    x_np = np.sin(steps)    # float32 for converting torch FloatTensor

    y_np = np.cos(steps)



    x = torch.from_numpy(x_np[np.newaxis, :, np.newaxis])    # shape (batch, time_step, input_size)

    y = torch.from_numpy(y_np[np.newaxis, :, np.newaxis])



    prediction, h_state = rnn(x, h_state)   # rnn output

    # !! next step is important !!

    h_state = h_state.data        # repack the hidden state, break the connection from last iteration



    loss = loss_func(prediction, y)         # cross entropy loss

    optimizer.zero_grad()                   # clear gradients for this training step

    loss.backward()                         # backpropagation, compute gradients

    optimizer.step()                        # apply gradients



    # plotting

    plt.plot(steps, y_np.flatten(), 'r-')

    plt.plot(steps, prediction.data.numpy().flatten(), 'b-')

    plt.draw()

    plt.pause(0.05)



plt.ioff()

plt.show()

 

 

Run result:

 

 

 

 

---------------------------------------------GPU-----------------------------------------------------------------

###########################################################################################################################################

 

This section reuses the earlier CNN digit-recognition code

and simply adds a few method calls:

append .cuda() to the training data, the network model, and so on.
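As a side note (my addition, not part of the original post), the same thing can be written in a device-agnostic way with .to(device), which falls back to the CPU when no GPU is present; the code below sticks to the explicit .cuda() calls.

import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(10, 2).to(device)   # move all parameters to the chosen device
x = torch.randn(4, 10).to(device)     # the data must live on the same device as the model
print(model(x).device)                # cuda:0 if a GPU is available, otherwise cpu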

 


import torch

import torch.nn as nn

import torch.utils.data as Data

import torchvision



# torch.manual_seed(1)



EPOCH = 1

BATCH_SIZE = 50

LR = 0.001

DOWNLOAD_MNIST = False



train_data = torchvision.datasets.MNIST(root='./mnist/', train=True, transform=torchvision.transforms.ToTensor(), download=DOWNLOAD_MNIST,)

train_loader = Data.DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)



test_data = torchvision.datasets.MNIST(root='./mnist/', train=False)



# !!!!!!!! Change in here !!!!!!!!! #

test_x = torch.unsqueeze(test_data.test_data, dim=1).type(torch.FloatTensor)[:2000].cuda()/255.   # Tensor on GPU

test_y = test_data.test_labels[:2000].cuda()





class CNN(nn.Module):

    def __init__(self):

        super(CNN, self).__init__()

        self.conv1 = nn.Sequential(nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, stride=1, padding=2,),

                                   nn.ReLU(), nn.MaxPool2d(kernel_size=2),)

        self.conv2 = nn.Sequential(nn.Conv2d(16, 32, 5, 1, 2), nn.ReLU(), nn.MaxPool2d(2),)

        self.out = nn.Linear(32 * 7 * 7, 10)



    def forward(self, x):

        x = self.conv1(x)

        x = self.conv2(x)

        x = x.view(x.size(0), -1)

        output = self.out(x)

        return output



cnn = CNN()



# !!!!!!!! Change in here !!!!!!!!! #

cnn.cuda()      # Moves all model parameters and buffers to the GPU.



optimizer = torch.optim.Adam(cnn.parameters(), lr=LR)

loss_func = nn.CrossEntropyLoss()



for epoch in range(EPOCH):

    for step, (x, y) in enumerate(train_loader):



        # !!!!!!!! Change in here !!!!!!!!! #

        b_x = x.cuda()    # Tensor on GPU

        b_y = y.cuda()    # Tensor on GPU



        output = cnn(b_x)

        loss = loss_func(output, b_y)

        optimizer.zero_grad()

        loss.backward()

        optimizer.step()



        if step % 50 == 0:

            test_output = cnn(test_x)



            # !!!!!!!! Change in here !!!!!!!!! #

            pred_y = torch.max(test_output, 1)[1].cuda().data  # move the computation in GPU



            accuracy = torch.sum(pred_y == test_y).type(torch.FloatTensor) / test_y.size(0)

            print('Epoch: ', epoch, '| train loss: %.4f' % loss.data.cpu().numpy(), '| test accuracy: %.2f' % accuracy)





test_output = cnn(test_x[:10])



# !!!!!!!! Change in here !!!!!!!!! #

pred_y = torch.max(test_output, 1)[1].cuda().data # move the computation in GPU



print(pred_y, 'prediction number')

print(test_y[:10], 'real number')

 

 

Run result:

After a certain amount of training, the predicted values are printed and compared with the real values.

With GPU acceleration, the speed-up in training is clearly visible.
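One detail worth pointing out (my addition): tensors that live on the GPU have to be moved back to the CPU before they can be converted to numpy, which is why the code above calls loss.data.cpu().numpy() rather than loss.data.numpy().

import torch

if torch.cuda.is_available():
    t = torch.randn(3).cuda()
    print(t.cpu().numpy())    # .numpy() only works on CPU tensors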

 

 

 

 

-------------------------------------------dropout-----------------------------------------------------------------

#############################################################################################################################################

 

torch.nn.Dropout(0.5),

Each forward pass randomly drops a given percentage of the neurons from propagation,

and the additional training iterations compensate for the information that may be lost.

 

 

Dropout is needed during training but not at test time.

 

So dropout has to be switched off before testing (the networks are put into eval mode), as shown in the figure.
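A minimal sketch of that switch (my own illustration, relying on PyTorch's default inverted-dropout scaling):

import torch
import torch.nn as nn

drop = nn.Dropout(0.5)
x = torch.ones(1, 8)

drop.train()        # training mode: about half of the elements are zeroed,
print(drop(x))      # and the survivors are scaled by 1 / (1 - 0.5) = 2

drop.eval()         # evaluation mode: dropout becomes a no-op
print(drop(x))      # all ones again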

 

 

Complete code:

 


import torch

import matplotlib.pyplot as plt



# torch.manual_seed(1)    # reproducible



N_SAMPLES = 20

N_HIDDEN = 300



# training data

x = torch.unsqueeze(torch.linspace(-1, 1, N_SAMPLES), 1)

y = x + 0.3*torch.normal(torch.zeros(N_SAMPLES, 1), torch.ones(N_SAMPLES, 1))



# test data

test_x = torch.unsqueeze(torch.linspace(-1, 1, N_SAMPLES), 1)

test_y = test_x + 0.3*torch.normal(torch.zeros(N_SAMPLES, 1), torch.ones(N_SAMPLES, 1))



# show data

plt.scatter(x.data.numpy(), y.data.numpy(), c='magenta', s=50, alpha=0.5, label='train')

plt.scatter(test_x.data.numpy(), test_y.data.numpy(), c='cyan', s=50, alpha=0.5, label='test')

plt.legend(loc='upper left')

plt.ylim((-2.5, 2.5))

plt.show()



net_overfitting = torch.nn.Sequential(

    torch.nn.Linear(1, N_HIDDEN),

    torch.nn.ReLU(),

    torch.nn.Linear(N_HIDDEN, N_HIDDEN),

    torch.nn.ReLU(),

    torch.nn.Linear(N_HIDDEN, 1),

)



net_dropped = torch.nn.Sequential(

    torch.nn.Linear(1, N_HIDDEN),

    torch.nn.Dropout(0.5),  # drop 50% of the neuron

    torch.nn.ReLU(),

    torch.nn.Linear(N_HIDDEN, N_HIDDEN),

    torch.nn.Dropout(0.5),  # drop 50% of the neuron

    torch.nn.ReLU(),

    torch.nn.Linear(N_HIDDEN, 1),

)



print(net_overfitting)  # net architecture

print(net_dropped)



optimizer_ofit = torch.optim.Adam(net_overfitting.parameters(), lr=0.01)

optimizer_drop = torch.optim.Adam(net_dropped.parameters(), lr=0.01)

loss_func = torch.nn.MSELoss()



plt.ion()   # something about plotting



for t in range(500):

    pred_ofit = net_overfitting(x)

    pred_drop = net_dropped(x)

    loss_ofit = loss_func(pred_ofit, y)

    loss_drop = loss_func(pred_drop, y)



    optimizer_ofit.zero_grad()

    optimizer_drop.zero_grad()

    loss_ofit.backward()

    loss_drop.backward()

    optimizer_ofit.step()

    optimizer_drop.step()



    if t % 10 == 0:

        # change to eval mode in order to fix drop out effect

        net_overfitting.eval()

        net_dropped.eval()  # parameters for dropout differ from train mode



        # plotting

        plt.cla()

        test_pred_ofit = net_overfitting(test_x)

        test_pred_drop = net_dropped(test_x)

        plt.scatter(x.data.numpy(), y.data.numpy(), c='magenta', s=50, alpha=0.3, label='train')

        plt.scatter(test_x.data.numpy(), test_y.data.numpy(), c='cyan', s=50, alpha=0.3, label='test')

        plt.plot(test_x.data.numpy(), test_pred_ofit.data.numpy(), 'r-', lw=3, label='overfitting')

        plt.plot(test_x.data.numpy(), test_pred_drop.data.numpy(), 'b--', lw=3, label='dropout(50%)')

        plt.text(0, -1.2, 'overfitting loss=%.4f' % loss_func(test_pred_ofit, test_y).data.numpy(), fontdict={'size': 20, 'color':  'red'})

        plt.text(0, -1.5, 'dropout loss=%.4f' % loss_func(test_pred_drop, test_y).data.numpy(), fontdict={'size': 20, 'color': 'blue'})

        plt.legend(loc='upper left'); plt.ylim((-2.5, 2.5));plt.pause(0.1)



        # change back to train mode

        net_overfitting.train()

        net_dropped.train()



plt.ioff()

plt.show()

 

 

Run result:

 

 

 

----------------------------------batch normalization-----------------------------------------------------

#############################################################################################################################################

 

 

 

Each BN layer is added between a fully connected layer and its activation function.

It is a way of pre-processing the data.

It processes the output of the fully connected layer so that the data do not sit in the saturated (ineffective) region of the activation function, where the neurons would gradually die off as training goes on.

Take the tanh activation as an example:

after passing through the BN layer, the data are pulled back to roughly zero mean and unit variance, which helps avoid exploding and vanishing gradients.
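A minimal sketch of that layer ordering (my own example, separate from the full code below): BatchNorm1d sits between nn.Linear and the tanh activation and normalises each of the 10 pre-activation features over the mini-batch.

import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(1, 10),
    nn.BatchNorm1d(10),   # normalise the 10 pre-activation features per mini-batch
    nn.Tanh(),            # tanh now sees values centred near 0 instead of saturated ones
    nn.Linear(10, 1),
)

x = torch.randn(64, 1)    # a mini-batch of 64 samples
print(net(x).shape)       # torch.Size([64, 1])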

 

 

 

Complete code:

Because of the plotting logic, the code is not the easiest to follow.

 


import torch

from torch import nn

from torch.nn import init

import torch.utils.data as Data

import matplotlib.pyplot as plt

import numpy as np



# torch.manual_seed(1)    # reproducible

# np.random.seed(1)



# Hyper parameters

N_SAMPLES = 2000

BATCH_SIZE = 64

EPOCH = 12

LR = 0.03

N_HIDDEN = 8

ACTIVATION = torch.tanh

B_INIT = -0.2   # use a bad bias constant initializer



# training data

x = np.linspace(-7, 10, N_SAMPLES)[:, np.newaxis]

noise = np.random.normal(0, 2, x.shape)

y = np.square(x) - 5 + noise



# test data

test_x = np.linspace(-7, 10, 200)[:, np.newaxis]

noise = np.random.normal(0, 2, test_x.shape)

test_y = np.square(test_x) - 5 + noise



train_x, train_y = torch.from_numpy(x).float(), torch.from_numpy(y).float()

test_x = torch.from_numpy(test_x).float()

test_y = torch.from_numpy(test_y).float()



train_dataset = Data.TensorDataset(train_x, train_y)

train_loader = Data.DataLoader(dataset=train_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=2,)



# show data

plt.scatter(train_x.numpy(), train_y.numpy(), c='#FF9359', s=50, alpha=0.2, label='train')

plt.legend(loc='upper left')





class Net(nn.Module):

    def __init__(self, batch_normalization=False):

        super(Net, self).__init__()

        self.do_bn = batch_normalization

        self.fcs = []

        self.bns = []

        self.bn_input = nn.BatchNorm1d(1, momentum=0.5)   # for input data



        for i in range(N_HIDDEN):               # build hidden layers and BN layers

            input_size = 1 if i == 0 else 10

            fc = nn.Linear(input_size, 10)

            setattr(self, 'fc%i' % i, fc)       # IMPORTANT set layer to the Module

            self._set_init(fc)                  # parameters initialization

            self.fcs.append(fc)

            if self.do_bn:

                bn = nn.BatchNorm1d(10, momentum=0.5)

                setattr(self, 'bn%i' % i, bn)   # IMPORTANT set layer to the Module

                self.bns.append(bn)



        self.predict = nn.Linear(10, 1)         # output layer

        self._set_init(self.predict)            # parameters initialization



    def _set_init(self, layer):

        init.normal_(layer.weight, mean=0., std=.1)

        init.constant_(layer.bias, B_INIT)



    def forward(self, x):

        pre_activation = [x]

        if self.do_bn: x = self.bn_input(x)     # input batch normalization

        layer_input = [x]

        for i in range(N_HIDDEN):

            x = self.fcs[i](x)

            pre_activation.append(x)

            if self.do_bn: x = self.bns[i](x)   # batch normalization

            x = ACTIVATION(x)

            layer_input.append(x)

        out = self.predict(x)

        return out, layer_input, pre_activation



nets = [Net(batch_normalization=False), Net(batch_normalization=True)]



# print(*nets)    # print net architecture



opts = [torch.optim.Adam(net.parameters(), lr=LR) for net in nets]



loss_func = torch.nn.MSELoss()





def plot_histogram(l_in, l_in_bn, pre_ac, pre_ac_bn):

    for i, (ax_pa, ax_pa_bn, ax, ax_bn) in enumerate(zip(axs[0, :], axs[1, :], axs[2, :], axs[3, :])):

        [a.clear() for a in [ax_pa, ax_pa_bn, ax, ax_bn]]

        if i == 0:

            p_range = (-7, 10);the_range = (-7, 10)

        else:

            p_range = (-4, 4);the_range = (-1, 1)

        ax_pa.set_title('L' + str(i))

        ax_pa.hist(pre_ac[i].data.numpy().ravel(), bins=10, range=p_range, color='#FF9359', alpha=0.5);ax_pa_bn.hist(pre_ac_bn[i].data.numpy().ravel(), bins=10, range=p_range, color='#74BCFF', alpha=0.5)

        ax.hist(l_in[i].data.numpy().ravel(), bins=10, range=the_range, color='#FF9359');ax_bn.hist(l_in_bn[i].data.numpy().ravel(), bins=10, range=the_range, color='#74BCFF')

        for a in [ax_pa, ax, ax_pa_bn, ax_bn]: a.set_yticks(());a.set_xticks(())

        ax_pa_bn.set_xticks(p_range);ax_bn.set_xticks(the_range)

        axs[0, 0].set_ylabel('PreAct');axs[1, 0].set_ylabel('BN PreAct');axs[2, 0].set_ylabel('Act');axs[3, 0].set_ylabel('BN Act')

    plt.pause(0.01)





if __name__ == "__main__":

    f, axs = plt.subplots(4, N_HIDDEN + 1, figsize=(10, 5))

    plt.ion()  # something about plotting

    plt.show()



    # training

    losses = [[], []]  # recode loss for two networks



    for epoch in range(EPOCH):

        print('Epoch: ', epoch)

        layer_inputs, pre_acts = [], []

        for net, l in zip(nets, losses):

            net.eval()              # set eval mode to fix moving_mean and moving_var

            pred, layer_input, pre_act = net(test_x)

            l.append(loss_func(pred, test_y).data.item())

            layer_inputs.append(layer_input)

            pre_acts.append(pre_act)

            net.train()             # free moving_mean and moving_var

        plot_histogram(*layer_inputs, *pre_acts)     # plot histogram



        for step, (b_x, b_y) in enumerate(train_loader):

            for net, opt in zip(nets, opts):     # train for each network

                pred, _, _ = net(b_x)

                loss = loss_func(pred, b_y)

                opt.zero_grad()

                loss.backward()

                opt.step()    # it will also learns the parameters in Batch Normalization



    plt.ioff()



    # plot training loss

    plt.figure(2)

    plt.plot(losses[0], c='#FF9359', lw=3, label='Original')

    plt.plot(losses[1], c='#74BCFF', lw=3, label='Batch Normalization')

    plt.xlabel('step');plt.ylabel('test loss');plt.ylim((0, 2000));plt.legend(loc='best')



    # evaluation

    # set net to eval mode to freeze the parameters in batch normalization layers

    [net.eval() for net in nets]    # set eval mode to fix moving_mean and moving_var

    preds = [net(test_x)[0] for net in nets]

    plt.figure(3)

    plt.plot(test_x.data.numpy(), preds[0].data.numpy(), c='#FF9359', lw=4, label='Original')

    plt.plot(test_x.data.numpy(), preds[1].data.numpy(), c='#74BCFF', lw=4, label='Batch Normalization')

    plt.scatter(test_x.data.numpy(), test_y.data.numpy(), c='r', s=50, alpha=0.2, label='train')

    plt.legend(loc='best')

    plt.show()

 

 

Run result:

 
