Optical Flow | FlowNet | CVPR 2015 | Paper + PyTorch Code

  • Reposted from the WeChat public account 「機器學習煉丹術」 (Machine Learning Alchemy)
  • Author: 煉丹兄 (republished with the author's permission)
  • Contact: WeChat cyx645016617 (feel free to get in touch and learn together)
  • Paper: "FlowNet: Learning Optical Flow with Convolutional Networks"
  • Paper link: http://xxx.itp.ac.cn/abs/1504.06852


0 Overview

In my view, the paper makes two main contributions:

  • It proposes the FlowNet architecture, i.e. FlowNet v1 (since superseded by FlowNet 2.0). FlowNet v1 comes in two variants: FlowNetS (simple) and FlowNetC (correlation).
  • It introduces the well-known Flying Chairs dataset; anyone working on optical flow knows this amusing dataset of flying chairs.

1 FlowNetSimple

1.1 Feature extraction module

Convolutional neural networks are known to be very good at learning input-output mappings when enough labeled data is available. The paper therefore takes an end-to-end learning approach to predicting optical flow:

Given a dataset of image pairs with ground-truth optical flow, the network is trained to predict the x-y flow field directly from the images. The question is: what is a good architecture for this purpose?

A simple choice is to stack the two input images together and feed them through a fairly generic network, letting the network decide for itself how to process the image pair and extract motion information. This convolution-only architecture is called "FlowNetSimple":
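To make the input format concrete, here is a small sketch (with random tensors standing in for real frames) of how an image pair is stacked into the 6-channel input:

import torch

# Hypothetical example: two consecutive RGB frames, concatenated along the
# channel dimension into the 6-channel tensor that FlowNetSimple consumes.
img1 = torch.rand(1, 3, 384, 512)   # frame t
img2 = torch.rand(1, 3, 384, 512)   # frame t+1
x = torch.cat([img1, img2], dim=1)  # shape: (1, 6, 384, 512)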

[Figure: FlowNetSimple architecture]

A quick walkthrough of the network structure:

  • Each input image has three channels; the two images are concatenated into a single 6-channel input;
  • What follows is a regular stack of convolutions, interleaved with stride-2 downsampling;
  • The feature maps from several of the convolution layers are fed directly into the refinement part (the green hourglass-shaped block on the right of the figure), where they are fused to restore a larger spatial resolution;
  • Note that although the original model takes a 384x512 input, the final predicted flow field is only a quarter of that resolution, so it has to be upsampled afterwards to reach the input size.

1.2 Refinement

Now let's look at the refinement part. It is somewhat similar to U-Net, but it also has characteristics specific to optical flow models.

[Figure: refinement part of the network]

  • As the figure shows, essentially every feature block is the concatenation of three parts:
    • features obtained by deconvolving the previous, smaller feature map;
    • features obtained by first predicting a small flow field from the previous feature map and then deconvolving (upsampling) that flow;
    • the encoder feature map whose spatial size matches;
  • After these three parts are concatenated, the result becomes the input block at the next scale, and the process repeats, progressively enlarging the feature maps (a minimal sketch of one such step follows below).
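To make this concrete, here is a minimal sketch of a single refinement step, using the same layer shapes and channel counts as the coarsest stage of the FlowNetS code in the next section (random tensors stand in for real features):

import torch
import torch.nn as nn

deconv5 = nn.Sequential(                       # upsamples the coarse features
    nn.ConvTranspose2d(1024, 512, 4, 2, 1, bias=False),
    nn.LeakyReLU(0.1, inplace=True))
predict_flow6 = nn.Conv2d(1024, 2, 3, 1, 1, bias=False)   # coarse flow prediction
upsample_flow = nn.ConvTranspose2d(2, 2, 4, 2, 1, bias=False)

out_conv6 = torch.rand(1, 1024, 6, 8)          # coarsest encoder features
out_conv5 = torch.rand(1, 512, 12, 16)         # encoder features of matching size

flow6 = predict_flow6(out_conv6)               # 2-channel flow at the coarsest scale
concat5 = torch.cat(
    (out_conv5, deconv5(out_conv6), upsample_flow(flow6)), dim=1)
# concat5 has 512 + 512 + 2 = 1026 channels, which is exactly the input size
# of predict_flow5 and deconv4 in the code below.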

1.3 PyTorch implementation
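The FlowNetS code below (and the FlowNetC code later on) comes from a public PyTorch reproduction and relies on a few helper functions that are not shown in the article. The following is a plausible sketch of them, consistent with how they are used below (conv is the Conv2d + BatchNorm + LeakyReLU combination mentioned later); the exact details may differ from the original repository:

import torch
import torch.nn as nn

def conv(batchNorm, in_planes, out_planes, kernel_size=3, stride=1):
    # convolution, optionally followed by BatchNorm, always followed by LeakyReLU
    layers = [nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size,
                        stride=stride, padding=(kernel_size - 1) // 2,
                        bias=not batchNorm)]
    if batchNorm:
        layers.append(nn.BatchNorm2d(out_planes))
    layers.append(nn.LeakyReLU(0.1, inplace=True))
    return nn.Sequential(*layers)

def deconv(in_planes, out_planes):
    # transposed convolution that doubles the spatial resolution
    return nn.Sequential(
        nn.ConvTranspose2d(in_planes, out_planes, kernel_size=4,
                           stride=2, padding=1, bias=False),
        nn.LeakyReLU(0.1, inplace=True))

def predict_flow(in_planes):
    # 3x3 convolution producing a 2-channel (x, y) flow field
    return nn.Conv2d(in_planes, 2, kernel_size=3, stride=1, padding=1, bias=False)

def crop_like(input, target):
    # crop input spatially to the size of target (deconvolution can overshoot
    # by a pixel when the input size is not divisible by a power of two)
    if input.size()[2:] == target.size()[2:]:
        return input
    return input[:, :, :target.size(2), :target.size(3)]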

from torch.nn.init import kaiming_normal_, constant_


class FlowNetS(nn.Module):
    expansion = 1

    def __init__(self,batchNorm=True):
        super(FlowNetS,self).__init__()

        self.batchNorm = batchNorm
        self.conv1   = conv(self.batchNorm,   6,   64, kernel_size=7, stride=2)
        self.conv2   = conv(self.batchNorm,  64,  128, kernel_size=5, stride=2)
        self.conv3   = conv(self.batchNorm, 128,  256, kernel_size=5, stride=2)
        self.conv3_1 = conv(self.batchNorm, 256,  256)
        self.conv4   = conv(self.batchNorm, 256,  512, stride=2)
        self.conv4_1 = conv(self.batchNorm, 512,  512)
        self.conv5   = conv(self.batchNorm, 512,  512, stride=2)
        self.conv5_1 = conv(self.batchNorm, 512,  512)
        self.conv6   = conv(self.batchNorm, 512, 1024, stride=2)
        self.conv6_1 = conv(self.batchNorm,1024, 1024)

        self.deconv5 = deconv(1024,512)
        self.deconv4 = deconv(1026,256)
        self.deconv3 = deconv(770,128)
        self.deconv2 = deconv(386,64)

        self.predict_flow6 = predict_flow(1024)
        self.predict_flow5 = predict_flow(1026)
        self.predict_flow4 = predict_flow(770)
        self.predict_flow3 = predict_flow(386)
        self.predict_flow2 = predict_flow(194)

        self.upsampled_flow6_to_5 = nn.ConvTranspose2d(2, 2, 4, 2, 1, bias=False)
        self.upsampled_flow5_to_4 = nn.ConvTranspose2d(2, 2, 4, 2, 1, bias=False)
        self.upsampled_flow4_to_3 = nn.ConvTranspose2d(2, 2, 4, 2, 1, bias=False)
        self.upsampled_flow3_to_2 = nn.ConvTranspose2d(2, 2, 4, 2, 1, bias=False)

        for m in self.modules():
            if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d):
                kaiming_normal_(m.weight, 0.1)
                if m.bias is not None:
                    constant_(m.bias, 0)
            elif isinstance(m, nn.BatchNorm2d):
                constant_(m.weight, 1)
                constant_(m.bias, 0)

    def forward(self, x):
        out_conv2 = self.conv2(self.conv1(x))
        out_conv3 = self.conv3_1(self.conv3(out_conv2))
        out_conv4 = self.conv4_1(self.conv4(out_conv3))
        out_conv5 = self.conv5_1(self.conv5(out_conv4))
        out_conv6 = self.conv6_1(self.conv6(out_conv5))

        flow6       = self.predict_flow6(out_conv6)
        flow6_up    = crop_like(self.upsampled_flow6_to_5(flow6), out_conv5)
        out_deconv5 = crop_like(self.deconv5(out_conv6), out_conv5)

        concat5 = torch.cat((out_conv5,out_deconv5,flow6_up),1)
        flow5       = self.predict_flow5(concat5)
        flow5_up    = crop_like(self.upsampled_flow5_to_4(flow5), out_conv4)
        out_deconv4 = crop_like(self.deconv4(concat5), out_conv4)

        concat4 = torch.cat((out_conv4,out_deconv4,flow5_up),1)
        flow4       = self.predict_flow4(concat4)
        flow4_up    = crop_like(self.upsampled_flow4_to_3(flow4), out_conv3)
        out_deconv3 = crop_like(self.deconv3(concat4), out_conv3)

        concat3 = torch.cat((out_conv3,out_deconv3,flow4_up),1)
        flow3       = self.predict_flow3(concat3)
        flow3_up    = crop_like(self.upsampled_flow3_to_2(flow3), out_conv2)
        out_deconv2 = crop_like(self.deconv2(concat3), out_conv2)

        concat2 = torch.cat((out_conv2,out_deconv2,flow3_up),1)
        flow2 = self.predict_flow2(concat2)

        if self.training:
            return flow2,flow3,flow4,flow5,flow6
        else:
            return flow2

    def weight_parameters(self):
        return [param for name, param in self.named_parameters() if 'weight' in name]

    def bias_parameters(self):
        return [param for name, param in self.named_parameters() if 'bias' in name]
  • In the code, the deconv layers and the upsampled_flow layers are all transposed convolutions (deconvolutions); everything else is the Conv2d + BatchNorm + LeakyReLU combination;
  • The code matches the process described above exactly; I verified this while reproducing it.
  • Finally, note that in training mode the model returns five flow fields of different sizes, flow2 through flow6. This is of course for computing the loss: as the code shows, a multi-scale loss (similar in spirit to auxiliary losses) is applied to these outputs. The loss function is explained below.
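As a quick sanity check, the model can be run on a random input to see the five training outputs (a hypothetical smoke test, not taken from the article):

model = FlowNetS(batchNorm=True)
x = torch.rand(2, 6, 384, 512)        # a batch of two stacked image pairs

model.train()
flows = model(x)                       # (flow2, flow3, flow4, flow5, flow6)
print([tuple(f.shape) for f in flows])
# [(2, 2, 96, 128), (2, 2, 48, 64), (2, 2, 24, 32), (2, 2, 12, 16), (2, 2, 6, 8)]

model.eval()
flow2 = model(x)                       # inference mode returns only the finest flow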

2 Loss function

import torch
import torch.nn.functional as F


def EPE(input_flow, target_flow, sparse=False, mean=True):
    EPE_map = torch.norm(target_flow-input_flow,2,1)
    batch_size = EPE_map.size(0)
    if sparse:
        # invalid flow is defined with both flow coordinates to be exactly 0
        mask = (target_flow[:,0] == 0) & (target_flow[:,1] == 0)

        EPE_map = EPE_map[~mask]
    if mean:
        return EPE_map.mean()
    else:
        return EPE_map.sum()/batch_size
  • Looking at this part first: the core of the loss is the per-pixel L2 norm of the difference between the predicted flow and the ground-truth flow, i.e. the endpoint error (EPE);
  • sparse indicates whether the ground-truth flow is sparse or dense; in my experiments the flow is always dense, so I ignore the operations inside the sparse branch.
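As a small illustration (with made-up tensors), a prediction that is off by (1, 1) pixels at every location gives an EPE of sqrt(2):

pred = torch.zeros(4, 2, 96, 128)     # predicted flow
gt = torch.ones(4, 2, 96, 128)        # ground-truth flow
print(EPE(pred, gt))                  # tensor(1.4142) == sqrt(1**2 + 1**2)

The multi-scale loss below simply evaluates this EPE at every output scale and sums the weighted results: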
def multiscaleEPE(network_output, target_flow, weights=None, sparse=False):
    def one_scale(output, target, sparse):

        b, _, h, w = output.size()

        if sparse:
            target_scaled = sparse_max_pool(target, (h, w))
        else:
            target_scaled = F.interpolate(target, (h, w), mode='area')
        return EPE(output, target_scaled, sparse, mean=False)

    if type(network_output) not in [tuple, list]:
        network_output = [network_output]
    if weights is None:
        weights = [0.005, 0.01, 0.02, 0.08, 0.32]  # as in original article
    assert(len(weights) == len(network_output))

    loss = 0
    for output, weight in zip(network_output, weights):
        loss += weight * one_scale(output, target_flow, sparse)
    return loss
  • This is the complete loss. Recall that in training mode the model outputs a tuple of the form (flow2, flow3, flow4, flow5, flow6);
  • Inside the function, one_scale downsamples the ground-truth flow with F.interpolate to the same spatial size as each predicted flow, and then feeds the pair into EPE;
  • The loss computed at each scale is multiplied by a weight; the weights in the code are the values from the original article.
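Putting the model and the loss together, a minimal (hypothetical) training step could look like the following; the optimizer and learning rate are placeholders, not the paper's training schedule:

model = FlowNetS(batchNorm=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

imgs = torch.rand(2, 6, 384, 512)      # concatenated image pairs (batch of 2)
gt_flow = torch.rand(2, 2, 384, 512)   # full-resolution ground-truth flow

flows = model(imgs)                    # five flow fields in training mode
loss = multiscaleEPE(flows, gt_flow, weights=[0.005, 0.01, 0.02, 0.08, 0.32])

optimizer.zero_grad()
loss.backward()
optimizer.step()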

3 FlowNetCorr (correlation)

[Figure: FlowNetCorr architecture]
The difference from the Simple version: the two images are first passed through identical feature-extraction branches (similar to a Siamese network); the two resulting feature maps are then merged into a single feature map by the correlation operation proposed in the paper, after which the processing is the same as in the Simple version.

Let's look directly at the model code:

class FlowNetC(nn.Module):
    expansion = 1

    def __init__(self,batchNorm=True):
        super(FlowNetC,self).__init__()

        self.batchNorm = batchNorm
        self.conv1      = conv(self.batchNorm,   3,   64, kernel_size=7, stride=2)
        self.conv2      = conv(self.batchNorm,  64,  128, kernel_size=5, stride=2)
        self.conv3      = conv(self.batchNorm, 128,  256, kernel_size=5, stride=2)
        self.conv_redir = conv(self.batchNorm, 256,   32, kernel_size=1, stride=1)

        self.conv3_1 = conv(self.batchNorm, 473,  256)
        self.conv4   = conv(self.batchNorm, 256,  512, stride=2)
        self.conv4_1 = conv(self.batchNorm, 512,  512)
        self.conv5   = conv(self.batchNorm, 512,  512, stride=2)
        self.conv5_1 = conv(self.batchNorm, 512,  512)
        self.conv6   = conv(self.batchNorm, 512, 1024, stride=2)
        self.conv6_1 = conv(self.batchNorm,1024, 1024)

        self.deconv5 = deconv(1024,512)
        self.deconv4 = deconv(1026,256)
        self.deconv3 = deconv(770,128)
        self.deconv2 = deconv(386,64)

        self.predict_flow6 = predict_flow(1024)
        self.predict_flow5 = predict_flow(1026)
        self.predict_flow4 = predict_flow(770)
        self.predict_flow3 = predict_flow(386)
        self.predict_flow2 = predict_flow(194)

        self.upsampled_flow6_to_5 = nn.ConvTranspose2d(2, 2, 4, 2, 1, bias=False)
        self.upsampled_flow5_to_4 = nn.ConvTranspose2d(2, 2, 4, 2, 1, bias=False)
        self.upsampled_flow4_to_3 = nn.ConvTranspose2d(2, 2, 4, 2, 1, bias=False)
        self.upsampled_flow3_to_2 = nn.ConvTranspose2d(2, 2, 4, 2, 1, bias=False)

        for m in self.modules():
            if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d):
                kaiming_normal_(m.weight, 0.1)
                if m.bias is not None:
                    constant_(m.bias, 0)
            elif isinstance(m, nn.BatchNorm2d):
                constant_(m.weight, 1)
                constant_(m.bias, 0)

    def forward(self, x):
        x1 = x[:,:3]
        x2 = x[:,3:]

        out_conv1a = self.conv1(x1)
        out_conv2a = self.conv2(out_conv1a)
        out_conv3a = self.conv3(out_conv2a)

        out_conv1b = self.conv1(x2)
        out_conv2b = self.conv2(out_conv1b)
        out_conv3b = self.conv3(out_conv2b)

        out_conv_redir = self.conv_redir(out_conv3a)
        out_correlation = correlate(out_conv3a,out_conv3b)

        in_conv3_1 = torch.cat([out_conv_redir, out_correlation], dim=1)

        out_conv3 = self.conv3_1(in_conv3_1)
        out_conv4 = self.conv4_1(self.conv4(out_conv3))
        out_conv5 = self.conv5_1(self.conv5(out_conv4))
        out_conv6 = self.conv6_1(self.conv6(out_conv5))

        flow6       = self.predict_flow6(out_conv6)
        flow6_up    = crop_like(self.upsampled_flow6_to_5(flow6), out_conv5)
        out_deconv5 = crop_like(self.deconv5(out_conv6), out_conv5)

        concat5 = torch.cat((out_conv5,out_deconv5,flow6_up),1)
        flow5       = self.predict_flow5(concat5)
        flow5_up    = crop_like(self.upsampled_flow5_to_4(flow5), out_conv4)
        out_deconv4 = crop_like(self.deconv4(concat5), out_conv4)

        concat4 = torch.cat((out_conv4,out_deconv4,flow5_up),1)
        flow4       = self.predict_flow4(concat4)
        flow4_up    = crop_like(self.upsampled_flow4_to_3(flow4), out_conv3)
        out_deconv3 = crop_like(self.deconv3(concat4), out_conv3)

        concat3 = torch.cat((out_conv3,out_deconv3,flow4_up),1)
        flow3       = self.predict_flow3(concat3)
        flow3_up    = crop_like(self.upsampled_flow3_to_2(flow3), out_conv2a)
        out_deconv2 = crop_like(self.deconv2(concat3), out_conv2a)

        concat2 = torch.cat((out_conv2a,out_deconv2,flow3_up),1)
        flow2 = self.predict_flow2(concat2)

        if self.training:
            return flow2,flow3,flow4,flow5,flow6
        else:
            return flow2

    def weight_parameters(self):
        return [param for name, param in self.named_parameters() if 'weight' in name]

    def bias_parameters(self):
        return [param for name, param in self.named_parameters() if 'bias' in name]

The key part is this:

out_conv_redir = self.conv_redir(out_conv3a)
out_correlation = correlate(out_conv3a,out_conv3b)
in_conv3_1 = torch.cat([out_conv_redir, out_correlation], dim=1)
  • self.conv_redir is the usual Conv2d + BatchNorm + LeakyReLU combination;
  • The correlate function, however, relies on from spatial_correlation_sampler import spatial_correlation_sample, an external package that is not provided with this code, so I stopped short of reproducing this version of FlowNet. A rough sketch of what the correlation operation computes is given below.
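For completeness, here is a rough, unoptimized sketch of what the correlation layer computes, assuming the paper's settings of a maximum displacement of 20 and a displacement stride of 2: the feature map of the first image is compared against shifted versions of the second, giving 21 x 21 = 441 correlation channels, which together with the 32 channels from conv_redir yields the 473 input channels of conv3_1. This is only an illustration of the operation, not the optimized CUDA implementation the repository imports:

import torch
import torch.nn.functional as F

def naive_correlate(feat_a, feat_b, max_disp=20, stride=2):
    # For every displacement (dy, dx) on a stride-2 grid within +-max_disp,
    # take the channel-wise mean of the elementwise product of feat_a and the
    # correspondingly shifted feat_b: one output channel per displacement.
    _, _, h, w = feat_a.size()
    feat_b = F.pad(feat_b, (max_disp, max_disp, max_disp, max_disp))
    offsets = range(0, 2 * max_disp + 1, stride)
    out = [
        (feat_a * feat_b[:, :, dy:dy + h, dx:dx + w]).mean(dim=1, keepdim=True)
        for dy in offsets for dx in offsets
    ]
    # 21 * 21 = 441 channels; a LeakyReLU is applied as in the rest of the network
    return F.leaky_relu(torch.cat(out, dim=1), 0.1)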

4 Summary

  • FlowNet really is quite useful in some scenarios, and training converges fairly quickly.