A PyTorch Implementation of the SSIM (Structural Similarity Index) Loss Function

Introduction to SSIM

The structural similarity index (SSIM), proposed in [1], measures the structural similarity between two images. Unlike the widely used L2 loss, SSIM behaves more like the human visual system (HVS): it is sensitive to changes in local structure.

SSIM consists of three components, luminance, contrast, and structure, defined by the following formulas:

l(x, y) = \frac{2\mu_x \mu_y + c_1}{\mu_x^2 + \mu_y^2 + c_1}

c(x, y) = \frac{2\sigma_x \sigma_y + c_2}{\sigma_x^2 + \sigma_y^2 + c_2}

s(x, y) = \frac{\sigma_{xy} + c_3}{\sigma_x \sigma_y + c_3}

Combining the three terms gives the SSIM formula:

\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}

Here \mu_x and \mu_y are the means of images x and y, \sigma_x^2 and \sigma_y^2 are their variances, and \sigma_{xy} is their covariance. The constants c_1 = (k_1 L)^2 and c_2 = (k_2 L)^2 stabilize the division when the denominators are close to zero. By default k_1 = 0.01 and k_2 = 0.03. L is the dynamic range of the pixel values, e.g. L = 2^8 - 1 = 255 for an 8-bit image.
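
The single formula is obtained from the three components by choosing the structure constant as c_3 = c_2 / 2 and weighting the terms equally; the contrast and structure terms then combine as

c(x, y) \cdot s(x, y) = \frac{2\sigma_x \sigma_y + c_2}{\sigma_x^2 + \sigma_y^2 + c_2} \cdot \frac{2\sigma_{xy} + c_2}{2\sigma_x \sigma_y + c_2} = \frac{2\sigma_{xy} + c_2}{\sigma_x^2 + \sigma_y^2 + c_2}

so that \mathrm{SSIM}(x, y) = l(x, y) \cdot c(x, y) \cdot s(x, y) reduces to the expression above.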

A more detailed description can be found on Wikipedia [2].

PyTorch Implementation

A larger SSIM value means the two images are more similar, and SSIM = 1 when they are identical. To turn it into a loss to be minimized, the sign therefore has to be flipped, for example loss = 1 - SSIM. Because PyTorch provides automatic differentiation, only the forward computation of the SSIM loss needs to be implemented; the gradient is handled automatically. (A manual derivation of the gradient can be found in [3].)

The implementation below comes from GitHub [4].

import torch
import torch.nn.functional as F
from math import exp
import numpy as np


# Compute a 1D Gaussian kernel as a normalized vector
def gaussian(window_size, sigma):
    gauss = torch.Tensor([exp(-(x - window_size//2)**2/float(2*sigma**2)) for x in range(window_size)])
    return gauss/gauss.sum()


# Build the 2D Gaussian window as the outer product of two 1D Gaussian vectors.
# The channel argument replicates the window so that each input channel (e.g. 3 for RGB)
# is filtered with its own copy of the same kernel.
def create_window(window_size, channel=1):
    _1D_window = gaussian(window_size, 1.5).unsqueeze(1)
    _2D_window = _1D_window.mm(_1D_window.t()).float().unsqueeze(0).unsqueeze(0)
    window = _2D_window.expand(channel, 1, window_size, window_size).contiguous()
    return window
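
# Illustrative sanity check (not part of the original code): the window is normalized,
# so create_window(11, 3).sum(dim=(2, 3)) should be close to 1 for every channel, and
# its shape is (3, 1, 11, 11), the layout expected by F.conv2d with groups=channel.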


# Compute SSIM.
# This follows the SSIM formula directly, except that the local means are not plain
# pixel averages: they are obtained by convolving with the normalized Gaussian window.
# Variance and covariance use Var(X) = E[X^2] - E[X]^2 and cov(X, Y) = E[XY] - E[X]E[Y],
# where, as noted above, the expectations are again computed by Gaussian-window convolution.
def ssim(img1, img2, window_size=11, window=None, size_average=True, full=False, val_range=None):
    # Value range can be different from 255. Other common ranges are 1 (sigmoid) and 2 (tanh).
    if val_range is None:
        if torch.max(img1) > 128:
            max_val = 255
        else:
            max_val = 1

        if torch.min(img1) < -0.5:
            min_val = -1
        else:
            min_val = 0
        L = max_val - min_val
    else:
        L = val_range

    padd = 0
    (_, channel, height, width) = img1.size()
    if window is None:
        real_size = min(window_size, height, width)
        window = create_window(real_size, channel=channel).to(img1.device)

    mu1 = F.conv2d(img1, window, padding=padd, groups=channel)
    mu2 = F.conv2d(img2, window, padding=padd, groups=channel)

    mu1_sq = mu1.pow(2)
    mu2_sq = mu2.pow(2)
    mu1_mu2 = mu1 * mu2

    sigma1_sq = F.conv2d(img1 * img1, window, padding=padd, groups=channel) - mu1_sq
    sigma2_sq = F.conv2d(img2 * img2, window, padding=padd, groups=channel) - mu2_sq
    sigma12 = F.conv2d(img1 * img2, window, padding=padd, groups=channel) - mu1_mu2

    C1 = (0.01 * L) ** 2
    C2 = (0.03 * L) ** 2

    v1 = 2.0 * sigma12 + C2
    v2 = sigma1_sq + sigma2_sq + C2
    cs = torch.mean(v1 / v2)  # contrast sensitivity

    ssim_map = ((2 * mu1_mu2 + C1) * v1) / ((mu1_sq + mu2_sq + C1) * v2)

    if size_average:
        ret = ssim_map.mean()
    else:
        ret = ssim_map.mean(1).mean(1).mean(1)

    if full:
        return ret, cs
    return ret



# Module wrapper that caches the window so it can be re-used across calls
class SSIM(torch.nn.Module):
    def __init__(self, window_size=11, size_average=True, val_range=None):
        super(SSIM, self).__init__()
        self.window_size = window_size
        self.size_average = size_average
        self.val_range = val_range

        # Assume 1 channel for SSIM
        self.channel = 1
        self.window = create_window(window_size)

    def forward(self, img1, img2):
        (_, channel, _, _) = img1.size()

        # Re-create the window only when the channel count, dtype, or device changes
        if channel == self.channel and self.window.dtype == img1.dtype and self.window.device == img1.device:
            window = self.window
        else:
            window = create_window(self.window_size, channel).to(img1.device).type(img1.dtype)
            self.window = window
            self.channel = channel

        return ssim(img1, img2, window=window, window_size=self.window_size, size_average=self.size_average, val_range=self.val_range)
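
Below is a minimal usage sketch, not part of the original repository: net, opt, inputs, and target are hypothetical placeholders, and the images are assumed to be (N, C, H, W) tensors with values in [0, 1] so that val_range is inferred as 1.

import torch

ssim_module = SSIM(window_size=11, size_average=True)

def train_step(net, opt, inputs, target):
    # Hypothetical training step minimizing 1 - SSIM; all names here are placeholders.
    opt.zero_grad()
    pred = net(inputs)
    loss = 1.0 - ssim_module(pred, target)  # SSIM = 1 for identical images, so minimize 1 - SSIM
    loss.backward()
    opt.step()
    return loss.item()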

References

[1] Wang Z, Bovik A C, Sheikh H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE transactions on image processing, 2004, 13(4): 600-612.

[2] https://en.wikipedia.org/wiki/Structural_similarity

[3] Zhao H, Gallo O, Frosio I, et al. Loss functions for neural networks for image processing[J]. arXiv preprint arXiv:1511.08861, 2015.

[4] https://github.com/jorge-pessoa/pytorch-msssim
