[Paper Quick Read] Image Enhancement Series: Recent 2020 TIP Dehazing Algorithms (Abstracts, Network Architecture Diagrams, and Paper Links)


This post first collects the abstracts, network architecture diagrams, and paper links of recent 2020 TIP dehazing papers; more detailed write-ups will be added in later updates.

Contents

1  Task-Oriented Network for Image Dehazing

Abstract

Method

2  Zero-Shot Image Dehazing

Abstract

Method

3  Dehazing Evaluation: Real-World Benchmark Datasets, Criteria, and Baselines

Abstract

Method

4  Fusion of Heterogeneous Adversarial Networks for Single Image Dehazing

Abstract

Method

5  FAMED-Net: A Fast and Accurate Multi-Scale End-to-End Dehazing Network

Abstract

Method

6  Fast Single Image Dehazing Using Saturation Based Transmission Map Estimation

Abstract

Method

7  Radiance–Reflectance Combined Optimization and Structure-Guided ℓ0-Norm for Single Image Dehazing

Abstract

Method


1  Task-Oriented Network for Image Dehazing

https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9088248

Abstract

Haze interferes with the transmission of scene radiance and significantly degrades the color and details of outdoor images. Existing deep neural network-based image dehazing algorithms usually adopt generic network architectures. Such network designs do not model the image formation process of haze well, which leads to dehazed images containing artifacts and haze residuals in some scenes. In this paper, we propose a task-oriented network for image dehazing, where the network design is motivated by the image formation process of haze. The task-oriented network involves a hybrid network containing an encoder-decoder network and a spatially variant recurrent neural network derived from the haze process. In addition, we develop a multi-stage dehazing algorithm to further improve accuracy by filtering haze residuals in a step-by-step fashion. To constrain the proposed network, we develop a dual composition loss, a content-based pixel-wise loss, and a total variation constraint. We train the proposed network in an end-to-end manner and analyze its effect on image dehazing. Experimental results demonstrate that the proposed algorithm achieves favorable performance against state-of-the-art dehazing methods.

Method

(Network structure diagram from the paper.)
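
The "image formation process of haze" referred to above is the classical atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)), with scene radiance J, transmission t, and atmospheric light A. As a reference point only (this is the physical model the network is built around, not the paper's network), here is a minimal NumPy sketch of the forward model and its inversion:

```python
import numpy as np

def synthesize_haze(J, t, A):
    """Classical atmospheric scattering model: I = J * t + A * (1 - t).

    J: clear image in [0, 1], shape (H, W, 3)
    t: per-pixel transmission in (0, 1], shape (H, W, 1)
    A: global atmospheric light, shape (3,)
    """
    return J * t + A * (1.0 - t)

def invert_haze(I, t, A, t_min=0.1):
    """Recover the scene radiance J from the model, given t and A."""
    t = np.clip(t, t_min, 1.0)   # avoid amplifying noise where t -> 0
    return np.clip((I - A) / t + A, 0.0, 1.0)
```

The paper's contribution lies in how the encoder-decoder, the spatially variant RNN, and the dual composition loss are derived from this model, not in the model itself.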

2  Zero-Shot Image Dehazing

https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9170880

Abstract

In this article, we study two less-touched challenging problems in single image dehazing neural networks, namely, how to remove haze from a given image in an unsupervised and zero-shot manner. To this end, we propose a novel method, termed Zero-shot Image Dehazing (ZID), based on the idea of layer disentanglement, which views a hazy image as the entanglement of several "simpler" layers, i.e., a haze-free image layer, a transmission map layer, and an atmospheric light layer. The major advantages of the proposed ZID are two-fold. First, it is an unsupervised method that does not use any clean images, including hazy-clean pairs, as ground truth. Second, ZID is a "zero-shot" method, which uses only the observed single hazy image to perform learning and inference. In other words, it does not follow the conventional paradigm of training a deep model on a large-scale dataset. These two advantages enable our method to avoid labor-intensive data collection and the domain shift issue of using synthetic hazy images to handle real-world images. Extensive comparisons show the promising performance of our method compared with 15 approaches in qualitative and quantitative evaluations. The source code can be found at www.pengxi.me.

Method

(Network structure diagrams from the paper.)
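
ZID's layer disentanglement means the haze-free layer J, the transmission layer t, and the atmospheric light A are all recovered from the single observed image. The PyTorch sketch below is not the authors' architecture (ZID parameterizes the layers with sub-networks and additional priors); it only illustrates the zero-shot reconstruction constraint I = J·t + A·(1 − t), using free variables and a simple smoothness term on t as stand-ins:

```python
import torch

def zero_shot_decompose(hazy, steps=500, lam_tv=0.1, lr=0.05):
    """Minimal sketch of the zero-shot idea: fit J, t, A to one hazy image.

    hazy: tensor of shape (3, H, W) in [0, 1]. The real ZID parameterizes the
    three layers with small networks; here they are free variables, which is
    only meant to illustrate the reconstruction constraint I = J*t + A*(1-t).
    """
    J = hazy.detach().clone().requires_grad_(True)         # haze-free layer
    t_logit = torch.zeros(1, *hazy.shape[1:], requires_grad=True)
    A = torch.full((3, 1, 1), 0.8, requires_grad=True)     # atmospheric light
    opt = torch.optim.Adam([J, t_logit, A], lr=lr)
    for _ in range(steps):
        t = torch.sigmoid(t_logit)                         # keep t in (0, 1)
        recon = J * t + A * (1.0 - t)
        # Simple total-variation smoothness on the transmission map.
        tv = (t[..., 1:, :] - t[..., :-1, :]).abs().mean() + \
             (t[..., :, 1:] - t[..., :, :-1]).abs().mean()
        loss = (recon - hazy).pow(2).mean() + lam_tv * tv
        opt.zero_grad()
        loss.backward()
        opt.step()
    return J.detach().clamp(0, 1), torch.sigmoid(t_logit).detach(), A.detach()
```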

3  Dehazing Evaluation: Real-World Benchmark Datasets, Criteria, and Baselines

https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9099036

Abstract

On benchmark images, modern dehazing methods are able to achieve very comparable results whose differences are too subtle for people to qualitatively judge. Thus, it is imperative to adopt quantitative evaluation on a vast number of hazy images. However, existing quantitative evaluation schemes are not convincing due to a lack of appropriate datasets and poor correlations between metrics and human perceptions. In this work, we attempt to address these issues, and we make two contributions. First, we establish two benchmark datasets, i.e., the BEnchmark Dataset for Dehazing Evaluation (BeDDE) and the EXtension of the BeDDE (exBeDDE), which had been lacking for a long period of time. The BeDDE is used to evaluate dehazing methods via full reference image quality assessment (FR-IQA) metrics. It provides hazy images, clear references, haze level labels, and manually labeled masks that indicate the regions of interest (ROIs) in image pairs. The exBeDDE is used to assess the performance of dehazing evaluation metrics. It provides extra dehazed images and subjective scores from people. To the best of our knowledge, the BeDDE is the first dehazing dataset whose image pairs were collected in natural outdoor scenes without any simulation. Second, we provide a new insight that dehazing involves two separate aspects, i.e., visibility restoration and realness restoration, which should be evaluated independently; thus, to characterize them, we establish two criteria, i.e., the visibility index (VI) and the realness index (RI), respectively. The effectiveness of the criteria is verified through extensive experiments. Furthermore, 14 representative dehazing methods are evaluated as baselines using our criteria on BeDDE. Our datasets and relevant code are available at https://github.com/xiaofeng94/BeDDE-for-defogging.

Method

Figure: visibility index (VI).

Figure: flowchart of the realness index (RI) computation.
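
BeDDE scores a dehazed result against a real clear reference using full-reference IQA computed inside manually labeled ROIs; the precise VI and RI definitions are given in the paper. The snippet below only illustrates this ROI-restricted full-reference scoring pattern, with SSIM from scikit-image as an arbitrary stand-in metric rather than the paper's VI or RI:

```python
import numpy as np
from skimage.metrics import structural_similarity

def roi_masked_ssim(dehazed, reference, roi_mask):
    """Score a dehazed result against a clear reference inside an ROI only.

    Not the paper's VI/RI definition, just an illustration of ROI-restricted
    full-reference evaluation. Images are float arrays in [0, 1] with shape
    (H, W, 3); roi_mask is a boolean (H, W) array.
    """
    _, ssim_map = structural_similarity(
        dehazed, reference, channel_axis=-1, data_range=1.0, full=True)
    # Average the per-pixel SSIM map over the labeled region of interest.
    return float(ssim_map[roi_mask].mean())
```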

4  Fusion of Heterogeneous Adversarial Networks for Single Image Dehazing

https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9018375

Abstract

In this paper, we propose a novel image dehazing method. Typical deep learning models for dehazing are trained on paired synthetic indoor datasets. Therefore, these models may be effective for indoor image dehazing but less so for outdoor images. We propose a heterogeneous Generative Adversarial Network (GAN)-based method composed of a cycle-consistent GAN (CycleGAN) for producing haze-clear images and a conditional GAN (cGAN) for preserving textural details. We introduce a novel loss function in the training of the fused network to minimize GAN-generated artifacts, recover fine details, and preserve color components. These networks are fused via a convolutional neural network (CNN) to generate the dehazed image. Extensive experiments demonstrate that the proposed method significantly outperforms state-of-the-art methods on both synthetic and real-world hazy images.

Method

(Network structure diagrams from the paper.)
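
The two heterogeneous branches (CycleGAN for haze removal, cGAN for texture preservation) are merged by a CNN. The toy fusion head below shows only the general pattern of blending two candidate dehazed images; its depth, width, and activations are assumptions, not the paper's design:

```python
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    """Toy fusion head: blend two dehazed estimates with a small CNN.

    The channel widths and depth here are arbitrary illustrations; the
    paper's fusion network and losses are defined in the reference above.
    """
    def __init__(self, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(6, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, out_cyclegan, out_cgan):
        # Concatenate the two candidate dehazed images along channels.
        return self.body(torch.cat([out_cyclegan, out_cgan], dim=1))
```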

5  FAMED-Net: A Fast and Accurate Multi-Scale End-to-End Dehazing Network

https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8753731

Abstract

Single image dehazing is a critical image pre-processing step for subsequent high-level computer vision tasks. However, it remains challenging due to its ill-posed nature. Existing dehazing models tend to suffer from model over-complexity and computational inefficiency or have limited representation capacity. To tackle these challenges, we propose a fast and accurate multi-scale end-to-end dehazing network, called FAMED-Net, which comprises encoders at three scales and a fusion module to efficiently and directly learn the haze-free image. Each encoder consists of cascaded and densely connected point-wise convolutional layers and pooling layers. Since no convolutional kernels larger than 1×1 are used and features are reused layer by layer, FAMED-Net is lightweight and computationally efficient. Thorough empirical studies on public synthetic datasets (including RESIDE) and real-world hazy images demonstrate the superiority of FAMED-Net over other representative state-of-the-art models with respect to model complexity, computational efficiency, restoration accuracy, and cross-set generalization. The code will be made publicly available.

Method

(Network structure diagrams from the paper.)
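
The efficiency claim rests on using only point-wise (1×1) convolutions with dense feature reuse inside each encoder. Below is a minimal sketch of a densely connected 1×1-convolution block; the channel widths and depth are illustrative and not FAMED-Net's:

```python
import torch
import torch.nn as nn

class PointwiseDenseBlock(nn.Module):
    """Sketch of a densely connected block built only from 1x1 convolutions.

    Illustrates why such a block stays lightweight: every layer is point-wise
    and reuses all earlier feature maps. Widths and depth are not the paper's.
    """
    def __init__(self, in_ch=3, growth=16, layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=1), nn.ReLU(inplace=True)))
            ch += growth  # dense connectivity: each new output is concatenated

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)
```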

6  Fast Single Image Dehazing Using Saturation Based Transmission Map Estimation

https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8882514

Abstract

Single image dehazing has been a challenging problem because of its ill-posed nature. For this reason, numerous efforts have been made in the field of haze removal. This paper proposes a simple, fast, and powerful algorithm for haze removal. The medium transmission is derived as a function of the saturation of the scene radiance only, and the saturation of the scene radiance is estimated using a simple stretching method. A different medium transmission can be estimated for each pixel because this method does not assume that transmission is constant within a small patch. Furthermore, this paper presents a color-veil removal algorithm based on the white balance technique, which is useful for images with fine or yellow dust. The proposed algorithm requires no training, priors, or refinement process. The simulation results show that the proposed dehazing scheme outperforms state-of-the-art dehazing approaches in terms of both computational complexity and dehazing efficiency.

Method

(Figures from the paper.)
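
A useful worked detail: normalizing the hazy image by the atmospheric light A gives Î_c = J_c·t + (1 − t) per channel; taking the max and min over channels and using the HSV saturation S = (max − min)/max for both Î and J yields t = 1 − Î_max·(1 − S_I/S_J). So the transmission is per-pixel and depends only on the hazy image and the (unknown) radiance saturation S_J, which the paper estimates with its stretching method. The sketch below uses that relation but substitutes a crude multiplicative stretch for the paper's S_J estimator, so treat it as an illustration of the pipeline, not the published algorithm:

```python
import numpy as np

def dehaze_saturation(I, A, stretch=1.3, t_min=0.1):
    """Sketch of saturation-based per-pixel transmission estimation.

    From the haze model normalized by atmospheric light A (I_n = J*t + 1 - t),
    the HSV saturations of the hazy image (S_I) and of the scene radiance
    (S_J) satisfy  t = 1 - I_n_max * (1 - S_I / S_J).  The multiplicative
    stretch below is only a stand-in for the paper's estimator of S_J.
    """
    I_n = np.clip(I / A, 1e-6, 1.0)                 # normalize by airlight
    v = I_n.max(axis=2)                             # HSV value of hazy image
    s_hazy = (v - I_n.min(axis=2)) / np.maximum(v, 1e-6)
    s_scene = np.clip(stretch * s_hazy, 1e-6, 1.0)  # stand-in saturation stretch
    t = np.clip(1.0 - v * (1.0 - s_hazy / s_scene), t_min, 1.0)
    # Invert the haze model with the per-pixel transmission.
    J = (I - A) / t[..., None] + A
    return np.clip(J, 0.0, 1.0), t
```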

______________________

A related paper from IEEE Transactions on Multimedia is also recommended:

7  Radiance–Reflectance Combined Optimization and Structure-Guided ℓ0-Norm for Single Image Dehazing

https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8734728

github: https://github.com/JoongcholShin/RRO-Dehazing

Abstract

Outdoor images are subject to degradation in contrast and color because atmospheric particles scatter incoming light to a camera. Existing model-based dehazing methods built on haze models cannot avoid dehazing artifacts. These artifacts include color distortion and over-enhancement around object boundaries, caused by incorrect transmission estimation due to depth errors near the skyline and wrong haze information, especially in bright objects. To overcome this problem, we present a novel optimization-based dehazing algorithm that combines radiance and reflectance components with an additional refinement using a structure-guided ℓ0-norm filter. More specifically, we first estimate a weak reflectance map and optimize the transmission map based on the estimated reflectance map. Next, we estimate the structure-guided ℓ0 transmission map to remove the dehazing artifacts. The experimental results show that the proposed method outperforms state-of-the-art algorithms in terms of qualitative and quantitative measures on simulated image pairs. In addition, real-world enhancement results demonstrate that the proposed method can provide a high-quality image without undesired artifacts. Furthermore, the guided ℓ0-norm filter can remove textures while preserving edges, which is useful for general image enhancement algorithms.


Method

(Figures from the paper.)
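
The closing remark about removing textures while preserving edges is the hallmark of ℓ0 gradient minimization, which penalizes the count of non-zero image gradients. The paper's filter is structure-guided; the sketch below implements only the classical unguided ℓ0 smoothing (alternating a gradient-thresholding step with an FFT-domain quadratic solve, in the style of Xu et al.), as a reference for what an ℓ0-norm filter does:

```python
import numpy as np

def l0_smooth(img, lam=0.02, kappa=2.0, beta_max=1e5):
    """Minimal sketch of classical L0 gradient minimization smoothing.

    img: float array in [0, 1], shape (H, W) or (H, W, 3).
    lam: weight on the L0 gradient penalty (larger = smoother).
    """
    S = img.astype(np.float64)
    if S.ndim == 2:
        S = S[..., None]
    H, W, C = S.shape
    # Fourier multipliers of circular forward differences along x and y.
    FX = (np.exp(2j * np.pi * np.arange(W) / W) - 1.0)[None, :]   # (1, W)
    FY = (np.exp(2j * np.pi * np.arange(H) / H) - 1.0)[:, None]   # (H, 1)
    denom_grad = np.abs(FX) ** 2 + np.abs(FY) ** 2                # (H, W)
    F_I = np.fft.fft2(S, axes=(0, 1))
    beta = 2.0 * lam
    while beta < beta_max:
        # h, v subproblem: hard-threshold the gradients (the L0 step).
        dx = np.roll(S, -1, axis=1) - S
        dy = np.roll(S, -1, axis=0) - S
        mag = (dx ** 2 + dy ** 2).sum(axis=2, keepdims=True)
        h = np.where(mag < lam / beta, 0.0, dx)
        v = np.where(mag < lam / beta, 0.0, dy)
        # S subproblem: quadratic, solved exactly in the Fourier domain.
        F_h = np.fft.fft2(h, axes=(0, 1))
        F_v = np.fft.fft2(v, axes=(0, 1))
        numer = F_I + beta * (np.conj(FX)[..., None] * F_h +
                              np.conj(FY)[..., None] * F_v)
        denom = 1.0 + beta * denom_grad[..., None]
        S = np.real(np.fft.ifft2(numer / denom, axes=(0, 1)))
        beta *= kappa
    return np.clip(S.squeeze(), 0.0, 1.0)
```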
