Single Image Atmospheric Veil Removal Using New Priors for Better Genericity
Abstract
From an analysis of the priors used in state-of-the-art algorithms for single image defogging, a new prior is proposed to achieve better atmospheric veil removal. Our hypothesis relies on a physical model in which fog appears denser near the horizon than close to the camera. This leads to stronger restoration where the fog is denser, producing a more natural rendering. For this purpose, the Naka-Rushton function is used to modulate the atmospheric veil, according to empirical observations on synthetic foggy images. The parameters of this function are set from features of the input image. This method also prevents over-restoration and thus preserves the sky from artifacts and noise. The algorithm generalizes to different kinds of fog, airborne particles, and illumination conditions. The proposed method is extended to nighttime and underwater images by computing the atmospheric veil on each color channel. Qualitative and quantitative evaluations show the benefits of the proposed algorithm. The quantitative evaluation demonstrates its efficiency on four databases with different types of fog, highlighting the broad generalization achieved by the proposed algorithm, in contrast with most currently available deep learning techniques.
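The idea of modulating the atmospheric veil with a Naka-Rushton function can be illustrated with a short sketch. The Python snippet below is only a minimal illustration under simplifying assumptions: the sky intensity is normalized to 1, the raw veil is estimated with a simple channel-minimum plus median-filter scheme, and the Naka-Rushton parameters sigma and n are fixed by hand. It is not the authors' implementation, in which these parameters are derived from features of the input image.

    import numpy as np
    from scipy.ndimage import median_filter

    def naka_rushton(v, sigma, n):
        """Naka-Rushton response v**n / (v**n + sigma**n), mapping [0, 1] to [0, 1)."""
        return v**n / (v**n + sigma**n + 1e-12)

    def remove_veil(image, sigma=0.5, n=2.0, window=31, p=0.95):
        """Sketch of atmospheric veil removal with Naka-Rushton modulation.

        image    : float RGB array in [0, 1], shape (H, W, 3)
        sigma, n : Naka-Rushton parameters (hand-picked here; the paper sets
                   them from features of the input image)
        window   : median filter size for smoothing the raw veil estimate
        p        : fraction of the veil actually removed, to avoid over-restoration
        """
        # Raw atmospheric veil estimate: per-pixel minimum over color channels,
        # smoothed by a median filter (a simple stand-in for the paper's estimator).
        w = np.min(image, axis=2)
        veil = np.minimum(median_filter(w, size=window), w)

        # Naka-Rushton modulation: thin (near) fog is barely touched, while
        # dense (far) fog near the horizon is removed more aggressively,
        # which also helps preserve the sky from artifacts.
        veil_mod = p * veil * naka_rushton(veil, sigma, n)

        # Invert the standard fog model I = R * (1 - V) + V channel-wise
        # (sky intensity assumed normalized to 1).
        v3 = veil_mod[..., None]
        restored = (image - v3) / np.clip(1.0 - v3, 1e-3, None)
        return np.clip(restored, 0.0, 1.0)

For the nighttime and underwater extension mentioned in the abstract, the same modulation would instead be applied to a veil estimated separately on each color channel rather than on the channel-wise minimum.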
Source Code
Reference
@ARTICLE{jpt-ja21,
author = {Duminil, A. and Tarel, J.-P. and Br\'emond, R.},
title = {Single Image Atmospheric Veil Removal Using New Priors for Better Genericity},
journal = {Atmosphere},
volume = {12},
number = {6},
year = {2021},
article-number = {772},
pages = {1--21},
month = jun,
url = {http://perso.lcpc.fr/tarel.jean-philippe/publis/ja21.html},
doi = {10.3390/atmos12060772}
}
PDF file (20,635 KB)
(c) MDPI