A New Real-World Video Dataset for the Comparison of Defogging Algorithms
Abstract
Video restoration for noise removal, deblurring or super-resolution is attracting increasing attention in the fields of image processing and computer vision. However, data-driven approaches to video fog removal remain rare, due to the lack of datasets containing videos in both clear and foggy conditions, which are required for deep learning and benchmarking. A new dataset, called REVIDE, was recently proposed for just that purpose. In this paper, we follow the same approach and propose a new REal-world VIdeo dataset for the comparison of Defogging Algorithms (VIREDA), with various fog densities and fog-free ground truths. This small database can serve as a test set for defogging algorithms. A video defogging algorithm, still under development, is also outlined; its key idea is to exploit temporal redundancy to minimize artefacts and exposure variations between frames. Inspired by the success of Transformer architectures in deep learning for various applications, we select this kind of architecture for the neural network, in order to show the relevance of the proposed dataset.
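The defogging algorithm itself is not detailed on this page. Purely as an illustration of the idea stated in the abstract, and not the authors' method, the sketch below shows one way a Transformer-style attention layer can exploit temporal redundancy across frames in PyTorch; the class name, tensor layout and hyper-parameters are assumptions.

```python
import torch
import torch.nn as nn


class TemporalAttentionBlock(nn.Module):
    """Hypothetical sketch: self-attention over the temporal axis of
    per-frame features, so each restored frame can borrow information
    from neighbouring frames (one way to damp flicker and exposure
    variations between frames)."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, frames, height, width, channels)
        b, t, h, w, c = feats.shape
        # Treat every spatial location as an independent sequence of t frames.
        x = feats.permute(0, 2, 3, 1, 4).reshape(b * h * w, t, c)
        q = self.norm(x)
        y, _ = self.attn(q, q, q)
        x = x + y  # residual connection across the temporal attention
        return x.reshape(b, h, w, t, c).permute(0, 3, 1, 2, 4)


if __name__ == "__main__":
    block = TemporalAttentionBlock(channels=32)
    clip = torch.randn(1, 5, 16, 16, 32)  # 5-frame feature clip (toy sizes)
    out = block(clip)
    print(out.shape)  # torch.Size([1, 5, 16, 16, 32])
```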
VIREDA Database
Reference
@InProceedings{jpt-aspai22,
  author    = {Duminil, A. and Tarel, J.-P. and Br\'emond, R.},
  title     = {A New Real-World Video Dataset for the Comparison of Defogging Algorithms},
  booktitle = {International Conference on Advances in Signal Processing and Artificial Intelligence (ASPAI'22)},
  date      = {October 19-21},
  address   = {Corfu, Greece},
  year      = {2022},
  url       = {http://perso.lcpc.fr/tarel.jean-philippe/publis/aspai22.html}
}
PDF file (442 KB)
(c) IFSA