Detail Enhancement Dehazing Method Based on Weighted Least Squares

Cited by: 0
Authors
Chen X. [1 ,2 ]
Yu H. [2 ]
Yang L. [3 ]
Zheng X. [1 ]
Zheng S. [2 ]
Affiliations
[1] School of Intelligent Manufacturing, Chongqing University of Arts and Sciences, Chongqing
[2] School of Mechatronics and Vehicle Engineering, Chongqing Jiaotong University, Chongqing
[3] Special Vehicle Research Institute, Chongqing Changan Industry (Group) Co., Ltd., Chongqing
Keywords
dehaze algorithm; detail enhancement; morphological cascade; quadtree algorithm; weighted least squares;
DOI: 10.15918/j.tbit1001-0645.2022.239
Abstract
Foggy weather degrades captured images, leaving them with low contrast and poor recognizability. To improve the operational stability of vision systems, an image dehazing algorithm is proposed in this paper. To eliminate the halo artifacts produced by current popular dehazing algorithms, a morphological cascade algorithm is proposed to estimate the scene transmittance. On this basis, a weighted least squares method is introduced to refine the transmittance so that it preserves sharp edge details while keeping the image smooth. Then, to handle images containing highlights and large bright-white regions, an improved hierarchical search based on the quadtree method is used to accurately estimate the atmospheric light value in the sky region. Finally, the atmospheric scattering model is introduced, and the estimated transmittance and atmospheric light value are fed into the model to obtain the dehazed image. Qualitative and quantitative comparisons with current popular dehazing algorithms show that the proposed method better preserves depth edges, color quality, and details, producing clearer fine details and more natural colors. The proposed algorithm was tested on commonly used dehazing images and the O-HAZE dataset. The results show that the mean square error is reduced by up to 7.0% and the signal-to-noise ratio is improved by up to 21%; compared with the baseline algorithms, the structural similarity and color information entropy parameters are improved by 32% and 1.1%, respectively. © 2023 Beijing Institute of Technology. All rights reserved.
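As a hedged illustration of the core refinement step, the Python sketch below applies Farbman-style weighted least squares smoothing to a rough transmission map, guided by the hazy image's log-luminance: the smoothness weights are small across strong guide edges, so the refined transmittance stays smooth in flat regions while keeping depth edges sharp. The function name and the lam, alpha, and eps parameters are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def wls_refine(t_rough, guide, lam=0.5, alpha=1.2, eps=1e-4):
    """Refine a rough transmission map by solving (I + lam*L_g) t = t_rough,
    where L_g is a spatially varying Laplacian built from the guide image
    (grayscale luminance in [0, 1]). Parameters are illustrative assumptions."""
    h, w = t_rough.shape
    n = h * w
    log_l = np.log(guide + eps)

    # Smoothness weights: small across strong edges of the guide,
    # large in flat regions, so depth edges in t survive the smoothing.
    wx = lam / (np.abs(np.diff(log_l, axis=1)) ** alpha + eps)  # (h, w-1)
    wy = lam / (np.abs(np.diff(log_l, axis=0)) ** alpha + eps)  # (h-1, w)

    idx = np.arange(n).reshape(h, w)

    # Symmetric off-diagonal entries of the weighted Laplacian (negated weights).
    rows = np.concatenate([idx[:, :-1].ravel(), idx[:, 1:].ravel(),
                           idx[:-1, :].ravel(), idx[1:, :].ravel()])
    cols = np.concatenate([idx[:, 1:].ravel(), idx[:, :-1].ravel(),
                           idx[1:, :].ravel(), idx[:-1, :].ravel()])
    vals = np.concatenate([-wx.ravel(), -wx.ravel(),
                           -wy.ravel(), -wy.ravel()])
    L = sp.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()

    # Diagonal: 1 from the data term plus the sum of incident weights.
    diag = 1.0 - np.asarray(L.sum(axis=1)).ravel()
    A = (L + sp.diags(diag)).tocsc()

    t = spsolve(A, t_rough.ravel())
    return np.clip(t.reshape(h, w), 0.05, 1.0)  # assumed lower bound on t
```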
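The remaining two steps can be sketched in the same spirit: a quadtree-style hierarchical search that repeatedly keeps the brightest quadrant to localize the atmospheric light A, followed by inversion of the atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)) to recover the scene radiance J. The min_size stopping threshold, the brightest-pixel selection rule, and the transmission floor t0 are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def atmospheric_light(img, min_size=32):
    """Quadtree-style hierarchical search for the atmospheric light A.
    img: float RGB image in [0, 1] with shape (h, w, 3)."""
    block = img
    # Descend into the brightest quadrant until the block is small enough.
    while min(block.shape[0], block.shape[1]) > min_size:
        h2, w2 = block.shape[0] // 2, block.shape[1] // 2
        quads = [block[:h2, :w2], block[:h2, w2:],
                 block[h2:, :w2], block[h2:, w2:]]
        block = max(quads, key=lambda q: q.mean())
    # Take the brightest pixel of the surviving block as A.
    flat = block.reshape(-1, 3)
    return flat[flat.sum(axis=1).argmax()]

def recover(img, t, A, t0=0.1):
    """Invert the scattering model I = J*t + A*(1 - t) for the scene radiance J.
    t0 bounds the transmission from below to avoid amplifying noise."""
    t = np.clip(t, t0, 1.0)[..., None]  # broadcast over the RGB channels
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

A full pipeline would estimate a rough transmittance first (the paper uses a morphological cascade, which is not reproduced here), refine it with wls_refine, and pass both results to recover.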
Pages: 803-811
Number of pages: 8