Multi-Focus Image Fusion Based on Generative Adversarial Network

Cited by: 0
Authors
Jiang L. [1 ]
Zhang D. [2 ]
Pan B. [2 ]
Zheng P. [2 ]
Che L. [1 ]
Affiliations
[1] School of Information and Communication, Guilin University of Electronic Technology, Guilin
[2] School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin
Keywords
Generative adversarial network; Loss function; Multi-focus image fusion; U-Net;
DOI
10.3724/SP.J.1089.2021.18770
Abstract
Multi-focus image fusion combines a series of images of the same scene, each with a different focus, into a single image. To overcome the difficulty of extracting blur features from multi-focus images, a generative adversarial network model based on U-Net is proposed. First, the generator uses U-Net and SSE to extract features from the multi-focus images and fuse them. The discriminator then uses convolutional layers to distinguish generated fused results from real ones. The loss function comprises the generator's adversarial loss, a mapping loss, a gradient loss, a mean-squared-error loss, and the discriminator's adversarial loss. The training data for the generative adversarial network are generated from the Pascal VOC2012 dataset and include near-focus, far-focus, focus-mapping, and all-in-focus images. Experimental results show that the proposed model effectively extracts blur features from multi-focus images, and the fused images perform well on mutual information, phase congruency, and structural similarity. © 2021, Beijing China Science Journal Publishing Co. Ltd. All rights reserved.
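The abstract lists the terms of the composite generator objective (adversarial, mapping, gradient, and mean-squared-error losses). Below is a minimal NumPy sketch of how such a weighted sum might be assembled; the weights, the non-saturating adversarial form, the L1 mapping/gradient terms, and the finite-difference gradient operator are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gradient_magnitude(img):
    # Forward-difference approximation of the spatial gradient magnitude
    # (a stand-in for whatever gradient operator the paper actually uses).
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.hypot(gx, gy)

def generator_loss(fused, reference, map_pred, map_gt, d_score,
                   w_adv=1.0, w_map=10.0, w_grad=5.0, w_mse=10.0):
    """Weighted sum of the four generator-side loss terms (hypothetical weights).

    fused     : fused image produced by the generator, values in [0, 1]
    reference : ground-truth all-in-focus image
    map_pred  : predicted focus map; map_gt: ground-truth focus map
    d_score   : discriminator's probability that `fused` is real, in (0, 1]
    """
    # Adversarial term: non-saturating log-loss on the discriminator score.
    l_adv = -np.log(d_score + 1e-8)
    # Mapping loss: L1 distance between predicted and ground-truth focus maps.
    l_map = np.mean(np.abs(map_pred - map_gt))
    # Gradient loss: preserve the edge structure of the all-in-focus reference.
    l_grad = np.mean(np.abs(gradient_magnitude(fused) -
                            gradient_magnitude(reference)))
    # Mean-squared-error loss against the all-in-focus reference.
    l_mse = np.mean((fused - reference) ** 2)
    return w_adv * l_adv + w_map * l_map + w_grad * l_grad + w_mse * l_mse
```

When the fused image matches the reference, the focus maps agree, and the discriminator is fully fooled (`d_score = 1.0`), every term vanishes and the loss is approximately zero; any mismatch in pixels, edges, or focus maps raises it.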
Pages: 1715-1725
Page count: 10