GANFuse: a novel multi-exposure image fusion method based on generative adversarial networks

Citations: 0
Authors
Zhiguang Yang
Youping Chen
Zhuliang Le
Yong Ma
Affiliations
[1] Huazhong University of Science and Technology,The State Key Laboratory of Digital Manufacturing Equipment and Technology
[2] Wuhan University,Electronic Information School
Keywords
Image fusion; Multi-exposure image; Generative adversarial network
DOI: not available
Abstract
In this paper, a novel multi-exposure image fusion method based on generative adversarial networks, termed GANFuse, is presented. Conventional multi-exposure image fusion methods improve fusion performance by designing sophisticated activity-level measurements and fusion rules, but these hand-crafted designs have had limited success on complex fusion tasks. Inspired by the recent FusionGAN, which first used generative adversarial networks (GANs) to fuse infrared and visible images with promising results, we improve its architecture and customize it for the task of extreme-exposure image fusion. Specifically, to preserve the content of both extreme-exposure input images in the fused result, we increase the number of discriminators so that each one distinguishes the fused image from one of the extreme-exposure inputs, while a single generator network is trained to produce the fused image. Through this adversarial relationship between the generator and the discriminators, the fused image retains more information from both extreme-exposure inputs, leading to better fusion performance. In addition, the proposed method is an end-to-end, unsupervised learning model: it avoids hand-crafted feature design and does not require ground-truth images for training. We conduct qualitative and quantitative experiments on a public dataset, and the results show that the proposed model outperforms existing multi-exposure image fusion methods in both visual quality and evaluation metrics.
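The dual-discriminator objective described in the abstract can be sketched as follows. The abstract does not give the exact loss terms, so the content term, its averaging form, and the weight `lam` are illustrative assumptions; only the structure (one adversarial term per discriminator, plus a term tying the fused image to both extreme-exposure inputs) follows the description:

```python
import math

def generator_loss(d_under, d_over, fused, under, over, lam=1.0):
    """Hypothetical GANFuse-style generator objective (sketch only).

    d_under, d_over: discriminator outputs in (0, 1) for the fused image,
        judged against the under- and over-exposed inputs respectively.
    fused, under, over: flattened pixel intensities in [0, 1].
    """
    # Adversarial terms: the generator is rewarded when BOTH
    # discriminators score the fused image as realistic exposure content.
    adversarial = -math.log(d_under) - math.log(d_over)
    # Content term (illustrative assumption): keep fused pixels close to
    # the information carried by the two extreme-exposure inputs.
    content = sum((f - 0.5 * (u + o)) ** 2
                  for f, u, o in zip(fused, under, over)) / len(fused)
    return adversarial + lam * content
```

In training, this loss would be minimized for the generator while each discriminator is separately trained to tell the fused image apart from its corresponding extreme-exposure input, which is what drives the fused result to retain content from both.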
Pages: 6133-6145
Number of pages: 12