Autoencoder-Based Collaborative Attention GAN for Multi-Modal Image Synthesis

Cited by: 6
Authors
Cao, Bing [1 ,2 ]
Cao, Haifang [1 ,3 ]
Liu, Jiaxu [1 ,3 ]
Zhu, Pengfei [1 ,3 ]
Zhang, Changqing [1 ,3 ]
Hu, Qinghua [1 ,3 ]
Affiliations
[1] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300403, Peoples R China
[2] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710000, Peoples R China
[3] Tianjin Univ, Haihe Lab Informat Technol Applicat Innovat, Tianjin 300403, Peoples R China
Keywords
Image synthesis; Collaboration; Task analysis; Generative adversarial networks; Feature extraction; Data models; Image reconstruction; Multi-modal image synthesis; collaborative attention; single-modal attention; multi-modal attention; TRANSLATION; NETWORK
DOI
10.1109/TMM.2023.3274990
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
Multi-modal images are required in a wide range of practical scenarios, from clinical diagnosis to public security. However, certain modalities may be incomplete or unavailable because of restricted imaging conditions, which commonly leads to decision bias in many real-world applications. Despite the significant advancement of existing image synthesis techniques, learning complementary information from multi-modal inputs remains challenging. To address this problem, we propose an autoencoder-based collaborative attention generative adversarial network (ACA-GAN) that uses the available multi-modal images to generate the missing ones. The collaborative attention mechanism deploys a single-modal attention module and a multi-modal attention module to effectively extract complementary information from the multiple available modalities. Considering the significant modal gap, we further develop an autoencoder network to extract the self-representation of the target modality, guiding the generative model to fuse target-specific information from the multiple modalities. This considerably improves cross-modal consistency with the desired modality, thereby greatly enhancing image synthesis performance. Quantitative and qualitative comparisons on various multi-modal image synthesis tasks demonstrate more precise and realistic results, highlighting the superiority of our approach over several prior methods.
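The abstract describes the architecture only at a high level. Below is a minimal PyTorch sketch of how the pieces it names (single-modal attention, multi-modal attention, and an autoencoder-derived self-representation of the target modality) could fit together. All module names, layer sizes, and the specific attention choices (squeeze-excite channel attention, softmax spatial fusion) are assumptions for illustration, not the paper's actual design; the adversarial discriminator and losses are omitted.

import torch
import torch.nn as nn

class SingleModalAttention(nn.Module):
    # Channel attention over one modality's features (squeeze-excite style; assumed).
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.mlp(self.pool(x))  # reweight channels per modality

class MultiModalAttention(nn.Module):
    # Spatial fusion weights conditioned jointly on all available modalities.
    def __init__(self, channels, num_modalities):
        super().__init__()
        self.score = nn.Conv2d(channels * num_modalities, num_modalities, 1)

    def forward(self, feats):  # feats: list of (B, C, H, W) tensors
        weights = torch.softmax(self.score(torch.cat(feats, dim=1)), dim=1)
        return sum(f * weights[:, m:m + 1] for m, f in enumerate(feats))

class TargetAutoencoder(nn.Module):
    # Learns a self-representation of the target modality (training-time guide).
    def __init__(self, in_ch=1, feat_ch=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, feat_ch, 3, 2, 1), nn.ReLU(inplace=True))
        self.dec = nn.Sequential(nn.ConvTranspose2d(feat_ch, in_ch, 4, 2, 1), nn.Tanh())

    def forward(self, y):
        z = self.enc(y)
        return self.dec(z), z  # reconstruction and latent self-representation

class CollaborativeGenerator(nn.Module):
    # Encodes each modality, applies single- and multi-modal attention, then
    # decodes the fused features into the missing target modality.
    def __init__(self, in_ch=1, out_ch=1, feat_ch=64, num_modalities=2):
        super().__init__()
        self.encoders = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, feat_ch, 3, 2, 1), nn.ReLU(inplace=True))
            for _ in range(num_modalities)])
        self.single_att = nn.ModuleList(
            [SingleModalAttention(feat_ch) for _ in range(num_modalities)])
        self.multi_att = MultiModalAttention(feat_ch, num_modalities)
        self.decoder = nn.Sequential(nn.ConvTranspose2d(feat_ch, out_ch, 4, 2, 1), nn.Tanh())

    def forward(self, inputs, target_repr=None):
        feats = [att(enc(x)) for enc, att, x in
                 zip(self.encoders, self.single_att, inputs)]
        fused = self.multi_att(feats)
        if target_repr is not None:  # inject the target self-representation
            fused = fused + target_repr
        return self.decoder(fused)

# Toy usage: synthesize a missing modality from two available ones.
gen, ae = CollaborativeGenerator(), TargetAutoencoder()
x1, x2 = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
y = torch.randn(2, 1, 64, 64)          # target modality, available only in training
y_rec, z = ae(y)                       # self-representation z guides the fusion
fake_y = gen([x1, x2], target_repr=z)  # adversarial/reconstruction losses omitted

In this sketch the guidance is a simple additive injection of the autoencoder latent into the fused features; the paper's actual fusion of target-specific information may differ.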
Pages: 995-1010
Page count: 16
Related Papers
(50 records in total)
  • [41] MIA-Net: Multi-Modal Interactive Attention Network for Multi-Modal Affective Analysis
    Li, Shuzhen
    Zhang, Tong
    Chen, Bianna
    Chen, C. L. Philip
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2023, 14 (04) : 2796 - 2809
  • [42] Multi-modal Remote Sensing Image Description Based on Word Embedding and Self-Attention Mechanism
    Wang, Yuan
    Alifu, Kuerban
    Ma, Hongbing
    Li, Junli
    Halik, Umut
    Lv, Yalong
    2019 3RD INTERNATIONAL SYMPOSIUM ON AUTONOMOUS SYSTEMS (ISAS 2019), 2019, : 358 - 363
  • [44] Unsupervised multi-modal image translation based on the squeeze-and-excitation mechanism and feature attention module
HU Zhentao
    HU Chonghao
    YANG Haoran
    SHUAI Weiwei
    High Technology Letters, 2024, 30 (01) : 23 - 30
  • [46] Automatic Medical Image Report Generation with Multi-view and Multi-modal Attention Mechanism
    Yang, Shaokang
    Niu, Jianwei
    Wu, Jiyan
    Liu, Xuefeng
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2020, PT III, 2020, 12454 : 687 - 699
  • [47] Multi-modal semantic image segmentation
    Pemasiri, Akila
    Kien Nguyen
    Sridharan, Sridha
    Fookes, Clinton
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2021, 202
  • [48] Multi-modal Attention for Speech Emotion Recognition
    Pan, Zexu
    Luo, Zhaojie
    Yang, Jichen
    Li, Haizhou
    INTERSPEECH 2020, 2020, : 364 - 368
  • [49] A Multi-modal Attention System for Smart Environments
    Schauerte, B.
    Ploetz, T.
    Fink, G. A.
COMPUTER VISION SYSTEMS, PROCEEDINGS, 2009, 5815 : 73+
  • [50] A coupled autoencoder approach for multi-modal analysis of cell types
    Gala, Rohan
    Gouwens, Nathan
    Yao, Zizhen
    Budzillo, Agata
    Penn, Osnat
    Tasic, Bosiljka
    Murphy, Gabe
    Zeng, Hongkui
    Sumbul, Uygar
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32