Autoencoder-Based Collaborative Attention GAN for Multi-Modal Image Synthesis

Cited by: 6
|
Authors
Cao, Bing [1 ,2 ]
Cao, Haifang [1 ,3 ]
Liu, Jiaxu [1 ,3 ]
Zhu, Pengfei [1 ,3 ]
Zhang, Changqing [1 ,3 ]
Hu, Qinghua [1 ,3 ]
Affiliations
[1] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300403, Peoples R China
[2] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710000, Peoples R China
[3] Tianjin Univ, Haihe Lab Informat Technol Applicat Innovat, Tianjin 300403, Peoples R China
Keywords
Image synthesis; Collaboration; Task analysis; Generative adversarial networks; Feature extraction; Data models; Image reconstruction; Multi-modal image synthesis; collaborative attention; single-modal attention; multi-modal attention; TRANSLATION; NETWORK;
DOI
10.1109/TMM.2023.3274990
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Multi-modal images are required in a wide range of practical scenarios, from clinical diagnosis to public security. However, certain modalities may be incomplete or unavailable because of restricted imaging conditions, which commonly leads to decision bias in many real-world applications. Despite the significant advancement of existing image synthesis techniques, learning complementary information from multi-modal inputs remains challenging. To address this problem, we propose an autoencoder-based collaborative attention generative adversarial network (ACA-GAN) that uses available multi-modal images to generate the missing ones. The collaborative attention mechanism deploys a single-modal attention module and a multi-modal attention module to effectively extract complementary information from multiple available modalities. Considering the significant modal gap, we further develop an autoencoder network to extract the self-representation of the target modality, guiding the generative model to fuse target-specific information from multiple modalities. This considerably improves cross-modal consistency with the desired modality, thereby greatly enhancing the image synthesis performance. Quantitative and qualitative comparisons on various multi-modal image synthesis tasks highlight the superiority of our approach over several prior methods, demonstrating more precise and realistic results.
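The two-stage collaborative attention the abstract describes (a single-modal attention module refining each modality, followed by a multi-modal attention module fusing them) can be sketched schematically as below. This is a minimal NumPy illustration of the general idea only; the function names, the channel-mean attention, and the dot-product modality weighting are assumptions for illustration, not the paper's actual ACA-GAN implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def single_modal_attention(feat):
    """Reweight the channels of one modality's feature map (C, N)
    by their global average response (a simple channel attention)."""
    weights = softmax(feat.mean(axis=-1))          # (C,)
    return feat * weights[:, None]                 # (C, N)

def multi_modal_attention(feats):
    """Score each modality against the fused mean feature and
    return an attention-weighted combination across modalities."""
    stacked = np.stack(feats)                      # (M, C, N)
    fused = stacked.mean(axis=0)                   # (C, N)
    scores = np.array([(f * fused).sum() for f in feats])  # (M,)
    weights = softmax(scores)                      # (M,)
    return np.tensordot(weights, stacked, axes=1)  # (C, N)

def collaborative_attention(feats):
    """Single-modal refinement, then cross-modal fusion."""
    refined = [single_modal_attention(f) for f in feats]
    return multi_modal_attention(refined)

rng = np.random.default_rng(0)
feats = [rng.standard_normal((8, 16)) for _ in range(3)]   # 3 available modalities
out = collaborative_attention(feats)
print(out.shape)                                   # (8, 16)
```

In the paper's actual pipeline, the fused representation would further be conditioned on the autoencoder's self-representation of the target modality before being passed to the GAN generator; that step is omitted here.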
Pages: 995-1010
Page count: 16
Related Papers
50 records
  • [1] Autoencoder-Based Collaborative Filtering
    Ouyang, Yuanxin
    Liu, Wenqi
    Rong, Wenge
    Xiong, Zhang
    NEURAL INFORMATION PROCESSING, ICONIP 2014, PT III, 2014, 8836 : 284 - 291
  • [2] Cross-modal attention for multi-modal image registration
    Song, Xinrui
    Chao, Hanqing
    Xu, Xuanang
    Guo, Hengtao
    Xu, Sheng
    Turkbey, Baris
    Wood, Bradford J.
    Sanford, Thomas
    Wang, Ge
    Yan, Pingkun
    MEDICAL IMAGE ANALYSIS, 2022, 82
  • [3] Semantically Multi-modal Image Synthesis
    Zhu, Zhen
    Xu, Zhiliang
    You, Ansheng
    Bai, Xiang
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 5466 - 5475
  • [4] Autoencoder-based Image Companding
    Wicaksono, Alim H. P.
    Prasetyo, Heri
    Guo, Jing-Ming
    2020 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS - TAIWAN (ICCE-TAIWAN), 2020
  • [5] Multi-Modal MRI Image Synthesis via GAN With Multi-Scale Gate Mergence
    Zhan, Bo
    Li, Di
    Wu, Xi
    Zhou, Jiliu
    Wang, Yan
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2022, 26 (01) : 17 - 26
  • [6] Swin transformer-based GAN for multi-modal medical image translation
    Yan, Shouang
    Wang, Chengyan
    Chen, Weibo
    Lyu, Jun
    FRONTIERS IN ONCOLOGY, 2022, 12
  • [7] An efficient method for autoencoder-based collaborative filtering
    Wang, Yi-Lei
    Tang, Wen-Zhe
    Yang, Xian-Jun
    Wu, Ying-Jie
    Chen, Fu-Ji
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2019, 31 (23)
  • [8] Autoencoder-based holographic image restoration
    Shimobaba, Tomoyoshi
    Endo, Yutaka
    Hirayama, Ryuji
    Nagahama, Yuki
    Takahashi, Takayuki
    Nishitsuji, Takashi
    Kakue, Takashi
    Shiraki, Atsushi
    Takada, Naoki
    Masuda, Nobuyuki
    Ito, Tomoyoshi
    APPLIED OPTICS, 2017, 56 (13) : F27 - F30
  • [9] Multi-modal Sentence Summarization with Modality Attention and Image Filtering
    Li, Haoran
    Zhu, Junnan
    Liu, Tianshang
    Zhang, Jiajun
    Zong, Chengqing
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 4152 - 4158
  • [10] Variational Autoencoder-Based Multiple Image Captioning Using a Caption Attention Map
    Kim, Boeun
    Shin, Saim
    Jung, Hyedong
    APPLIED SCIENCES-BASEL, 2019, 9 (13)