Deep medical cross-modal attention hashing

Cited by: 0
Authors
Yong Zhang
Weihua Ou
Yufeng Shi
Jiaxin Deng
Xinge You
Anzhi Wang
Affiliations
[1] Guizhou Normal University, School of Big Data and Computer Science, School of Mathematics and Sciences
[2] Special Key Laboratory of Artificial Intelligence and Intelligent Control of Guizhou Province, School of Computer Science and Telecommunication Engineering
[3] Huazhong University of Science and Technology
Source
World Wide Web | 2022, Vol. 25
Keywords
Medical cross-modal retrieval; Recurrent attention; Hashing code; Discriminative representation learning
DOI
Not available
Abstract
Medical cross-modal retrieval aims to retrieve semantically similar medical instances across different modalities, such as retrieving X-ray images with radiology reports or retrieving radiology reports with X-ray images. The main challenges for medical cross-modal retrieval are the semantic gap and the small visual differences between different categories of medical images. To address these issues, we present a novel end-to-end deep hashing method, called Deep Medical Cross-Modal Attention Hashing (DMCAH), which extracts global features via global average pooling and local features via recurrent attention. Specifically, we recursively move from coarse to fine-grained regions of images to locate discriminative regions more accurately, and recursively extract the discriminative semantic information of texts from the sentence level down to the word level. Then, we select discriminative features by aggregating the finer features via adaptive attention. Finally, to reduce the semantic gap, we map image and report features into a common space and obtain discriminative hash codes. Comprehensive experimental results on the large-scale medical dataset MIMIC-CXR and the natural-scene dataset MS-COCO show that DMCAH achieves better performance than existing cross-modal hashing methods.
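
To make the abstract concrete, below is a minimal PyTorch sketch of a two-branch cross-modal hashing model with attention-based feature aggregation. It is an illustrative approximation, not the authors' DMCAH implementation: the feature dimensions, the single-step adaptive attention (standing in for the paper's recurrent coarse-to-fine attention), and the inner-product similarity loss are all assumptions chosen for brevity.

# Hedged sketch of attention-based cross-modal hashing (NOT the DMCAH code).
# Assumed inputs: precomputed local features per region (image) / token (text).
import torch
import torch.nn as nn
import torch.nn.functional as F

class HashingBranch(nn.Module):
    """Projects one modality's features to K-dim relaxed codes in [-1, 1]."""
    def __init__(self, in_dim: int, code_len: int):
        super().__init__()
        self.fc = nn.Linear(in_dim, code_len)

    def forward(self, x):
        return torch.tanh(self.fc(x))  # continuous relaxation of hash bits

class SimpleCrossModalHash(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=768, code_len=64):
        super().__init__()
        # One attention score per local feature; softmax makes weights sum to 1.
        self.img_attn = nn.Linear(img_dim, 1)
        self.txt_attn = nn.Linear(txt_dim, 1)
        self.img_hash = HashingBranch(img_dim, code_len)
        self.txt_hash = HashingBranch(txt_dim, code_len)

    @staticmethod
    def _attend(local_feats, attn_layer):
        # local_feats: (batch, num_locals, dim)
        weights = torch.softmax(attn_layer(local_feats), dim=1)  # (B, N, 1)
        attended = (weights * local_feats).sum(dim=1)            # local summary
        global_feat = local_feats.mean(dim=1)                    # global avg pool
        return attended + global_feat                            # fuse global+local

    def forward(self, img_regions, txt_tokens):
        img_code = self.img_hash(self._attend(img_regions, self.img_attn))
        txt_code = self.txt_hash(self._attend(txt_tokens, self.txt_attn))
        return img_code, txt_code

def similarity_loss(img_code, txt_code, sim):
    # sim[i, j] = 1 if image i and report j are semantically similar, else 0.
    inner = img_code @ txt_code.t() / img_code.size(1)  # scaled inner product
    return F.mse_loss(inner, 2 * sim - 1)  # pull matched pairs together

# Toy usage with random features standing in for CNN / text-encoder outputs.
model = SimpleCrossModalHash()
img = torch.randn(4, 49, 2048)  # e.g. a flattened 7x7 CNN feature map
txt = torch.randn(4, 30, 768)   # e.g. 30 token embeddings per report
sim = torch.eye(4)              # only each paired image/report match
img_code, txt_code = model(img, txt)
loss = similarity_loss(img_code, txt_code, sim)
binary_codes = torch.sign(img_code)  # hash codes used at retrieval time

At retrieval time, the continuous codes are binarized with sign() and candidates are ranked by Hamming distance between image and report codes; the paper's method additionally iterates the attention recursively, from coarse regions and sentences down to fine regions and words, rather than applying it once as above.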
Pages: 1519 - 1536
Page count: 17
Related Papers
50 records in total
  • [1] Deep medical cross-modal attention hashing
    Zhang, Yong
    Ou, Weihua
    Shi, Yufeng
    Deng, Jiaxin
    You, Xinge
    Wang, Anzhi
    [J]. WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS, 2022, 25 (04): 1519 - 1536
  • [2] Deep Cross-Modal Hashing
    Jiang, Qing-Yuan
    Li, Wu-Jun
    [J]. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017: 3270 - 3278
  • [3] Deep semantic hashing with dual attention for cross-modal retrieval
    Wu, Jiagao
    Weng, Weiwei
    Fu, Junxia
    Liu, Linfeng
    Hu, Bin
    [J]. NEURAL COMPUTING & APPLICATIONS, 2022, 34 (07): 5397 - 5416
  • [4] TEACH: Attention-Aware Deep Cross-Modal Hashing
    Yao, Hong-Lei
    Zhan, Yu-Wei
    Chen, Zhen-Duo
    Luo, Xin
    Xu, Xin-Shun
    [J]. PROCEEDINGS OF THE 2021 INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL (ICMR '21), 2021: 376 - 384
  • [5] A novel deep translated attention hashing for cross-modal retrieval
    Yu, Haibo
    Ma, Ran
    Su, Min
    An, Ping
    Li, Kai
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (18): 26443 - 26461
  • [6] Deep Cross-Modal Proxy Hashing
    Tu, Rong-Cheng
    Mao, Xian-Ling
    Tu, Rong-Xin
    Bian, Binbin
    Cai, Chengfei
    Wang, Hongfa
    Wei, Wei
    Huang, Heyan
    [J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (07): 6798 - 6810
  • [7] Semantic deep cross-modal hashing
    Lin, Qiubin
    Cao, Wenming
    He, Zhihai
    He, Zhiquan
    [J]. NEUROCOMPUTING, 2020, 396: 113 - 122
  • [8] Asymmetric Deep Cross-modal Hashing
    Gu, Jingzi
    Zhang, JinChao
    Lin, Zheng
    Li, Bo
    Wang, Weiping
    Meng, Dan
    [J]. COMPUTATIONAL SCIENCE - ICCS 2019, PT V, 2019, 11540: 41 - 54