Noise-robust Deep Cross-Modal Hashing

Cited by: 13
Authors
Wang, Runmin [1 ,2 ]
Yu, Guoxian [1 ,2 ]
Zhang, Hong [1 ]
Guo, Maozu [4 ]
Cui, Lizhen [2 ]
Zhang, Xiangliang [3 ]
Affiliations
[1] Southwest Univ, Coll Comp & Informat Sci, Chongqing, Peoples R China
[2] Shandong Univ, Sch Software, Jinan, Peoples R China
[3] King Abdullah Univ Sci & Technol, CEMSE, Thuwal, Saudi Arabia
[4] Beijing Univ Civil Engn & Architecture, Coll Elect & Informat Engn, Beijing, Peoples R China
Keywords
Cross-modal hashing; Noise labels; Deep learning; Feature similarity; Label similarity;
DOI
10.1016/j.ins.2021.09.030
Chinese Library Classification
TP [Automation technology; computer technology]
Subject classification code
0812
Abstract
Cross-modal hashing has been intensively studied to efficiently retrieve multi-modal data across modalities. Supervised cross-modal hashing methods leverage the labels of training data to improve the retrieval performance. However, most of these methods still assume that the semantic labels of training data are ideally complete and noise-free. This assumption is too optimistic for real multi-modal data, whose label annotations are, in essence, error-prone. To achieve effective cross-modal hashing on multi-modal data with noisy labels, we introduce an end-to-end solution called Noise-robust Deep Cross-modal Hashing (NrDCMH). NrDCMH contains two main components: a noise instance detection module and a hash code learning module. In the noise detection module, NrDCMH first detects noisy training instance pairs based on the margin between the label similarity and the feature similarity, and assigns weights to pairs using the margin. In the hash learning module, NrDCMH incorporates the weights into a likelihood loss function to reduce the impact of instances with noisy labels and to learn compatible deep features by applying different neural networks to multi-modal data in a unified end-to-end framework. Experimental results on multi-modal benchmark datasets demonstrate that NrDCMH performs significantly better than competitive methods with noisy label annotations. NrDCMH also achieves competitive results in 'noise-free' scenarios. (c) 2021 Elsevier Inc. All rights reserved.
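The pair-weighting idea described in the abstract can be sketched as follows: compare a label-based similarity against a cross-modal feature similarity, treat the margin between them as a noise indicator, and use it to down-weight suspicious pairs in a pairwise likelihood loss. This is a minimal NumPy sketch under stated assumptions; the cosine similarities, the exponential weighting scheme, and all function names here are illustrative, not the authors' exact formulation.

```python
import numpy as np

def cosine_sim(a, b):
    """Row-wise cosine similarity between two (n x d) matrices."""
    a_n = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-12)
    b_n = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-12)
    return a_n @ b_n.T

def pair_weights(labels, img_feat, txt_feat):
    """Down-weight pairs whose label similarity disagrees with feature similarity.

    A large margin |s_label - s_feat| suggests the annotation may be noisy,
    so the pair's weight decays toward zero (hypothetical scheme).
    """
    s_label = cosine_sim(labels, labels)     # similarity implied by annotations
    s_feat = cosine_sim(img_feat, txt_feat)  # similarity implied by features
    margin = np.abs(s_label - s_feat)        # disagreement margin in [0, 2]
    return np.exp(-margin)                   # weights in (0, 1]

def weighted_pairwise_loss(hash_img, hash_txt, labels, weights):
    """Weighted negative log-likelihood over pairwise code similarities (sketch)."""
    theta = 0.5 * hash_img @ hash_txt.T              # inner-product similarity
    s = (labels @ labels.T > 0).astype(float)        # 1 if a pair shares any label
    # Sigmoid-likelihood loss, weighted so noisy pairs contribute less.
    loss = weights * (np.log1p(np.exp(theta)) - s * theta)
    return loss.mean()
```

The weighting and loss would normally be computed inside the end-to-end training loop, with `img_feat`/`txt_feat` produced by the modality-specific networks; the loss form above follows the common sigmoid pairwise likelihood used in deep cross-modal hashing.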
Pages: 136-154
Number of pages: 19
Related papers
(50 in total)
  • [1] A Label Noise Robust Cross-Modal Hashing Approach
    Wang, Runmin
    Yang, Yuanlin
    Han, Guangyang
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, KSEM 2021, PT II, 2021, 12816 : 577 - 589
  • [2] Deep Cross-Modal Hashing
    Jiang, Qing-Yuan
    Li, Wu-Jun
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 3270 - 3278
  • [3] Cross-Lingual Cross-Modal Retrieval with Noise-Robust Learning
    Wang, Yabing
    Dong, Jianfeng
    Liang, Tianxiang
    Zhang, Minsong
    Cai, Rui
    Wang, Xun
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022,
  • [4] RGBT Tracking via Noise-Robust Cross-Modal Ranking
    Li, Chenglong
    Xiang, Zhiqiang
    Tang, Jin
    Luo, Bin
    Wang, Futian
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (09) : 5019 - 5031
  • [5] Deep Cross-Modal Proxy Hashing
    Tu, Rong-Cheng
    Mao, Xian-Ling
    Tu, Rong-Xin
    Bian, Binbin
    Cai, Chengfei
    Wang, Hongfa
    Wei, Wei
    Huang, Heyan
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (07) : 6798 - 6810
  • [6] Deep Lifelong Cross-Modal Hashing
    Xu, Liming
    Li, Hanqi
    Zheng, Bochuan
    Li, Weisheng
    Lv, Jiancheng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (12) : 13478 - 13493
  • [7] Semantic deep cross-modal hashing
    Lin, Qiubin
    Cao, Wenming
    He, Zhihai
    He, Zhiquan
    NEUROCOMPUTING, 2020, 396 : 113 - 122
  • [8] Asymmetric Deep Cross-modal Hashing
    Gu, Jingzi
    Zhang, JinChao
    Lin, Zheng
    Li, Bo
    Wang, Weiping
    Meng, Dan
    COMPUTATIONAL SCIENCE - ICCS 2019, PT V, 2019, 11540 : 41 - 54
  • [9] Cross-Modal Deep Variational Hashing
    Liong, Venice Erin
    Lu, Jiwen
    Tan, Yap-Peng
    Zhou, Jie
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 4097 - 4105
  • [10] Boosting deep cross-modal retrieval hashing with adversarially robust training
    Zhang, Xingwei
    Zheng, Xiaolong
    Mao, Wenji
    Zeng, Daniel Dajun
    APPLIED INTELLIGENCE, 2023, 53 (20) : 23698 - 23710