Deep online cross-modal hashing by a co-training mechanism

Cited by: 5
Authors
Xie, Yicai [1 ,2 ]
Zeng, Xianhua [1 ]
Wang, Tinghua [2 ]
Yi, Yun [2 ]
Xu, Liming [3 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Coll Comp Sci & Technol, Chongqing Key Lab Image Cognit, Chongqing 400065, Peoples R China
[2] Gannan Normal Univ, Sch Math & Comp Sci, Ganzhou 341000, Jiangxi, Peoples R China
[3] China West Normal Univ, Sch Comp Sci, Nanchong 637002, Sichuan, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Online hashing; Cross-modal retrieval; Online learning; Knowledge distillation; Deep learning
DOI
10.1016/j.knosys.2022.109888
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Batch-based cross-modal hashing retrieval methods have made great progress. However, they cannot be applied in scenarios where new data continuously arrives in a stream. To this end, a few online cross-modal hashing retrieval methods have been proposed, but they are based on shallow models, which may lead to suboptimal retrieval performance. Therefore, we propose Deep Online Cross-modal Hashing by a Co-training Mechanism (DOCHCM), which introduces deep learning to online cross-modal hashing by cooperatively training two sub-networks in two stages. DOCHCM addresses two problematic aspects. First, in each round, the image sub-network incrementally learns the hash codes of the current chunk of images by preserving the semantic similarity between their output features and the hash codes of all accumulated texts, while the text sub-network incrementally learns the hash codes of the current chunk of texts by preserving the semantic similarity between their output features and the hash codes of all accumulated images. Second, knowledge distillation is applied to the image and text sub-networks to avoid catastrophic forgetting, enabling the two sub-networks not only to learn new knowledge but also to retain old knowledge. Extensive experiments on three benchmark datasets demonstrate that DOCHCM outperforms state-of-the-art cross-modal hashing retrieval methods. (c) 2022 Elsevier B.V. All rights reserved.
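The sketch below is a minimal, hedged illustration of one online co-training round in the spirit of the abstract: each modality-specific sub-network is updated on the current data chunk with a similarity-preservation loss against the other modality's accumulated hash codes, plus a knowledge-distillation term against a frozen copy of itself from the previous round. All layer sizes, loss forms, weights, and names (make_subnet, sim_loss, distill_loss, lambda_kd) are illustrative assumptions, not the authors' actual DOCHCM implementation.

```python
# Hedged sketch of an online co-training round with knowledge distillation.
# Architectures, feature dimensions, and loss weights are assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

CODE_LEN = 32  # assumed hash-code length


def make_subnet(in_dim: int) -> nn.Module:
    # Small MLP standing in for a modality-specific deep sub-network.
    return nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                         nn.Linear(512, CODE_LEN), nn.Tanh())


def sim_loss(features: torch.Tensor, codes: torch.Tensor,
             sim: torch.Tensor) -> torch.Tensor:
    # Preserve semantic similarity between the current chunk's output
    # features and the accumulated hash codes of the other modality
    # (pairwise negative log-likelihood, as in many deep hashing losses).
    theta = 0.5 * features @ codes.t()
    return torch.mean(F.softplus(theta) - sim * theta)


def distill_loss(student_out: torch.Tensor,
                 teacher_out: torch.Tensor) -> torch.Tensor:
    # Knowledge distillation: keep the updated sub-network close to a
    # frozen copy from the previous round to limit catastrophic forgetting.
    return F.mse_loss(student_out, teacher_out)


def train_round(net, old_net, opt, chunk, other_codes, sim, lambda_kd=0.5):
    # One online round for a single modality: similarity preservation on
    # the new chunk plus a distillation penalty against the old network.
    net.train()
    out = net(chunk)
    with torch.no_grad():
        old_out = old_net(chunk)
    loss = sim_loss(out, other_codes, sim) + lambda_kd * distill_loss(out, old_out)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    img_net, txt_net = make_subnet(4096), make_subnet(1386)
    img_opt = torch.optim.Adam(img_net.parameters(), lr=1e-4)
    txt_opt = torch.optim.Adam(txt_net.parameters(), lr=1e-4)

    # Synthetic "current chunk" and accumulated codes of the other modality.
    img_chunk, txt_chunk = torch.randn(64, 4096), torch.randn(64, 1386)
    all_txt_codes = torch.sign(torch.randn(500, CODE_LEN))
    all_img_codes = torch.sign(torch.randn(500, CODE_LEN))
    sim_img = (torch.rand(64, 500) > 0.5).float()  # placeholder label similarity
    sim_txt = (torch.rand(64, 500) > 0.5).float()

    # Frozen copies from the previous round act as distillation teachers.
    img_teacher, txt_teacher = copy.deepcopy(img_net), copy.deepcopy(txt_net)

    l_img = train_round(img_net, img_teacher, img_opt, img_chunk, all_txt_codes, sim_img)
    l_txt = train_round(txt_net, txt_teacher, txt_opt, txt_chunk, all_img_codes, sim_txt)
    print(f"image-net loss {l_img:.4f}, text-net loss {l_txt:.4f}")
```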
Pages: 17
Related Papers
50 records in total
  • [1] Deep Cross-Modal Hashing
    Jiang, Qing-Yuan
    Li, Wu-Jun
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 3270 - 3278
  • [2] Online Discriminative Cross-Modal Hashing
    Kang, Xiao
    Liu, Xingbo
    Zhang, Xuening
    Nie, Xiushan
    Yin, Yilong
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (07) : 5242 - 5254
  • [3] Discrete online cross-modal hashing
    Zhan, Yu-Wei
    Wang, Yongxin
    Sun, Yu
    Wu, Xiao-Ming
    Luo, Xin
    Xu, Xin-Shun
    PATTERN RECOGNITION, 2022, 122
  • [4] Online deep hashing for both uni-modal and cross-modal retrieval
    Xie, Yicai
    Zeng, Xianhua
    Wang, Tinghua
    Yi, Yun
    INFORMATION SCIENCES, 2022, 608 : 1480 - 1502
  • [5] Deep Cross-Modal Proxy Hashing
    Tu, Rong-Cheng
    Mao, Xian-Ling
    Tu, Rong-Xin
    Bian, Binbin
    Cai, Chengfei
    Wang, Hongfa
    Wei, Wei
    Huang, Heyan
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (07) : 6798 - 6810
  • [6] Semantic deep cross-modal hashing
    Lin, Qiubin
    Cao, Wenming
    He, Zhihai
    He, Zhiquan
    NEUROCOMPUTING, 2020, 396 : 113 - 122
  • [7] Deep Lifelong Cross-Modal Hashing
    Xu, Liming
    Li, Hanqi
    Zheng, Bochuan
    Li, Weisheng
    Lv, Jiancheng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (12) : 13478 - 13493
  • [8] Asymmetric Deep Cross-modal Hashing
    Gu, Jingzi
    Zhang, JinChao
    Lin, Zheng
    Li, Bo
    Wang, Weiping
    Meng, Dan
    COMPUTATIONAL SCIENCE - ICCS 2019, PT V, 2019, 11540 : 41 - 54
  • [9] Cross-Modal Deep Variational Hashing
    Liong, Venice Erin
    Lu, Jiwen
    Tan, Yap-Peng
    Zhou, Jie
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 4097 - 4105