Deep online cross-modal hashing by a co-training mechanism

Cited by: 5
Authors
Xie, Yicai [1 ,2 ]
Zeng, Xianhua [1 ]
Wang, Tinghua [2 ]
Yi, Yun [2 ]
Xu, Liming [3 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Coll Comp Sci & Technol, Chongqing Key Lab Image Cognit, Chongqing 400065, Peoples R China
[2] Gannan Normal Univ, Sch Math & Comp Sci, Ganzhou 341000, Jiangxi, Peoples R China
[3] China West Normal Univ, Sch Comp Sci, Nanchong 637002, Sichuan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Online hashing; Cross-modal retrieval; Online learning; Knowledge distillation; Deep learning;
DOI
10.1016/j.knosys.2022.109888
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Batch-based cross-modal hashing retrieval methods have made great progress. However, they cannot be applied in scenarios where new data continuously arrives in a stream. To this end, a few online cross-modal hashing retrieval methods have been proposed, but they rely on shallow models, which may result in suboptimal retrieval performance. Therefore, we propose Deep Online Cross-modal Hashing by a Co-training Mechanism (DOCHCM), which introduces deep learning into online cross-modal hashing by cooperatively training two sub-networks in two stages. DOCHCM addresses two problematic aspects. First, in each round, the image sub-network incrementally learns the hash codes of the current chunk of images by preserving the semantic similarity between their output features and the hash codes of all texts; likewise, the text sub-network incrementally learns the hash codes of the current chunk of texts by preserving the semantic similarity between their output features and the hash codes of all images. Second, knowledge distillation is applied to the image and text sub-networks to avoid catastrophic forgetting, enabling both sub-networks to learn new knowledge without forgetting old knowledge. Extensive experiments on three benchmark datasets demonstrate that DOCHCM outperforms state-of-the-art cross-modal hashing retrieval methods. © 2022 Elsevier B.V. All rights reserved.
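To make the training scheme in the abstract concrete, the sketch below (our illustration, not the authors' released code) shows, in PyTorch, the two loss terms the abstract describes: a pairwise similarity-preserving loss between one modality's chunk features and the other modality's accumulated hash codes, and a distillation term against the sub-network's pre-update outputs. The negative-log-likelihood form of the similarity loss and the MSE distillation are common choices in this literature; the paper's exact formulations may differ.

# Illustrative sketch only (assumes PyTorch); the loss forms below are
# common choices, not necessarily the paper's exact formulations.
import torch
import torch.nn.functional as F

def similarity_loss(chunk_feats, db_codes, sim):
    # Pairwise likelihood loss: inner products between the current chunk's
    # continuous features and the other modality's database hash codes act
    # as logits for the 0/1 semantic similarity matrix `sim`.
    theta = chunk_feats @ db_codes.t() / 2
    return (F.softplus(theta) - sim * theta).mean()

def distillation_loss(new_feats, old_feats):
    # Keep the updated sub-network close to its pre-update (teacher)
    # outputs on old data, mitigating catastrophic forgetting.
    return F.mse_loss(new_feats, old_feats)

# Toy round: 8 new images, 100 texts already hashed, 16-bit codes.
img_feats = torch.randn(8, 16, requires_grad=True)  # image sub-network outputs
text_codes = torch.randn(100, 16).sign()            # accumulated text hash codes
sim = (torch.rand(8, 100) > 0.5).float()            # semantic similarity matrix
old_feats = torch.randn(8, 16)                      # frozen pre-update outputs

loss = similarity_loss(img_feats, text_codes, sim) \
     + distillation_loss(img_feats, old_feats)
loss.backward()

The symmetric text-side update would swap the roles of the two modalities, aligning text chunk features with the accumulated image hash codes, consistent with the co-training mechanism the abstract outlines.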
Pages: 17
Related Papers (50 total)
  • [41] Yu, Guoxian; Liu, Xuanwu; Wang, Jun; Domeniconi, Carlotta; Zhang, Xiangliang. Flexible Cross-Modal Hashing. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (01): 304-314
  • [42] Xu, Xing; Shen, Fumin; Yang, Yang; Shen, Heng Tao. Discriminant Cross-modal Hashing. ICMR'16: PROCEEDINGS OF THE 2016 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, 2016: 305-308
  • [43] Chen, Tian-yi; Zhang, Lan; Zhang, Shi-cong; Li, Zi-long; Huang, Bai-chuan. Extensible Cross-Modal Hashing. PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019: 2109-2115
  • [44] Liong, Venice Erin; Lu, Jiwen; Tan, Yap-Peng. Cross-Modal Discrete Hashing. PATTERN RECOGNITION, 2018, 79: 114-129
  • [45] Zheng, Hao; Wang, Jinbao; Zhen, Xiantong; Song, Jingkuan; Zheng, Feng; Lu, Ke; Qi, Guo-Jun. Continuous cross-modal hashing. PATTERN RECOGNITION, 2023, 142
  • [46] Cao, Yue; Liu, Bin; Long, Mingsheng; Wang, Jianmin. Cross-Modal Hamming Hashing. COMPUTER VISION - ECCV 2018, PT I, 2018, 11205: 207-223
  • [47] Li, Mingyong; Wang, Hongya. Unsupervised Deep Cross-Modal Hashing by Knowledge Distillation for Large-scale Cross-modal Retrieval. PROCEEDINGS OF THE 2021 INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL (ICMR '21), 2021: 183-191
  • [48] Zhong, Fangming; Chen, Zhikui; Min, Geyong. Deep Discrete Cross-Modal Hashing for Cross-Media Retrieval. PATTERN RECOGNITION, 2018, 83: 64-77
  • [49] Liu, Xiaoqing; Zeng, Huanqiang; Shi, Yifan; Zhu, Jianqing; Hsia, Chih-Hsien; Ma, Kai-Kuang. Deep Cross-Modal Hashing Based on Semantic Consistent Ranking. IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25: 9530-9542
  • [50] Lu, Kangkang; Yu, Yanhua; Liang, Meiyu; Zhang, Min; Cao, Xiaowen; Zhao, Zehua; Yin, Mengran; Xue, Zhe. Deep Unsupervised Momentum Contrastive Hashing for Cross-modal Retrieval. 2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023: 126-131