Deep Hashing Network for Efficient Similarity Retrieval

Cited: 0
Authors
Zhu, Han [1 ]
Long, Mingsheng [1 ]
Wang, Jianmin [1 ]
Cao, Yue [1 ]
Institution
[1] Tsinghua Univ, Sch Software, Tsinghua Natl Lab Informat Sci & Technol, Beijing, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China
Keywords
QUANTIZATION;
DOI
none available
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Due to their storage and retrieval efficiency, hashing methods have been widely deployed for approximate nearest neighbor search in large-scale multimedia retrieval. Supervised hashing, which improves the quality of hash coding by exploiting semantic similarity on data pairs, has received increasing attention recently. In most existing supervised hashing methods for image retrieval, an image is first represented as a vector of hand-crafted or machine-learned features, followed by a separate quantization step that generates binary codes. This may produce suboptimal hash coding, because the quantization error is not statistically minimized and the feature representation is not optimally compatible with the binary coding. In this paper, we propose a novel Deep Hashing Network (DHN) architecture for supervised hashing, in which we jointly learn a good image representation tailored to hash coding and formally control the quantization error. The DHN model comprises four key components: (1) a sub-network with multiple convolution-pooling layers to capture image representations; (2) a fully-connected hashing layer to generate compact binary hash codes; (3) a pairwise cross-entropy loss layer for similarity-preserving learning; and (4) a pairwise quantization loss for controlling hashing quality. Extensive experiments on standard image retrieval datasets show that the proposed DHN model yields substantial boosts over the latest state-of-the-art hashing methods.
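The two pairwise losses named in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the inner-product parameterization of the cross-entropy term and the smooth log-cosh surrogate for the quantization penalty are assumptions consistent with the abstract's description, and the function names are invented for this sketch.

```python
import numpy as np

def pairwise_cross_entropy(hi, hj, s):
    """Similarity-preserving loss for one pair of continuous codes.

    hi, hj: continuous hash codes (one per image); s: 1 if similar, 0 if not.
    Loss form: log(1 + exp(<hi, hj>)) - s * <hi, hj>, so similar pairs are
    pushed toward large inner products and dissimilar pairs toward small ones.
    """
    ip = np.dot(hi, hj)
    # np.logaddexp(0, ip) computes log(1 + exp(ip)) stably for large ip
    return np.logaddexp(0.0, ip) - s * ip

def quantization_loss(h):
    """Smooth surrogate penalty pushing each code entry toward {-1, +1}.

    log cosh(|h_k| - 1) is zero exactly when h_k is binary and grows
    smoothly as entries drift away, keeping the objective differentiable.
    """
    return np.sum(np.log(np.cosh(np.abs(h) - 1.0)))
```

For an already-binary code the quantization term vanishes, and for a pair of identical codes the cross-entropy loss is much smaller under the similar label than the dissimilar one, which matches the intended behavior.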
Pages: 2415 - 2421
Page count: 7
Related papers
50 in total
  • [21] DEEP PIECEWISE HASHING FOR EFFICIENT HAMMING SPACE RETRIEVAL
    Gu, Jingzi
    Wu, Dayan
    Fu, Peng
    Li, Bo
    Wang, Weiping
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 3199 - 3203
  • [22] SUPERVISED DEEP HASHING FOR EFFICIENT AUDIO EVENT RETRIEVAL
    Jati, Arindam
    Emmanouilidou, Dimitra
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 4497 - 4501
  • [23] Deep supervised fused similarity hashing for cross-modal retrieval
    Ng W.W.Y.
    Xu Y.
    Tian X.
    Wang H.
    Multimedia Tools and Applications, 2024, 83 (39) : 86537 - 86555
  • [24] Deep spatial attention hashing network for image retrieval
    Ge, Lin-Wei
    Zhang, Jun
    Xia, Yi
    Chen, Peng
    Wang, Bing
    Zheng, Chun-Hou
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2019, 63
  • [25] Scalable Multimedia Retrieval by Deep Learning Hashing with Relative Similarity Learning
    Gao, Lianli
    Song, Jingkuan
    Zou, Fuhao
    Zhang, Dongxiang
    Shao, Jie
    MM'15: PROCEEDINGS OF THE 2015 ACM MULTIMEDIA CONFERENCE, 2015, : 903 - 906
  • [26] Deep semantic similarity adversarial hashing for cross-modal retrieval
    Qiang, Haopeng
    Wan, Yuan
    Xiang, Lun
    Meng, Xiaojing
    NEUROCOMPUTING, 2020, 400 : 24 - 33
  • [27] Improved Deep Classwise Hashing With Centers Similarity Learning for Image Retrieval
    Zhang, Ming
    Yan, Hong
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 10516 - 10523
  • [28] Deep multi-similarity hashing via label-guided network for cross-modal retrieval
    Wu, Lei
    Qin, Qibing
    Hou, Jinkui
    Dai, Jiangyan
    Huang, Lei
    Zhang, Wenfeng
    NEUROCOMPUTING, 2025, 616
  • [29] DEEP HASHING WITH HASH CENTER UPDATE FOR EFFICIENT IMAGE RETRIEVAL
    Jose, Abin
    Filbert, Daniel
    Rohlfing, Christian
    Ohm, Jens-Rainer
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 4773 - 4777
  • [30] DEEP LEARNING BASED SUPERVISED HASHING FOR EFFICIENT IMAGE RETRIEVAL
    Viet-Anh Nguyen
    Do, Minh N.
    2016 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO (ICME), 2016