Deep Class-Wise Hashing: Semantics-Preserving Hashing via Class-Wise Loss

Cited by: 26
Authors
Zhe, Xuefei [1 ]
Chen, Shifeng [2 ]
Yan, Hong [1 ]
Affiliations
[1] City Univ Hong Kong, Dept Elect Engn, Hong Kong, Peoples R China
[2] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Training; Measurement; Semantics; Visualization; Optimization; Binary codes; Image retrieval; Deep convolutional neural network (CNN); deep supervised hashing; large-scale image retrieval; learning to hash; REPRESENTATION;
DOI
10.1109/TNNLS.2019.2921805
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep supervised hashing has emerged as an effective solution to large-scale semantic image retrieval in computer vision. Convolutional neural network (CNN)-based hashing methods typically rely on pairwise or triplet labels to conduct similarity-preserving learning. However, complex semantic concepts of visual content are hard to capture with similar/dissimilar labels, which limits retrieval performance. In general, pairwise and triplet losses not only incur expensive training costs but also lack sufficient semantic information. In this paper, we propose a novel deep supervised hashing model that learns more compact, class-level similarity-preserving binary codes. Our model is motivated by deep metric learning: it takes semantic labels directly as supervision during training and generates correspondingly discriminative hashing codes. Specifically, we propose a novel cubic constraint loss function based on the Gaussian distribution, which preserves semantic variation while penalizing the overlap between different classes in the embedding space. To address the discrete optimization problem introduced by binary codes, a two-step optimization strategy is proposed that provides efficient training and avoids gradient vanishing. Extensive experiments on five large-scale benchmark databases show that our model achieves state-of-the-art retrieval performance.
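The abstract's class-wise scheme can be sketched in a minimal form: each class is modeled by a center in the embedding space, an attraction term (shaped like a Gaussian log-likelihood exponent) pulls samples toward their own class center, a repulsion term penalizes centers whose distributions overlap, and a second quantization step converts the relaxed embeddings into binary codes. This is an illustrative reconstruction under those assumptions, not the paper's exact cubic constraint loss; the function names, the hinge-style overlap penalty, and the `margin`/`sigma` parameters are all hypothetical.

```python
import numpy as np

def class_wise_loss(embeddings, labels, centers, margin=2.0, sigma=1.0):
    """Hypothetical class-wise loss sketch.

    Attraction: squared distance of each embedding to its own class center,
    scaled like the exponent of a Gaussian with variance sigma^2.
    Repulsion: hinge penalty on pairs of class centers closer than `margin`,
    approximating a penalty on overlapping class distributions.
    """
    attract = np.sum((embeddings - centers[labels]) ** 2) / (2.0 * sigma ** 2)
    repel = 0.0
    k = len(centers)
    for i in range(k):
        for j in range(i + 1, k):
            d = np.linalg.norm(centers[i] - centers[j])
            repel += max(0.0, margin - d) ** 2
    return attract + repel

def binarize(embeddings):
    """Second step of a two-step scheme: quantize relaxed real-valued
    embeddings to {-1, +1} binary codes via the sign function."""
    return np.where(embeddings >= 0, 1, -1)
```

In a two-step training loop, the network would first be optimized on the continuous loss above (avoiding the zero gradient of the sign function), and only afterwards would `binarize` produce the retrieval codes.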
Pages: 1681-1695 (15 pages)
Related papers (50 total)
  • [1] Fast Class-Wise Updating for Online Hashing
    Lin, Mingbao
    Ji, Rongrong
    Sun, Xiaoshuai
    Zhang, Baochang
    Huang, Feiyue
    Tian, Yonghong
    Tao, Dacheng
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (05) : 2453 - 2467
  • [2] Deep top similarity hashing with class-wise loss for multi-label image retrieval
    Qin, Qibing
    Wei, Zhiqiang
    Huang, Lei
    Xie, Kezhen
    Zhang, Wenfeng
    NEUROCOMPUTING, 2021, 439 : 302 - 315
  • [3] Class-wise Deep Dictionary Learning
    Singhal, Vanika
    Khurana, Prerna
    Majumdar, Angshul
    2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017, : 1125 - 1132
  • [4] Kernel class-wise locality preserving projection
    Li, Jun-Bao
    Pan, Jeng-Shyang
    Chu, Shu-Chuan
    INFORMATION SCIENCES, 2008, 178 (07) : 1825 - 1835
  • [5] Class-wise Information Gain
    Zhang, Pengtao
    Tan, Ying
    2013 INTERNATIONAL CONFERENCE ON INFORMATION SCIENCE AND TECHNOLOGY (ICIST), 2013, : 972 - 978
  • [6] Class-wise Deep Dictionaries for EEG Classification
    Khurana, Prerna
    Majumdar, Angshul
    Ward, Rabab
    2016 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2016, : 3556 - 3563
  • [7] Class-wise and reduced calibration methods
    Panchenko, Michael
    Benmerzoug, Anes
    Delgado, Miguel de Benito
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 1093 - 1100
  • [8] GeoHard: Towards Measuring Class-wise Hardness through Modelling Class Semantics
    Cai, Fengyu
    Zhao, Xinran
    Zhang, Hongming
    Gurevych, Iryna
    Koeppl, Heinz
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 5571 - 5597
  • [9] Constrained class-wise feature selection (CCFS)
    Hussain, Syed Fawad
    Shahzadi, Fatima
    Munir, Badre
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2022, 13 (10) : 3211 - 3224