Deep Weibull hashing with maximum mean discrepancy quantization for image retrieval

Cited by: 8
Authors
Feng, Hao [1 ]
Wang, Nian [1 ]
Tang, Jun [1 ]
Affiliations
[1] Anhui Univ, Sch Elect & Informat Engn, Hefei 230601, Anhui, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Image retrieval; Hamming space; Hashing; Maximum mean discrepancy; REPRESENTATION; CODES;
DOI
10.1016/j.neucom.2021.08.090
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Hashing has become a promising technology for fast nearest neighbor retrieval in large-scale datasets owing to its low storage cost and fast retrieval speed. Most existing deep hashing approaches learn compact hash codes through pair-based deep metric learning, such as the triplet loss. However, these methods often treat intra-class and inter-class similarity as making equal contributions, which makes it difficult to assign larger weights to informative samples during training. Furthermore, imposing only a relative distance constraint increases the likelihood that similar pairs are clustered with a large average intra-class distance, which hinders learning a highly separable Hamming space. To tackle these issues, we put forward deep Weibull hashing with maximum mean discrepancy quantization (DWH), which jointly performs neighborhood structure optimization and error-minimizing quantization to learn high-quality hash codes in a unified framework. Specifically, DWH learns the desired neighborhood structure through a flexible pair similarity optimization strategy combined with a Weibull distribution-based constraint between anchors and their neighbors in Hamming space. More importantly, we design a maximum mean discrepancy quantization objective that preserves pairwise similarity when performing binary quantization. In addition, a class-level loss is introduced to mine the semantic structure of images using supervision information. Encouraging experimental results on various benchmark datasets demonstrate the efficacy of the proposed DWH. (C) 2021 Elsevier B.V. All rights reserved.
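The abstract only sketches the maximum mean discrepancy (MMD) quantization idea, so the snippet below is an illustrative Python/PyTorch sketch rather than the authors' implementation: it assumes a Gaussian-kernel MMD computed between a batch of continuous hash outputs and their sign-binarized counterparts, and every name here (gaussian_kernel, mmd_quantization_loss, sigma, the toy 32-bit codes) is a hypothetical choice, not taken from the paper.

    # Illustrative sketch only: Gaussian-kernel MMD^2 between continuous codes
    # and their {-1, +1} binarizations; the paper's exact kernel, weighting,
    # and combination with the other losses are not specified in this abstract.
    import torch

    def gaussian_kernel(x, y, sigma=1.0):
        # Pairwise Gaussian (RBF) kernel matrix between the rows of x and y.
        squared_dist = torch.cdist(x, y, p=2).pow(2)
        return torch.exp(-squared_dist / (2.0 * sigma ** 2))

    def mmd_quantization_loss(continuous_codes, sigma=1.0):
        # Biased MMD^2 estimate between continuous codes and binary targets.
        binary_codes = torch.sign(continuous_codes).detach()
        k_cc = gaussian_kernel(continuous_codes, continuous_codes, sigma).mean()
        k_bb = gaussian_kernel(binary_codes, binary_codes, sigma).mean()
        k_cb = gaussian_kernel(continuous_codes, binary_codes, sigma).mean()
        return k_cc + k_bb - 2.0 * k_cb

    if __name__ == "__main__":
        # Toy batch of 8 continuous 32-bit hash outputs squashed into (-1, 1).
        features = torch.randn(8, 32, requires_grad=True)
        codes = torch.tanh(features)
        loss = mmd_quantization_loss(codes)
        loss.backward()
        print(f"MMD quantization loss: {loss.item():.4f}")

Minimizing a term of this form pulls the continuous outputs toward binary values while comparing whole batches of codes rather than penalizing each element independently, which is one plausible way to preserve pairwise similarity during binarization; the objective actually used in DWH may differ in its details.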
Pages: 95 - 106
Number of pages: 12
Related Papers
50 records in total
  • [31] Deep Multi-Label Hashing for Image Retrieval
    Zhong, Xian
    Li, Jiachen
    Huang, Wenxin
    Xie, Liang
    2019 IEEE 31ST INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2019), 2019, : 1245 - 1251
  • [32] Deep Top Similarity Preserving Hashing for Image Retrieval
    Li, Qiang
    Fu, Haiyan
    Kong, Xiangwei
    IMAGE AND GRAPHICS (ICIG 2017), PT II, 2017, 10667 : 206 - 215
  • [33] Unsupervised deep hashing with node representation for image retrieval
    Wang, Yangtao
    Song, Jingkuan
    Zhou, Ke
    Liu, Yu
    Elsevier Ltd (112)
  • [34] A novel deep hashing method for fast image retrieval
    Cheng, Shuli
    Lai, Huicheng
    Wang, Liejun
    Qin, Jiwei
    VISUAL COMPUTER, 2019, 35 (09): : 1255 - 1266
  • [35] An Efficient Supervised Deep Hashing Method for Image Retrieval
    Hussain, Abid
    Li, Heng-Chao
    Ali, Muqadar
    Wali, Samad
    Hussain, Mehboob
    Rehman, Amir
    ENTROPY, 2022, 24 (10)
  • [37] Deep Hashing for Large-scale Image Retrieval
    Li Mengting
    Liu Jun
    PROCEEDINGS OF THE 36TH CHINESE CONTROL CONFERENCE (CCC 2017), 2017, : 10940 - 10944
  • [38] Deep Self-Adaptive Hashing for Image Retrieval
    Lin, Qinghong
    Chen, Xiaojun
    Zhang, Qin
    Tian, Shangxuan
    Chen, Yudong
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 1028 - 1037
  • [39] Deep collaborative graph hashing for discriminative image retrieval
    Zhang, Zheng
    Wang, Jianning
    Zhu, Lei
    Luo, Yadan
    Lu, Guangming
    PATTERN RECOGNITION, 2023, 139
  • [40] Deep internally connected transformer hashing for image retrieval
    Chao, Zijian
    Cheng, Shuli
    Li, Yongming
    KNOWLEDGE-BASED SYSTEMS, 2023, 279