Leveraging Deep Features Enhance and Semantic-Preserving Hashing for Image Retrieval

Cited: 1
Authors
Zhao, Xusheng [1 ]
Liu, Jinglei [1 ]
Affiliations
[1] Yantai Univ, Sch Comp & Control Engn, Yantai 264000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
image retrieval; deep hashing; convolutional neural networks; contrastive loss function; binary codes; BACKPROPAGATION;
DOI
10.3390/electronics11152391
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Hashing methods convert high-dimensional data into compact binary codes, offering fast retrieval and low storage cost in large-scale image retrieval, and are therefore favored by an increasing number of people. However, traditional hashing methods have two common shortcomings that hurt retrieval accuracy. First, most of them extract many irrelevant image features, so the binary codes they produce carry partial information bias. Second, their binary codes fail to preserve the semantic similarity between images. To address these two problems, we propose a new network architecture that adds a feature enhancement layer to better extract image features and remove redundant ones, and that expresses the similarity between images through a contrastive loss, thereby constructing compact and accurate binary codes. In summary, we model the relationship between labels and image features to better preserve semantic relationships and reduce redundant features; we use a contrastive loss to compare the similarity between images; and we use a balance loss to equalize the numbers of 0s and 1s in the resulting binary code, yielding a more compact code. Extensive experiments on three commonly used datasets (CIFAR-10, NUS-WIDE, and SVHN) show that our approach (DFEH) performs well compared with other state-of-the-art approaches.
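The abstract combines a contrastive loss (similar images get nearby codes, dissimilar images get codes at least a margin apart) with a balance loss that equalizes the numbers of 0s and 1s per code. The following is a minimal numerical sketch of that idea, not the authors' implementation: the function names, the margin value, and the use of relaxed real-valued codes in [-1, 1] before binarization are all illustrative assumptions.

```python
import numpy as np

def contrastive_loss(h1, h2, similar, margin=2.0):
    """Pairwise contrastive loss on relaxed hash outputs (sketch):
    similar pairs are pulled together; dissimilar pairs are pushed
    at least `margin` apart in Euclidean distance."""
    d = np.linalg.norm(h1 - h2)
    if similar:
        return 0.5 * d ** 2
    return 0.5 * max(margin - d, 0.0) ** 2

def balance_loss(h):
    """Balance term (sketch): penalizes a nonzero mean so that, after
    binarization, the code has roughly equal numbers of 0s and 1s
    (equivalently, +1s and -1s)."""
    return float(np.mean(h) ** 2)

# Two similar images with nearly identical relaxed 8-bit codes.
a = np.array([0.9, -0.8, 0.7, -0.9, 0.8, -0.7, 0.9, -0.8])
b = np.array([0.8, -0.9, 0.6, -0.8, 0.9, -0.6, 0.8, -0.9])

print(contrastive_loss(a, b, similar=True))    # small: the codes agree
print(contrastive_loss(a, b, similar=False))   # large: pair is inside the margin
print(balance_loss(a))                         # near zero: bits are balanced
code = np.sign(a)                              # final binary code in {-1, +1}
```

In training, both terms would be summed over mini-batch pairs and backpropagated through the network; here they are shown on fixed vectors only to make the geometry of the two losses concrete.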
Pages: 16
Related Papers
50 in total
  • [31] Yang, Zhan; Yang, Liu; Huang, Wenti; Sun, Longzhi; Long, Jun. Enhanced Deep Discrete Hashing with semantic-visual similarity for image retrieval. INFORMATION PROCESSING & MANAGEMENT, 2021, 58 (05)
  • [32] Gong, Xiaolong; Huang, Linpeng; Wang, Fuwei. Fusing Semantic Prior Based Deep Hashing Method for Fuzzy Image Retrieval. PRICAI 2018: TRENDS IN ARTIFICIAL INTELLIGENCE, PT I, 2018, 11012 : 402 - 415
  • [33] Zhao, Fang; Huang, Yongzhen; Wang, Liang; Tan, Tieniu. Deep Semantic Ranking Based Hashing for Multi-Label Image Retrieval. 2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2015, : 1556 - 1564
  • [34] Qin, Qibing; Huang, Lei; Wei, Zhiqiang; Xie, Kezhen; Zhang, Wenfeng. Unsupervised Deep Multi-Similarity Hashing With Semantic Structure for Image Retrieval. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31 (07) : 2852 - 2865
  • [35] Wang, Yunbo; Ou, Xianfeng; Liang, Jian; Sun, Zhenan. Deep Semantic Reconstruction Hashing for Similarity Retrieval. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31 (01) : 387 - 400
  • [36] Lu, Xiaoqiang; Zheng, Xiangtao; Li, Xuelong. Latent Semantic Minimal Hashing for Image Retrieval. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2017, 26 (01) : 355 - 368
  • [37] Zhu, Songhao; Jin, Dongliang; Liang, Zhiwei; Wang, Qiang; Sun, Yajie; Xu, Guozheng. Integration of semantic and visual hashing for image retrieval. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2017, 44 : 229 - 235
  • [38] Zhai, Hongjia; Lai, Shenqi; Jin, Hanyang; Qian, Xueming; Mei, Tao. Deep Transfer Hashing for Image Retrieval. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31 (02) : 742 - 753
  • [39] Zhou, Meng; Zeng, Xianhua; Chen, Aozhu. Deep forest hashing for image retrieval. PATTERN RECOGNITION, 2019, 95 : 114 - 127
  • [40] Song, Ge; Tan, Xiaoyang. Hierarchical deep hashing for image retrieval. Frontiers of Computer Science, 2017, 11 : 253 - 265