Global-view hashing: harnessing global relations in near-duplicate video retrieval

Cited by: 0
Authors
Weizhen Jing
Xiushan Nie
Chaoran Cui
Xiaoming Xi
Gongping Yang
Yilong Yin
Affiliations
[1] Shandong University,School of Computer Science and Technology
[2] Shandong University of Finance and Economics,School of Computer Science and Technology
Source
World Wide Web | 2019, Vol. 22
Keywords
Video hashing; Near-duplicate video retrieval; Global view; Multi-bit learning;
DOI
Not available
Abstract
Multi-view features are often used in video hashing for near-duplicate video retrieval because of their mutual assistance and complementarity. However, most methods consider only the locally available information in multiple features, such as individual or pairwise structural relations, and thus do not fully exploit the dependent nature of multiple features. We therefore propose a global-view hashing (GVH) framework to address this issue; our framework harnesses the global relations among samples characterized by multiple features. In the proposed framework, the multiple features of all videos are used jointly to explore a common Hamming space, where the hash functions are obtained by comprehensively exploiting the relations among both intra-view and inter-view objects. In addition, the hash function obtained from the proposed GVH can learn multi-bit hash codes in a single iteration. Compared with existing video hashing schemes, GVH not only considers the relations globally to achieve more precise retrieval with short hash codes but also performs multi-bit learning in a single iteration. We conduct extensive experiments on the CC_WEB_VIDEO and UQ_VIDEO datasets, and the results show that the proposed method outperforms state-of-the-art methods. As a side contribution, we will release the code to facilitate further research.
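To make the abstract's setup concrete, the following is a minimal illustrative sketch (NOT the authors' GVH algorithm) of the general multi-view hashing idea it builds on: features from several views of the same videos are mapped into one shared k-bit Hamming space, with all k bits produced in a single pass. The variable names, dimensions, and the random projection used in place of learned hash functions are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2, k = 6, 8, 5, 16          # videos, per-view dims, code length

X1 = rng.normal(size=(n, d1))       # view 1 features (e.g., color histograms)
X2 = rng.normal(size=(n, d2))       # view 2 features (e.g., texture)

# "Global" representation: all views of all samples considered jointly,
# rather than hashing each view in isolation
X = np.hstack([X1, X2])             # shape (n, d1 + d2)

# Stand-in for learned hash functions: one projection per bit, so all
# k bits of every code are obtained at once (multi-bit in one pass)
W = rng.normal(size=(d1 + d2, k))
B = np.sign(X @ W)                  # (n, k) binary codes in {-1, +1}

# Retrieval then reduces to Hamming distance between codes
def hamming(b_i, b_j):
    return int(np.sum(b_i != b_j))

print(hamming(B[0], B[1]))
```

In GVH proper, the projections are learned so that intra-view and inter-view relations among all samples are preserved in the common Hamming space; the random W above only stands in for that learned mapping.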
Pages: 771–789
Page count: 18