ContextDesc: Local Descriptor Augmentation with Cross-Modality Context

Cited by: 136
Authors
Luo, Zixin [1 ]
Shen, Tianwei [1 ]
Zhou, Lei [1 ]
Zhang, Jiahui [2 ]
Yao, Yao [1 ]
Li, Shiwei [1 ]
Fang, Tian [3 ]
Quan, Long [1 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
[2] Tsinghua Univ, Beijing, Peoples R China
[3] Shenzhen Zhuke Innovat Technol Altizure, Shenzhen, Peoples R China
Keywords
DOI
10.1109/CVPR.2019.00263
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Most existing studies on learning local features focus on patch-based descriptions of individual keypoints, while neglecting the spatial relations established by their keypoint locations. In this paper, we go beyond local detail representation by introducing context awareness to augment off-the-shelf local feature descriptors. Specifically, we propose a unified learning framework that leverages and aggregates cross-modality contextual information, including (i) visual context from a high-level image representation, and (ii) geometric context from the 2D keypoint distribution. Moreover, we propose an effective N-pair loss that eschews the empirical hyper-parameter search and improves convergence. The proposed augmentation scheme is lightweight compared with the raw local feature description, while delivering remarkable improvements on several large-scale benchmarks with diversified scenes, demonstrating both strong practicality and generalization ability in geometric matching applications.
Pages: 2522-2531
Number of pages: 10
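
The abstract mentions an N-pair loss that avoids empirical hyper-parameter search. As a rough illustration of the general idea only (not the paper's exact formulation), the following is a minimal PyTorch sketch of a symmetric N-pair loss, assuming a batch of N matched descriptor pairs in which every non-matching row acts as a negative; the function name and input shapes are hypothetical.

```python
# Minimal sketch of a generic, margin-free N-pair loss for descriptor learning.
# This is an illustration of the N-pair idea, not the exact loss used in ContextDesc.
import torch
import torch.nn.functional as F


def n_pair_loss(desc_a: torch.Tensor, desc_b: torch.Tensor) -> torch.Tensor:
    """Symmetric N-pair loss over N matched descriptor pairs.

    desc_a, desc_b: (N, D) L2-normalized descriptors where row i of desc_a
    matches row i of desc_b; all other rows serve as in-batch negatives.
    """
    sim = desc_a @ desc_b.t()                         # (N, N) similarity matrix
    targets = torch.arange(sim.size(0), device=sim.device)
    # Each anchor must pick out its true match among all N candidates,
    # posed as N-way classification in both matching directions.
    loss_ab = F.cross_entropy(sim, targets)
    loss_ba = F.cross_entropy(sim.t(), targets)
    return 0.5 * (loss_ab + loss_ba)


if __name__ == "__main__":
    # Toy usage with random descriptors (hypothetical sizes: 256 pairs, 128-D).
    a = F.normalize(torch.randn(256, 128), dim=1)
    b = F.normalize(torch.randn(256, 128), dim=1)
    print(n_pair_loss(a, b).item())
```

Because the positive pair only has to score higher than the in-batch negatives under a softmax, this style of loss needs no hand-tuned margin, which is the property the abstract highlights.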