DOMAIN DISTRIBUTION ALIGNMENT FOR BOOSTING MULTI-MODAL REMOTE SENSING IMAGE MATCHING

Cited: 0
Authors
Wang, Zhe [1 ]
Quan, Dou [1 ]
Lv, Chonghua [1 ]
Guo, Yanhe [1 ]
Wang, Shuang [1 ]
Gu, Yu [1 ]
Jiao, Licheng [1 ]
Affiliations
[1] Xidian Univ, Sch Artificial Intelligence, Xian, Shaanxi, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
image patch matching; multi-modal images; content difference; domain distribution; descriptor learning;
DOI
10.1109/IGARSS52108.2023.10282123
CLC Classification Number
P [Astronomy, Earth Sciences];
Subject Classification Code
07;
Abstract
Multi-modal images provide complementary and rich information and are therefore widely used in various applications. However, because different sensors have different imaging mechanisms, there are significant domain distribution differences between multi-modal images. In multi-modal image matching, existing deep learning methods must handle both the image content differences caused by rotation transformations and the domain distribution differences caused by different sensors, which is very difficult for a deep network. To address this issue, we propose to combine an instance comparison and a batch comparison to handle image content differences and domain distribution differences, respectively. We design a new domain distribution alignment method that, through a domain distribution alignment loss, explicitly constrains the sample domain distributions of the multi-modal images to be consistent. Extensive multi-modal remote sensing image patch matching experiments demonstrate the effectiveness of the proposed method. Furthermore, the proposed domain distribution alignment method shows more pronounced advantages when the content differences and distribution differences are significant.
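As a rough illustration of the two components described in the abstract, the PyTorch-style sketch below pairs an instance-level matching loss with a batch-level distribution alignment term. The statistic-matching form of the alignment loss, the triplet-style instance loss, the loss weight, and all names (domain_alignment_loss, instance_matching_loss, feat_opt, feat_sar) are illustrative assumptions; the paper's actual formulation is not reproduced here.

    import torch
    import torch.nn.functional as F

    def domain_alignment_loss(feat_a, feat_b):
        # Batch comparison (assumed form): align batch-level feature statistics
        # (mean and std) of the two modalities so their descriptor
        # distributions become consistent.
        mean_a, mean_b = feat_a.mean(dim=0), feat_b.mean(dim=0)
        std_a, std_b = feat_a.std(dim=0), feat_b.std(dim=0)
        return F.mse_loss(mean_a, mean_b) + F.mse_loss(std_a, std_b)

    def instance_matching_loss(feat_a, feat_b, margin=1.0):
        # Instance comparison (assumed triplet-style form): pull matching
        # cross-modal pairs together, push the hardest in-batch non-matching
        # pair apart.
        dists = torch.cdist(feat_a, feat_b)           # (N, N) pairwise distances
        pos = dists.diag()                            # matching pairs on the diagonal
        masked = dists + torch.eye(len(dists), device=dists.device) * 1e6
        neg = masked.min(dim=1).values                # hardest negative per anchor
        return F.relu(pos - neg + margin).mean()

    # Toy usage with random descriptors standing in for network outputs
    # from optical and SAR patches; the 0.1 weight is an arbitrary choice.
    feat_opt = F.normalize(torch.randn(32, 128), dim=1)
    feat_sar = F.normalize(torch.randn(32, 128), dim=1)
    loss = instance_matching_loss(feat_opt, feat_sar) \
           + 0.1 * domain_alignment_loss(feat_opt, feat_sar)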
Pages: 6065 - 6068
Page count: 4