MIFNet: Multi-Scale Interaction Fusion Network for Remote Sensing Image Change Detection
Cited: 0
Authors: Xie, Weiying [1]; Shao, Wenjie [1]; Li, Daixun [1]; Li, Yunsong [1]; Fang, Leyuan [2]
Affiliations:
[1] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[2] Hunan Univ, Coll Elect & Informat Engn, Changsha 410082, Peoples R China
Funding: National Natural Science Foundation of China
Keywords: Feature extraction; Remote sensing; Data mining; Semantics; Attention mechanisms; Transformers; Cross layer design; Circuits and systems; Accuracy; Fuses; Change detection; remote sensing; attention; convolutional neural networks; multi-scale
DOI: 10.1109/TCSVT.2024.3494820
CLC classification: TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline codes: 0808; 0809
Abstract:
Change Detection (CD) is a crucial and challenging task in remote sensing observation. Despite the remarkable progress driven by deep learning in remote sensing change detection, several challenges remain regarding global information representation and efficient interaction. The traditional Siamese structure extracts features from bitemporal images with a weight-sharing network and generates a change map, but it often neglects the interaction information between the two temporal phases. Additionally, multi-scale feature fusion methods frequently adopt FPN-like structures, leading to lossy cross-layer information transmission and hindering the effective utilization of features. To address these issues, we propose a multi-scale interaction fusion network (MIFNet) that fuses bitemporal features at an early stage and uses deep supervision to guide the early fused features toward an abundant semantic representation of changes; we also construct a dual complementary attention (DCA) module to capture temporal information. Furthermore, we introduce a collection-allocation fusion mechanism that, unlike previous layer-by-layer fusion methods, collects global information and embeds it into features at different levels, achieving effective cross-layer information transmission and promoting global semantic feature representation. Extensive experiments demonstrate that our method achieves competitive results on the LEVIR-CD+ dataset and outperforms other advanced methods on the LEVIR-CD and SYSU-CD datasets, improving F1 by 0.96% and 0.61%, respectively, compared to the most advanced models.
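Code sketch (not from the paper): the abstract describes the architecture only at a high level, so the following is a minimal, hypothetical PyTorch sketch of the three ideas it names, namely early fusion of the bitemporal images, a dual complementary (channel plus spatial) attention, and a collect-then-allocate multi-scale fusion. All class names, layer choices, and shapes are illustrative assumptions and do not reproduce the authors' MIFNet implementation.

# Hypothetical sketch of the abstract's ideas; module names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualComplementaryAttention(nn.Module):
    """Channel and spatial attention used as complementary re-weighting branches."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ca = self.channel_mlp(x)                          # (B, C, 1, 1) channel weights
        sa = self.spatial_conv(
            torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        )                                                 # (B, 1, H, W) spatial weights
        return x * ca + x * sa                            # complementary combination

class CollectAllocate(nn.Module):
    """Collect global context from all scales, then allocate it back to each level."""
    def __init__(self, channels: int, num_levels: int):
        super().__init__()
        self.collect = nn.Conv2d(channels * num_levels, channels, 1)
        self.allocate = nn.ModuleList(
            [nn.Conv2d(channels * 2, channels, 1) for _ in range(num_levels)]
        )

    def forward(self, feats):
        # Collect: resize every level to the coarsest resolution and fuse globally.
        target = feats[-1].shape[-2:]
        pooled = [F.adaptive_avg_pool2d(f, target) for f in feats]
        global_ctx = self.collect(torch.cat(pooled, dim=1))
        # Allocate: broadcast the global context back into every level.
        outs = []
        for f, proj in zip(feats, self.allocate):
            ctx = F.interpolate(global_ctx, size=f.shape[-2:], mode="bilinear",
                                align_corners=False)
            outs.append(proj(torch.cat([f, ctx], dim=1)))
        return outs

class EarlyFusionCD(nn.Module):
    """Toy change-detection model: fuse bitemporal inputs early, attend, fuse scales."""
    def __init__(self, in_ch: int = 3, width: int = 32, num_levels: int = 3):
        super().__init__()
        self.stem = nn.Conv2d(in_ch * 2, width, 3, padding=1)   # early fusion by concat
        self.stages = nn.ModuleList(
            [nn.Conv2d(width, width, 3, stride=2, padding=1) for _ in range(num_levels)]
        )
        self.attn = DualComplementaryAttention(width)
        self.fuse = CollectAllocate(width, num_levels)
        self.head = nn.Conv2d(width, 1, 1)                      # change-probability map

    def forward(self, t1: torch.Tensor, t2: torch.Tensor) -> torch.Tensor:
        x = self.stem(torch.cat([t1, t2], dim=1))
        feats = []
        for stage in self.stages:
            x = F.relu(stage(x))
            feats.append(self.attn(x))
        fused = self.fuse(feats)
        out = self.head(fused[0])                               # predict at finest level
        return F.interpolate(out, size=t1.shape[-2:], mode="bilinear",
                             align_corners=False)

# Usage: two 256x256 RGB images from different dates produce a full-size change map.
if __name__ == "__main__":
    model = EarlyFusionCD()
    t1, t2 = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
    print(model(t1, t2).shape)  # torch.Size([1, 1, 256, 256])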
Pages: 2725-2739
Page count: 15