Change detection (CD) is a crucial and challenging task in remote sensing observation. Despite the remarkable progress driven by deep learning in remote sensing change detection, challenges remain in global information representation and efficient feature interaction. The traditional Siamese structure extracts features from bitemporal images with a weight-sharing network and generates a change map, but it often neglects the temporal interaction information between the two images. In addition, multi-scale feature fusion methods frequently adopt FPN-like structures, which cause lossy cross-layer information transmission and hinder the effective utilization of features. To address these issues, we propose a multi-scale interaction fusion network (MIFNet) that fuses bitemporal features at an early stage and uses deep supervision to guide the early fused features toward a rich semantic representation of changes. We also construct a dual complementary attention (DCA) module to capture temporal information. Furthermore, we introduce a collection-allocation fusion mechanism that, unlike previous layer-by-layer fusion methods, collects global information and embeds it into features at different levels, achieving effective cross-layer information transmission and promoting global semantic feature representation. Extensive experiments demonstrate that our method achieves competitive results on the LEVIR-CD+ dataset and outperforms state-of-the-art methods on the LEVIR-CD and SYSU-CD datasets, improving F1 by 0.96% and 0.61%, respectively, over the strongest existing models.
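As context for the baseline the abstract critiques, the following is a minimal, hypothetical sketch of the traditional Siamese pipeline: one set of shared weights extracts features from both temporal images independently, and the images interact only at the end through an absolute difference. The single linear projection and toy array sizes are illustrative assumptions standing in for a deep backbone, not MIFNet's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared weights: the SAME projection is applied to both temporal images,
# mimicking the weight-sharing backbone of a Siamese network (toy stand-in).
W = rng.standard_normal((3, 8))

def extract(img):
    """Project per-pixel channels with the shared weights (no cross-image interaction)."""
    h, w, c = img.shape
    return img.reshape(h * w, c) @ W  # (H*W, 8) feature map

# Toy bitemporal RGB patches (sizes are illustrative).
img_t1 = rng.random((4, 4, 3))
img_t2 = rng.random((4, 4, 3))

f1 = extract(img_t1)
f2 = extract(img_t2)

# Interaction happens only here, at the very end, via an absolute difference --
# the late comparison that early-fusion designs such as MIFNet aim to avoid.
change_map = np.abs(f1 - f2).mean(axis=1).reshape(4, 4)
print(change_map.shape)
```

Because the two branches never exchange information during extraction, any temporal interaction cue must survive until this final difference step, which motivates fusing the bitemporal features earlier in the network.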