Accurate remote sensing change detection (RSCD) relies on comprehensively processing multiscale information, from local details to global dependencies. Hybrid models based on convolutional neural networks (CNNs) and Transformers have become mainstream approaches in RSCD due to their complementary strengths in local feature extraction and long-range dependency modeling. However, the Transformer faces an application bottleneck due to the quadratic complexity of its attention mechanism. In recent years, state-space models (SSMs) with efficient, hardware-aware designs, represented by Mamba, have attracted widespread attention for their strong performance in long-sequence modeling, offering advantages in accuracy, memory consumption, and computational cost. Given how well the efficiency of SSMs in long-sequence processing matches the requirements of RSCD, this study explores their potential for this task. However, relying on SSMs alone is insufficient for recognizing fine-grained features in remote sensing images. To this end, we propose a novel hybrid architecture, ConMamba, which constructs a high-performance hybrid encoder (CS-Hybridizer) by deeply integrating CNN and SSM components through a feature interaction module (FIM). In addition, we introduce a spatial integration module (SIM) in the feature reconstruction stage to further enhance the model's ability to integrate complex contextual information. Extensive experiments on three publicly available RSCD datasets show that ConMamba significantly outperforms existing techniques on several performance metrics, validating the effectiveness and promise of hybrid CNN-SSM architectures for RSCD.
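To make the hybrid CNN-SSM idea concrete, the following is a minimal PyTorch sketch of one encoder stage that fuses a convolutional branch with a state-space branch through a feature interaction module. It is not the authors' implementation: the diagonal linear scan standing in for Mamba's selective scan, the cross-gating fusion, the raster-scan token ordering, and all module names and channel sizes are assumptions made for illustration only.

```python
# Minimal sketch of a hypothetical CNN + SSM hybrid block (not the ConMamba code).
# A simplified diagonal state-space scan stands in for Mamba's selective-scan kernel.
import torch
import torch.nn as nn


class SimpleSSM(nn.Module):
    """Sequence branch: diagonal linear recurrence h_t = a*h_{t-1} + b*x_t, y_t = c*h_t."""

    def __init__(self, dim: int):
        super().__init__()
        self.log_a = nn.Parameter(torch.zeros(dim))  # per-channel decay, a = sigmoid(log_a)
        self.b = nn.Parameter(torch.ones(dim))
        self.c = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, L, D)
        a = torch.sigmoid(self.log_a)                     # keep the recurrence stable
        h = torch.zeros(x.shape[0], x.shape[2], device=x.device, dtype=x.dtype)
        ys = []
        for t in range(x.shape[1]):                       # naive sequential scan (no fused kernel)
            h = a * h + self.b * x[:, t]
            ys.append(self.c * h)
        return torch.stack(ys, dim=1)


class FeatureInteractionModule(nn.Module):
    """Hypothetical FIM: gate each branch by the other, then fuse with a 1x1 conv."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate_cnn = nn.Sequential(nn.Conv2d(dim, dim, 1), nn.Sigmoid())
        self.gate_ssm = nn.Sequential(nn.Conv2d(dim, dim, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * dim, dim, 1)

    def forward(self, f_cnn: torch.Tensor, f_ssm: torch.Tensor) -> torch.Tensor:
        f_cnn = f_cnn * self.gate_ssm(f_ssm)              # global context modulates local detail
        f_ssm = f_ssm * self.gate_cnn(f_cnn)              # local detail modulates global context
        return self.fuse(torch.cat([f_cnn, f_ssm], dim=1))


class HybridBlock(nn.Module):
    """One encoder stage: parallel CNN and SSM branches fused by the FIM."""

    def __init__(self, dim: int):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1, groups=dim),  # depthwise conv for local detail
            nn.Conv2d(dim, dim, 1),
            nn.GELU(),
        )
        self.norm = nn.LayerNorm(dim)
        self.ssm = SimpleSSM(dim)
        self.fim = FeatureInteractionModule(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (B, D, H, W)
        b, d, h, w = x.shape
        f_cnn = self.cnn(x)
        tokens = x.flatten(2).transpose(1, 2)               # (B, H*W, D) raster-scan tokens
        f_ssm = self.ssm(self.norm(tokens)).transpose(1, 2).reshape(b, d, h, w)
        return x + self.fim(f_cnn, f_ssm)                   # residual connection


if __name__ == "__main__":
    # In a change-detection setup, each bi-temporal image would pass through a shared encoder.
    block = HybridBlock(dim=32)
    t1 = torch.randn(2, 32, 64, 64)
    print(block(t1).shape)  # torch.Size([2, 32, 64, 64])
```

The sequential Python loop in SimpleSSM is only for readability; the linear-time scan is what a hardware-aware kernel such as Mamba's parallelizes, which is where the memory and compute savings over quadratic attention come from.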