Super-resolution algorithms based on deep learning can effectively recover optical remote sensing image (ORSI) details for downstream analysis tasks. Deep unfolding methods have been studied in recent years to bridge the gap between optimization-based and learning-based methods. However, these unfolding methods usually ignore the intermediate network features produced between iteration stages, which limits super-resolution performance. To address this problem, we propose a multi-source information fusion network (MSFNet) for ORSI super-resolution. We consider three strategies to enhance super-resolution performance: the feature extraction strategy, the information fusion strategy, and the structure of the unfolding network. First, image information at various scales helps mine the latent features needed for super-resolution, so we introduce multi-scale implicit constraints into the objective function. Second, we unfold the optimization process into a neural network via the alternating direction method of multipliers (ADMM); this unfolding strategy effectively exploits prior information for image reconstruction. Third, we propose a row-column decoupled Transformer module for feature fusion: the row Transformer block fuses features across scales, and the column Transformer block fuses features across channels. The fused features are passed to the next iteration stage for feature enhancement. We conduct experiments on three remote sensing image datasets to demonstrate the algorithm's effectiveness. Experimental results show that the proposed algorithm achieves better image reconstruction performance.
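The row-column decoupling idea above can be illustrated with a minimal NumPy sketch. It assumes feature tokens arranged in an S×C grid (S scale positions, C channel groups, each a D-dimensional token): the "row" block applies self-attention along the scale axis and the "column" block along the channel axis. The shapes, the identity Q/K/V projections, and the residual connections are illustrative assumptions; the actual MSFNet module uses learned projections, multi-head attention, and normalization layers not shown here.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axis_attention(x, axis):
    """Scaled dot-product self-attention restricted to one axis of an (S, C, D) grid.

    Q, K, V are identity projections here purely for illustration.
    """
    # Move the attended axis to position -2 so each attention 'sequence' runs along it.
    x = np.moveaxis(x, axis, -2)                      # (..., L, D)
    scores = x @ np.swapaxes(x, -1, -2)               # (..., L, L)
    scores /= np.sqrt(x.shape[-1])
    out = softmax(scores, axis=-1) @ x                # (..., L, D)
    return np.moveaxis(out, -2, axis)

def row_column_attention(x):
    """Decoupled fusion: attend across scales (rows), then across channels (columns)."""
    x = x + axis_attention(x, axis=0)  # row block: fuse features of various scales
    x = x + axis_attention(x, axis=1)  # column block: fuse features of various channels
    return x

# Hypothetical grid: 4 scale positions, 8 channel groups, 16-dim tokens.
feat = np.random.default_rng(0).standard_normal((4, 8, 16))
fused = row_column_attention(feat)
print(fused.shape)
```

Decoupling the two axes keeps each attention matrix small (S×S or C×C) instead of one (S·C)×(S·C) matrix over the full grid, which is the usual motivation for axis-wise Transformer blocks.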