Infrared and visible image fusion aims to reconstruct fused images with comprehensive visual information by merging the complementary features of source images captured by different imaging sensors. This technology has been widely used in civil and military fields, such as urban security monitoring, remote sensing measurement, and battlefield reconnaissance. However, existing methods still suffer from preset fusion strategies that cannot be adapted to different fusion demands, and from the loss of information during feature propagation, leading to poor generalization ability and limited fusion performance. Therefore, in this paper we propose an unsupervised end-to-end network with a learnable fusion strategy for infrared and visible image fusion. The presented network consists of three parts: the feature extraction module, the fusion strategy module, and the image reconstruction module. First, to preserve more information during feature propagation, dense connections and residual connections are applied to the feature extraction module and the image reconstruction module, respectively. Second, a new convolutional neural network is designed to adaptively learn the fusion strategy, which enhances the generalization ability of our algorithm. Third, owing to the lack of ground truth in fusion tasks, a loss function consisting of saliency loss and detail loss is exploited to guide the training direction and balance the retention of different types of information. Finally, experimental results verify that the proposed algorithm delivers competitive performance compared with several state-of-the-art algorithms in terms of both subjective and objective evaluations. Our code is available at https://github.com/MinjieWan/Unsupervised-end-to-end-infrared-and-visible-image-fusion-network-using-learnable-fusion-strategy. (c) 2022 Optica Publishing Group
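Because fusion tasks lack ground truth, the abstract's loss combines a saliency term and a detail term. The following NumPy sketch illustrates one plausible form of such a loss; the contrast-based saliency weighting and the gradient-matching detail term are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def saliency_weight(img):
    """Hypothetical contrast-based saliency: deviation from the mean intensity."""
    s = np.abs(img - img.mean())
    return s / (s.max() + 1e-8)

def grad_mag(img):
    """Simple first-order gradient magnitude used as a texture/detail measure."""
    gx = np.abs(np.diff(img, axis=1, append=img[:, -1:]))
    gy = np.abs(np.diff(img, axis=0, append=img[-1:, :]))
    return gx + gy

def fusion_loss(fused, ir, vis, alpha=0.5):
    """Saliency loss pulls the fused image toward the more salient source pixel;
    detail loss matches fused gradients to the stronger source gradient.
    alpha balances the two terms (an assumed hyperparameter)."""
    w_ir, w_vis = saliency_weight(ir), saliency_weight(vis)
    w_sum = w_ir + w_vis + 1e-8
    w_ir, w_vis = w_ir / w_sum, w_vis / w_sum
    sal = np.mean(w_ir * (fused - ir) ** 2 + w_vis * (fused - vis) ** 2)
    det = np.mean(np.abs(grad_mag(fused) - np.maximum(grad_mag(ir), grad_mag(vis))))
    return sal + alpha * det
```

In an unsupervised training loop, this scalar would be minimized over the network's output, so the saliency weights decide per pixel which source dominates while the detail term preserves the sharper texture of the two inputs.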