Image fusion plays a pivotal role in numerous high-level computer vision tasks. Existing deep learning-based image fusion methods usually extract features in an implicit manner, which may prevent some characteristics of the source images, e.g., contrast and structural information, from being fully extracted and integrated into the fused image. In this work, we propose an infrared and visible image fusion method via parallel scene and texture learning. Our key objective is to deploy two branches of deep neural networks, namely the content branch and the detail branch, to synchronously extract different characteristics from the source images and then reconstruct the fused image. The content branch focuses primarily on coarse-grained information and is deployed to estimate the global content of the source images. The detail branch pays attention primarily to fine-grained information; in this branch we design an omni-directional spatially variant recurrent neural network to model the internal structure of the source images more accurately and extract texture-related features in an explicit manner. Extensive experiments show that our approach achieves significant improvements over the state of the art in qualitative and quantitative evaluations while consuming comparatively less running time. We also demonstrate the superiority of our fused results in the object detection task. Our code is available at: https://github.com/Melon-Xu/PSTLFusion.
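
To make the parallel two-branch layout concrete, the following is a minimal PyTorch sketch of the design described above, under stated assumptions: the module names (ContentBranch, DetailBranch, PSTLFusionSketch), layer choices, and feature widths are illustrative and not the authors' implementation, and the omni-directional spatially variant recurrent network of the detail branch is stood in for by a simple residual convolutional block.

```python
# Hypothetical sketch of the parallel scene/texture (two-branch) fusion layout.
# All module names and layer choices are assumptions for illustration only;
# the detail branch below is a residual conv stand-in for the paper's
# omni-directional spatially variant RNN.
import torch
import torch.nn as nn


class ContentBranch(nn.Module):
    """Coarse-grained branch: estimates global scene content (e.g., contrast)."""
    def __init__(self, in_ch=2, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            # dilated conv widens the receptive field for global context
            nn.Conv2d(feat, feat, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


class DetailBranch(nn.Module):
    """Fine-grained branch: stand-in for the spatially variant recurrent module
    that models local structure and texture."""
    def __init__(self, in_ch=2, feat=32):
        super().__init__()
        self.head = nn.Conv2d(in_ch, feat, 3, padding=1)
        self.body = nn.Sequential(
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1),
        )

    def forward(self, x):
        f = self.head(x)
        return f + self.body(f)  # residual connection preserves fine detail


class PSTLFusionSketch(nn.Module):
    """Runs both branches in parallel, then reconstructs the fused image."""
    def __init__(self, feat=32):
        super().__init__()
        self.content = ContentBranch(feat=feat)
        self.detail = DetailBranch(feat=feat)
        self.reconstruct = nn.Sequential(
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, ir, vis):
        x = torch.cat([ir, vis], dim=1)  # stack infrared and visible inputs
        fused_feat = torch.cat([self.content(x), self.detail(x)], dim=1)
        return self.reconstruct(fused_feat)


if __name__ == "__main__":
    ir = torch.rand(1, 1, 256, 256)   # dummy single-channel infrared image
    vis = torch.rand(1, 1, 256, 256)  # dummy single-channel visible image
    fused = PSTLFusionSketch()(ir, vis)
    print(fused.shape)  # torch.Size([1, 1, 256, 256])
```

The point of the sketch is only the structure: coarse global content and fine texture are extracted by separate branches in parallel and merged in a reconstruction head, which mirrors the content/detail split stated in the abstract.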