Infrared and visible image fusion plays a central role in multimodal image fusion: by integrating complementary feature information, it yields more comprehensive and richer visual data and enhances image quality. However, current image fusion methods often rely on intricate networks to extract information from multimodal source images, making it difficult to fully leverage the valuable information needed for high-quality fusion results. In this work, we propose a PoolFormer-convolutional neural network (CNN) dual-branch feature extraction fusion network for infrared and visible images, termed PFCFuse. The network fully exploits key features in the source images and adaptively preserves the most critical ones. First, we design a dual-branch PoolFormer-CNN feature extractor in which PoolFormer blocks capture low-frequency global information, substituting simple spatial pooling for the Transformer's attention module. Second, the model is trained with an adaptively adjusted α-Huber loss, which stabilizes parameter updates and reduces the influence of outliers on model predictions, improving robustness while maintaining precision. Compared with state-of-the-art fusion models such as U2Fusion, RFNet, TarDAL, and CDDFuse, our method achieves excellent results in both qualitative and quantitative experiments. Compared with CDDFuse, the latest dual-branch feature extraction model, our model's parameter count is reduced by half. The code is available at https://github.com/HXY13/PFCFuse-Image-Fusion.
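
The two design choices above can be made concrete with short sketches. First, a minimal PyTorch sketch of a PoolFormer-style token mixer, in which plain average pooling stands in for the Transformer's self-attention; the module and parameter names are illustrative and not taken from the PFCFuse repository:

```python
import torch
import torch.nn as nn

class PoolTokenMixer(nn.Module):
    """Token mixer that replaces self-attention with average pooling,
    in the spirit of PoolFormer. Subtracting the input keeps only the
    pooled "mixing" residual so the block's outer skip connection is
    not counted twice."""
    def __init__(self, pool_size: int = 3):
        super().__init__()
        self.pool = nn.AvgPool2d(pool_size, stride=1,
                                 padding=pool_size // 2,
                                 count_include_pad=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(x) - x
```

Second, a minimal sketch of a Huber-style loss with a tunable threshold `alpha`, shown only to illustrate how such a loss bounds the influence of outliers; the adaptive adjustment of α described in the paper is assumed and not reproduced here:

```python
import torch

def alpha_huber_loss(pred: torch.Tensor, target: torch.Tensor,
                     alpha: float = 1.0) -> torch.Tensor:
    """Quadratic for residuals below `alpha`, linear beyond it,
    which limits the gradient contribution of outliers.
    (Hypothetical stand-in, not the PFCFuse implementation.)"""
    residual = (pred - target).abs()
    quadratic = torch.clamp(residual, max=alpha)
    linear = residual - quadratic
    return (0.5 * quadratic ** 2 + alpha * linear).mean()
```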