The purpose of infrared and visible image fusion is to integrate significant targets and abundant texture details across multiple visual scenarios. However, existing fusion methods have not effectively handled diverse visual scenarios such as small objects, multiple objects, noise, low light, light pollution, and overexposure. To better adapt to these scenarios, we propose a general infrared and visible image fusion method based on saliency weight, termed MVSFusion. First, we use a support vector machine (SVM) to classify visible images into two categories according to lighting conditions: low-light visible images and brightly lit visible images. Designing fusion rules for each lighting condition ensures adaptability across visual scenarios, and our saliency weights preserve the saliency of both small and multiple objects in different scenes. In addition, we propose a new texture detail fusion method and an adaptive brightness enhancement technique to better handle scenarios such as noise, light pollution, nighttime, and overexposure. Extensive experiments indicate that MVSFusion not only surpasses state-of-the-art algorithms in visual quality and quantitative evaluation, but also provides advantageous support for high-level vision tasks. Our code is publicly available at: https://github.com/VCMHE/MVSFusion.
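The abstract does not specify how the SVM lighting classifier is built; a minimal sketch, assuming hand-crafted brightness statistics as features and a linear SVM trained by hinge-loss sub-gradient descent (the feature choices and hyperparameters below are illustrative assumptions, not the paper's actual design):

```python
import numpy as np

def brightness_features(img):
    """Hypothetical features: normalized mean intensity and dark-pixel fraction."""
    img = np.asarray(img, dtype=np.float64)
    return np.array([img.mean() / 255.0, (img < 64).mean()])

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Minimal linear SVM: sub-gradient descent on L2-regularized hinge loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:       # margin violated: hinge gradient
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                            # only the regularizer contributes
                w -= lr * lam * w
    return w, b

rng = np.random.default_rng(0)
# Synthetic stand-ins for visible images: dark (-1, low-light) vs. bright (+1)
dark   = [rng.integers(0, 60, (32, 32)) for _ in range(20)]
bright = [rng.integers(150, 256, (32, 32)) for _ in range(20)]
X = np.array([brightness_features(im) for im in dark + bright])
y = np.array([-1] * 20 + [1] * 20)
w, b = train_linear_svm(X, y)

def classify(img):
    """Route an image to the low-light or brightly-lit fusion branch."""
    return "brightly-lit" if brightness_features(img) @ w + b > 0 else "low-light"
```

In MVSFusion this binary decision would gate which set of fusion rules is applied; a production classifier would likely use richer histogram features and a tuned SVM rather than this toy training loop.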