Multimodal medical image fusion (MMIF) technology aims to generate fused images that comprehensively reflect the information of tissues, organs, and metabolism, thereby assisting clinicians and enhancing the reliability of clinical diagnosis. However, most existing approaches suffer from information loss during feature extraction and fusion, and rarely explore how to process multichannel data directly. To address these problems, this paper proposes a novel invertible fusion network (MMIF-INet) that accepts three-channel color images as input and generates multichannel data distributions in a process-reversible manner. Specifically, the discrete wavelet transform (DWT) is used for downsampling, decomposing the source image pair into high- and low-frequency components. Concurrently, an invertible block (IB) performs preliminary feature fusion, enabling the integration of cross-domain complementary information and multisource aggregation in an information-lossless manner. The combination of IB and DWT ensures the reversibility of the initial fusion and the extraction of semantic features at multiple scales. To accommodate fusion tasks, a multiscale fusion module is employed to integrate the components of different modalities and the multiscale features. Finally, a hybrid loss is designed to constrain model training from the perspectives of structure, gradient, intensity, and chromaticity, thus enabling effective retention of the luminance, color, and detail information of the source images. Experiments on multiple medical datasets demonstrate that MMIF-INet outperforms existing methods in visual quality, quantitative metrics, and fusion efficiency, particularly in color fidelity. When extended to infrared-visible image fusion, MMIF-INet achieves the best scores on seven evaluation metrics, further substantiating its superior fusion performance. The code of MMIF-INet is available at https://github.com/HeDan-11/MMIF-INet.
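
The following is a minimal sketch, in PyTorch, of the two ingredients named above: a single-level Haar DWT that splits an image into low- and high-frequency sub-bands, and an invertible (affine-coupling) block whose forward pass can be undone exactly, so no information is lost in the preliminary fusion. It is illustrative only, not the authors' implementation; the class names, the coupling form, and the small convolutional sub-networks are assumptions.

```python
# Illustrative sketch only (not the authors' MMIF-INet code): Haar DWT decomposition
# plus an invertible affine-coupling block operating on two modality streams.
import torch
import torch.nn as nn


class HaarDWT(nn.Module):
    """Single-level Haar DWT: (B, C, H, W) -> (B, 4C, H/2, W/2) as [LL, LH, HL, HH]."""

    def forward(self, x):
        a = x[:, :, 0::2, 0::2]  # top-left pixels of each 2x2 patch
        b = x[:, :, 0::2, 1::2]  # top-right
        c = x[:, :, 1::2, 0::2]  # bottom-left
        d = x[:, :, 1::2, 1::2]  # bottom-right
        ll = (a + b + c + d) / 2   # low-frequency approximation
        lh = (-a - b + c + d) / 2  # horizontal detail
        hl = (-a + b - c + d) / 2  # vertical detail
        hh = (a - b - c + d) / 2   # diagonal detail
        return torch.cat([ll, lh, hl, hh], dim=1)


class InvertibleBlock(nn.Module):
    """Affine coupling between two feature streams; forward() is exactly invertible."""

    def __init__(self, channels, hidden=32):
        super().__init__()

        def subnet():
            return nn.Sequential(
                nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(hidden, channels, 3, padding=1))

        self.phi, self.rho, self.eta = subnet(), subnet(), subnet()

    def forward(self, x1, x2):
        y1 = x1 + self.phi(x2)                                         # additive update of stream 1
        y2 = x2 * torch.exp(torch.tanh(self.rho(y1))) + self.eta(y1)   # affine update of stream 2
        return y1, y2

    def inverse(self, y1, y2):
        x2 = (y2 - self.eta(y1)) * torch.exp(-torch.tanh(self.rho(y1)))
        x1 = y1 - self.phi(x2)
        return x1, x2


if __name__ == "__main__":
    mri = torch.rand(1, 3, 64, 64)   # e.g. an MRI slice rendered as a 3-channel image
    pet = torch.rand(1, 3, 64, 64)   # e.g. a PET/SPECT color image
    dwt = HaarDWT()
    block = InvertibleBlock(channels=12)       # 3 channels x 4 sub-bands per modality
    f1, f2 = block(dwt(mri), dwt(pet))         # lossless preliminary fusion of the streams
    r1, r2 = block.inverse(f1, f2)             # exact reconstruction of both inputs
    print(torch.allclose(r1, dwt(mri), atol=1e-5), torch.allclose(r2, dwt(pet), atol=1e-5))
```

Running the script prints `True True`, confirming that both modality streams can be recovered exactly from the coupled outputs, which is the property that makes this style of block attractive for information-lossless fusion.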