This paper proposes an Efficient Fused Convolutional Neural Network (EFCNN) for feature-level fusion of medical images. The proposed architecture leverages the strengths of both deep Convolutional Neural Networks (CNNs) and fusion techniques to improve the efficiency of medical image fusion. Fusing CT and MRI images can help medical professionals make more informed diagnoses, plan more effective treatments, and ultimately improve patient outcomes, and many researchers are therefore working to develop efficient medical image fusion techniques. To contribute to this field, the authors fuse images at the feature level, using a Bilinear Activation Function (BAM) for feature extraction and a softmax-based Soft Attention (SA) fusion rule. The EFCNN model processes the input images with a two-stream CNN architecture and fuses them at the feature level using an attention mechanism. The proposed approach is evaluated on the Harvard Whole Brain Atlas dataset. With SA, the EFCNN model demonstrated superior performance on several indices, including ISSIM, MI, and PSNR, with respective values of 0.41, 4.42, and 57.21. Without SA fusion, the model exhibited favourable performance in terms of Spatial Frequency, Average Gradient, and Edge Intensity, with corresponding values of 57.3, 16.83, and 157.72 on the medical dataset; subjective evaluation, however, indicated that SA fusion improved the images. These results indicate that the EFCNN model surpasses state-of-the-art methods. An exhaustive ablation study further confirmed the efficacy of the proposed model. The significance of this work lies in its potential implications for medical diagnosis and treatment planning, where precise and efficient image analysis is crucial.
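The softmax-based soft-attention fusion rule mentioned above can be illustrated with a minimal sketch: per-element weights for the two feature streams are obtained by taking a softmax across the streams, so the stronger activation dominates the fused map. This is an illustrative assumption about the rule's form (the function name and the per-element weighting scheme are ours, not the paper's exact implementation):

```python
import numpy as np

def soft_attention_fuse(feat_a, feat_b):
    """Fuse two same-shaped feature maps (e.g. from the CT and MRI
    streams of a two-stream CNN) with a softmax-based soft-attention
    rule. Illustrative sketch only, not the paper's exact SA module."""
    feats = np.stack([feat_a, feat_b], axis=0)           # (2, H, W)
    shifted = feats - feats.max(axis=0, keepdims=True)   # numerically stable softmax
    w = np.exp(shifted)
    w /= w.sum(axis=0, keepdims=True)                    # weights sum to 1 per element
    return (w * feats).sum(axis=0)                       # attention-weighted fusion
```

Because the weights sum to one at every spatial location, the fused value always lies between the two input activations, which keeps the fused feature map on the same scale as its inputs.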