Our research addresses the critical challenge of building trust in Artificial Intelligence (AI) for Clinical Decision Support Systems (CDSS), focusing on breast cancer diagnosis. Clinicians find it difficult to trust AI-generated recommendations that lack explanations, especially when diagnosing life-threatening diseases such as breast cancer. To tackle this, we propose a dual-stage AI model that combines a U-Net architecture for image segmentation with a Convolutional Neural Network (CNN) for cancer prediction. The model operates on breast cancer tissue images and introduces four levels of explainability: basic classification, probability distribution, tumor localization, and advanced tumor localization with varying confidence levels. These levels provide progressively more detail about each diagnostic suggestion, allowing us to study how different types of explanation affect clinicians' trust in the AI system. Our methodology encompasses the development of these explanation mechanisms and their evaluation in experimental settings to assess their impact on clinician trust in AI. This work seeks to bridge the gap between AI capabilities and clinician acceptance by improving the transparency and usefulness of AI in healthcare. Ultimately, we aim to contribute to better patient outcomes and more efficient healthcare delivery by facilitating the integration of explainable AI into clinical practice.
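
To make the dual-stage design concrete, the sketch below wires a small U-Net-style segmenter to a CNN classifier in PyTorch and shows where the first three explanation levels (classification, probability distribution, tumor localization) come from. It is a minimal illustration under assumed settings: the class names (`MiniUNet`, `CancerClassifier`), layer sizes, and the 3-channel input are placeholders for exposition, not the exact architecture used in our experiments.

```python
# Minimal sketch of a dual-stage segmentation + classification pipeline.
# All layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    """Small U-Net-style segmenter producing a per-pixel tumor mask."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # single-channel mask logits
        )

    def forward(self, x):
        return self.dec(self.enc(x))

class CancerClassifier(nn.Module):
    """CNN that predicts malignancy from the image concatenated with the mask."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),  # 3 image + 1 mask channels
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 2)  # benign vs. malignant logits

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Stage 1: segment the tumor region; Stage 2: classify using image + mask.
image = torch.randn(1, 3, 128, 128)           # dummy tissue image
mask_prob = torch.sigmoid(MiniUNet()(image))  # tumor localization (explanation level 3)
class_logits = CancerClassifier()(torch.cat([image, mask_prob], dim=1))
class_prob = torch.softmax(class_logits, 1)   # probability distribution (level 2)
prediction = class_prob.argmax(1)             # basic classification (level 1)
```

In this arrangement, the segmentation mask feeds the classifier and simultaneously serves as the visual explanation shown to clinicians; the fourth level would overlay confidence information derived from the mask probabilities.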