The integration of deep learning (DL) into clinical applications has generated substantial interest among researchers aiming to enhance clinical decision support systems for various aspects of disease management, including detection, prediction, diagnosis, and treatment. However, the inherent opacity of DL methods has raised concerns within the healthcare community, particularly in high-risk or complex medical domains. A significant gap remains in research and understanding when it comes to elucidating and rendering transparent the inner workings of DL models applied to medical image analysis. While explainable artificial intelligence (XAI) has gained ground in diverse fields, including healthcare, numerous facets remain unexplored within medical imaging. To better understand the complexities of DL techniques, rapid advancement is urgently needed in eXplainable DL (XDL), the branch of XAI concerned with DL models. Such progress would empower healthcare professionals to comprehend, assess, and contribute to decision-making processes before acting on model outputs. This viewpoint article presents an extensive review of XAI and XDL, shedding light on methods for opening the "black box" of DL. It also examines how techniques originally designed for problems in other domains can be adapted to healthcare challenges, and discusses how physicians can effectively interpret and understand data-driven technologies. This comprehensive literature review serves as a valuable resource for scientists and medical practitioners, offering insights into both technical and clinical aspects. It helps identify methods for making XAI and XDL models more comprehensible, enabling informed model selection based on specific requirements and goals.