Deep learning is a modeling method based on neural networks, which are constructed from multiple perception layers with different functions and optimized by learning the inherent regularities in large amounts of data to achieve end-to-end modeling. The growth of data and the improvement of computing power have promoted the application of deep learning in spectral and medical image analysis. The lack of interpretability of the constructed models, however, constitutes an obstacle to their further development and application. To overcome this obstacle, various interpretability methods have been proposed. According to their principles of explanation, interpretability methods can be divided into three categories: visualization methods, model distillation, and interpretable models. Visualization methods and model distillation are external algorithms that interpret a model without changing its structure, whereas interpretable models aim to make the model structure itself interpretable. In this review, the principles of deep learning and of the three categories of interpretability methods are introduced from an algorithmic perspective. Moreover, applications of these interpretability methods in spectral and medical image analysis over the past three years are summarized. In most studies, external algorithms were developed to make the models explainable, and these methods were found to provide reasonable explanations of the abilities of the deep learning models. However, few studies have attempted to construct interpretable algorithms within the networks themselves. Furthermore, most studies train their models by collecting large amounts of labeled data, which incurs substantial costs in both labor and expense. Therefore, training strategies for small data sets, approaches to enhance the interpretability of models, and the construction of interpretable deep learning architectures are still required in future work.
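
To make the visualization category concrete, the snippet below is a minimal sketch (not drawn from any of the reviewed works) of a gradient-based saliency map, one of the simplest external visualization methods: it attributes a class score to input pixels without modifying the trained network. The PyTorch model, the random input tensor, and the target class index are all placeholder assumptions used only for illustration.

```python
import torch
import torchvision.models as models

def gradient_saliency(model, image, target_class):
    """Gradient-based saliency: absolute gradient of the target class score
    with respect to each input pixel, without changing the model itself."""
    model.eval()
    image = image.clone().requires_grad_(True)   # track gradients w.r.t. the input
    score = model(image)[0, target_class]        # scalar score for the chosen class
    score.backward()                             # back-propagate the score to the input
    # Collapse the channel dimension to one importance value per pixel.
    return image.grad.abs().max(dim=1)[0].squeeze(0)

# Hypothetical usage with an untrained ResNet-18 and a random "image".
model = models.resnet18(weights=None)
dummy_image = torch.rand(1, 3, 224, 224)
saliency = gradient_saliency(model, dummy_image, target_class=0)
print(saliency.shape)  # torch.Size([224, 224])
```

Because such a map is computed entirely after training, it exemplifies why visualization methods (like model distillation) are described above as external algorithms, in contrast to interpretable models whose structure is designed to be explainable from the outset.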