Total: 44 references
- [1] Kim B, Wattenberg M, Gilmer J, et al., Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV)[C], Proc of the 35th Int Conf on Machine Learning (ICML), pp. 2668-2677, (2018)
- [2] Ghorbani A, Wexler J, Zou J Y, et al., Towards automatic concept-based explanations[C], Proc of the Conf on Advances in Neural Information Processing Systems (NeurIPS), pp. 9273-9282, (2019)
- [3] Zhang Ruihan, Madumal P, Miller T, et al., Invertible concept-based explanations for CNN models with non-negative concept activation vectors[C], Proc of the AAAI Conf on Artificial Intelligence, pp. 11682-11690, (2021)
- [4] Chen Zhi, Bei Yijie, Rudin C, Concept whitening for interpretable image recognition[J], Nature Machine Intelligence, 2, 12, pp. 772-782, (2020)
- [5] Wang Jiaqi, Liu Huafeng, Wang Xinyue, et al., Interpretable image recognition by constructing transparent embedding space[C], Proc of the IEEE Int Conf on Computer Vision (ICCV), pp. 875-884, (2021)
- [6] Donnelly J, Barnett A J, Chen Chaofan, Deformable ProtoPNet: An interpretable image classifier using deformable prototypes[C], Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition (CVPR), pp. 10255-10265, (2022)
- [7] Bodria F, Giannotti F, Guidotti R, et al., Benchmarking and survey of explanation methods for black box models, arXiv preprint, (2021)
- [8] Ji Shouling, Li Jinfeng, Du Tianyu, et al., A survey of interpretability methods, applications and security of machine learning models[J], Journal of Computer Research and Development, 56, 10, pp. 2071-2096, (2019)
- [9] Yang Pengbo, Sang Jitao, Zhang Biao, et al., Survey of the interpretability of deep models for image classification[J], Journal of Software, 34, 1, pp. 230-254, (2023)
- [10] Chatonsky G, Deep dream (The Network's Dream)[J], SubStance, 45, 2, pp. 61-77, (2016)