Explainable Recommendation via Interpretable Feature Mapping and Evaluation of Explainability

Cited by: 0
Authors
Pan, Deng [1 ]
Li, Xiangrui [1 ]
Li, Xin [1 ]
Zhu, Dongxiao [1 ]
Affiliations
[1] Wayne State Univ, Dept Comp Sci, Detroit, MI 48202 USA
Funding
U.S. National Science Foundation
Keywords
DOI: not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Latent factor collaborative filtering (CF) has been a widely used technique for recommender systems, learning semantic representations of users and items. Recently, explainable recommendation has attracted much attention from the research community. However, a trade-off exists between the explainability and performance of a recommendation model, and metadata is often needed to alleviate this dilemma. We present a novel feature mapping approach that maps uninterpretable general features onto interpretable aspect features, achieving both satisfactory accuracy and explainability by simultaneously minimizing the rating prediction loss and the interpretation loss. To evaluate explainability, we propose two new evaluation metrics specifically designed for aspect-level explanations using surrogate ground truth. Experimental results demonstrate strong performance in both recommendation and explanation, eliminating the need for metadata. Code is available at https://github.com/pd90506/AMCF.
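The core idea in the abstract — learning general latent factors for rating prediction while jointly fitting a mapping from those factors to interpretable aspect features — can be sketched as a two-term objective. This is a minimal toy sketch, not the authors' AMCF implementation: all names (`U`, `V`, `W`, `joint_loss`, the weight `lam`) and the use of a simple linear map with squared losses are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d_general, d_aspects = 4, 5, 8, 3

# Latent factor CF: general (uninterpretable) user/item embeddings.
U = rng.normal(size=(n_users, d_general))
V = rng.normal(size=(n_items, d_general))

# Hypothetical feature mapping: projects general features onto aspect space.
W = rng.normal(size=(d_general, d_aspects))

R = rng.normal(size=(n_users, n_items))    # toy observed ratings
A = rng.normal(size=(n_items, d_aspects))  # toy surrogate aspect ground truth

def joint_loss(U, V, W, R, A, lam=0.5):
    """Rating prediction loss plus lam-weighted interpretation loss."""
    rating_loss = np.mean((U @ V.T - R) ** 2)  # how well factors predict ratings
    interp_loss = np.mean((V @ W - A) ** 2)    # how well mapped features match aspects
    return rating_loss + lam * interp_loss

loss = joint_loss(U, V, W, R, A)
```

Minimizing this combined objective (e.g., by gradient descent over `U`, `V`, `W`) is what "simultaneous minimization of rating prediction loss and interpretation loss" amounts to in spirit; the trade-off between accuracy and explainability is governed by the weight `lam`.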
Pages: 2690-2696 (7 pages)