Capturing the form of feature interactions in black-box models

Cited: 0
Authors
Zhang, Hanying [1 ,2 ]
Zhang, Xiaohang [1 ,2 ]
Zhang, Tianbo [3 ]
Zhu, Ji [4 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Econ & Management, Beijing, Peoples R China
[2] Beijing Univ Posts & Telecommun, Key Lab Trustworthy Distributed Comp & Serv, Beijing, Peoples R China
[3] Univ Washington, Dept Math, Seattle, WA USA
[4] Univ Michigan, Dept Stat, Ann Arbor, MI USA
Funding
National Natural Science Foundation of China;
Keywords
Model interpretation; Feature interaction; Product separability; Black-box; PERFORMANCE; FIND;
DOI
10.1016/j.ipm.2023.103373
Chinese Library Classification
TP [Automation and computer technology];
Discipline code
0812;
Abstract
Detecting feature interactions is an important post-hoc method for explaining black-box models. The literature on feature interactions focuses mainly on detecting their existence and measuring their strength; little attention has been paid to the form in which features interact. In this paper, we propose a novel method to capture the form of feature interactions. First, the feature interaction sets in black-box models are detected by a high-dimensional model representation (HDMR)-based method. Second, the pairwise separability of the detected feature interactions is determined by a novel model that is theoretically verified. Third, the set separability of the feature interactions is inferred from pairwise separability. Fourth, the interaction form of each feature in product-separable sets is explored. The proposed method not only provides detailed information about the internal structure of black-box models but also improves the performance of linear models by incorporating the appropriate feature interactions. The experimental results show that the accuracy of recognizing product separability in synthetic models is 100%. Experiments on three regression and three classification tasks demonstrate that the proposed method captures the product-separable form of feature interactions effectively and greatly improves prediction accuracy.
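The product-separability notion underlying the pipeline can be illustrated with a simple numerical criterion: if a positive function f(x, y) factors as g(x)·h(y), then log f(x, y) is additive in x and y, so its mixed second difference on a grid vanishes. The sketch below is not the paper's actual algorithm; the function name, grid, and tolerance are illustrative assumptions.

```python
import numpy as np

def is_product_separable(f, x_grid, y_grid, tol=1e-8):
    """Heuristically test whether f(x, y) factors as g(x) * h(y).

    If f(x, y) = g(x) * h(y) with f > 0, then log f is additive,
    so its mixed second difference vanishes everywhere on a grid.
    """
    X, Y = np.meshgrid(x_grid, y_grid, indexing="ij")
    F = np.log(f(X, Y))  # assumes f > 0 on the grid
    # mixed second difference: F[i+1,j+1] - F[i+1,j] - F[i,j+1] + F[i,j]
    mixed = F[1:, 1:] - F[1:, :-1] - F[:-1, 1:] + F[:-1, :-1]
    return bool(np.max(np.abs(mixed)) < tol)

x = np.linspace(0.5, 2.0, 20)
y = np.linspace(0.5, 2.0, 20)

print(is_product_separable(lambda a, b: a**2 * np.exp(b), x, y))  # separable: True
print(is_product_separable(lambda a, b: a + b, x, y))             # not separable: False
```

In this toy setting, x²·eʸ passes the test because log(x²·eʸ) = 2·log x + y is additive, while x + y fails; the paper's method instead works on interactions extracted from a black-box model rather than on closed-form functions.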
Pages: 16