Feature Creation to Enhance Explainability and Predictability of ML Models Using XAI

Citations: 0
Authors:
Ahmed, Waseem [1 ]
Affiliations:
[1] King Abdulaziz Univ, Fac Comp & Informat Technol, Jeddah, Saudi Arabia
Keywords:
XAI; ML; AI; Recruitment; ARTIFICIAL-INTELLIGENCE;
DOI
10.14569/IJACSA.2024.01510101
CLC classification: TP301 [Theory and Methods]
Subject classification: 081202
Abstract
Bringing transparency to ML-assisted decision making is important across many fields. ML tools need to be designed so that they are understandable and explainable to end users and thereby earn their trust. The field of XAI, although a mature area of research, is increasingly seen as a way to address these missing aspects of ML systems. In this paper, we focus on transparency issues that arise when ML tools are used in decision making in general, and specifically when recruiting candidates for high-profile positions. In software development, it is important to correctly distinguish highly skilled developers from those who are adept only at routine and mundane programming tasks. When AI is used in this process, HR recruiting agents need to justify to their managers why certain candidates were selected and others rejected. Online Judges (OJs) are increasingly used for developer recruitment at various levels, attracting thousands of candidates. Automating this decision-making process with ML tools can add speed while mitigating bias in selection. However, the large raw datasets available on OJs need to be carefully curated and enhanced to make the decision process accurate and explainable. To address this, we built an ML regression model and then iteratively enhanced both the model and its underlying dataset using XAI tools. Our evaluation shows how XAI can be used actively and iteratively at the pre-deployment stage to improve the quality of the dataset and the prediction accuracy of the regression model. These iterative changes improved the R² score of the GradientRegressor model used in our experiments from 0.3507 to 0.9834 (an improvement of 63.27%), and also increased the explainability of the LIME and SHAP outputs.
A unique contribution of this work is the application of XAI to a niche area of recruitment: evaluating the performance of users on OJs for software-developer hiring.
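The abstract's core loop — train a regressor, use feature attributions to find weak or noisy features, prune them, and retrain — can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's pipeline: the OJ dataset, the exact model configuration, and the SHAP/LIME workflow are not given here, so scikit-learn's `permutation_importance` stands in for the XAI attribution step, and `GradientBoostingRegressor` is an assumed reading of the paper's "GradientRegressor".

```python
# Hedged sketch of XAI-guided feature curation: rank features by an
# attribution score, drop the uninformative ones, and retrain.
# Synthetic data; permutation importance is a stand-in for SHAP/LIME.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Three informative features plus five pure-noise columns.
X_inf = rng.normal(size=(n, 3))
y = 2.0 * X_inf[:, 0] - 1.5 * X_inf[:, 1] + 0.5 * X_inf[:, 2]
X = np.hstack([X_inf, rng.normal(size=(n, 5))])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
base_r2 = r2_score(y_te, model.predict(X_te))

# Attribution step: score each feature on held-out data, then prune
# features the model barely uses.
imp = permutation_importance(model, X_te, y_te, random_state=0).importances_mean
keep = imp > 0.01

model2 = GradientBoostingRegressor(random_state=0).fit(X_tr[:, keep], y_tr)
pruned_r2 = r2_score(y_te, model2.predict(X_te[:, keep]))

print(f"features kept: {keep.sum()}, r2 before: {base_r2:.4f}, after: {pruned_r2:.4f}")
```

In a real setting the pruning decision would be reviewed by a human using the SHAP or LIME plots themselves, and the loop repeated until both accuracy and the attribution picture stabilize.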
Pages: 996-1007 (12 pages)