A Survey on Explainable Artificial Intelligence Techniques and Challenges

Cited by: 24
Authors
Hanif, Ambreen [1 ]
Zhang, Xuyun [1 ]
Wood, Steven [2 ]
Affiliations
[1] Macquarie Univ, Dept Comp, Sydney, NSW, Australia
[2] Prospa, Sydney, NSW, Australia
Keywords
Interpretable Machine Learning; Explainable Artificial Intelligence; Survey; Machine Learning; Knowledge-intensive; Trustworthy
DOI
10.1109/EDOCW52865.2021.00036
CLC Classification Number
TP39 [Computer Applications]
Discipline Code
081203; 0835
Abstract
In the last decade, the world has seen tremendous growth in technology, driven by improved access to data, cloud-computing resources, and the evolution of machine learning (ML) algorithms. Intelligent systems have achieved significant performance with this growth, and the state-of-the-art results of these algorithms across various domains have increased the popularity of artificial intelligence (AI). However, alongside these achievements, the opacity and inscrutability of most state-of-the-art techniques, and their inability to explain and interpret their decisions, are considered an ethical issue. These flaws impede the acceptance of complex ML models in fields such as medicine, banking and finance, security, and education, and have raised many concerns about the security and safety of ML system users. Under current regulations and policies, these systems must be transparent in order to satisfy the right to explanation. Due to this lack of trust in existing ML-based systems, explainable artificial intelligence (XAI) methods are gaining popularity. Although neither the domain nor the methods are novel, they are attracting renewed attention for their ability to open the black box. Explainable AI methods vary in strength and can provide insights into a system ranging from a single-feature explanation to the interpretability of a sophisticated ML architecture. In this paper, we present a survey of known techniques in the field of XAI, and we suggest future research routes for developing responsible AI systems. We emphasize the necessity of human-knowledge-oriented systems for adopting AI in real-world applications with trust and high fidelity.
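The abstract notes that XAI insights can range from single-feature explanations to the interpretability of a whole architecture. As an illustrative sketch (not taken from the paper itself), the following pure-Python snippet implements permutation feature importance, one of the simplest single-feature explanation techniques: shuffle one feature column and measure how much the model's error grows. The toy dataset and "black-box" model are invented for the example.

```python
import random

# Tiny synthetic dataset: y depends strongly on x0, weakly on x1, not at all on x2.
random.seed(0)
X = [[random.random(), random.random(), random.random()] for _ in range(200)]
y = [3.0 * x0 + 0.5 * x1 for x0, x1, x2 in X]

def model(row):
    # Stand-in for any opaque predictor; here it happens to match the true function.
    return 3.0 * row[0] + 0.5 * row[1]

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

baseline = mse([model(r) for r in X], y)

def permutation_importance(feature_idx):
    # Shuffle one feature column, keep the rest intact, and report the error increase.
    shuffled = [row[feature_idx] for row in X]
    random.shuffle(shuffled)
    X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, shuffled)]
    return mse([model(r) for r in X_perm], y) - baseline

importances = [permutation_importance(i) for i in range(3)]
```

A large importance for x0, a small one for x1, and zero for x2 reflects how much the model actually relies on each feature, without inspecting its internals.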
Pages: 81-89
Number of pages: 9