Explainable Artificial Intelligence Methods to Enhance Transparency and Trust in Digital Deliberation Settings

Times Cited: 0
Authors
Siachos, Ilias [1 ]
Karacapilidis, Nikos [1 ]
Affiliations
[1] Univ Patras, Ind Management & Informat Syst Lab, Rion 26504, Greece
Keywords
digital deliberation; explainable artificial intelligence; clustering; summarization; algorithm
DOI
10.3390/fi16070241
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Digital deliberation has been steadily growing in recent years, enabling citizens from different geographical locations, with diverse opinions and expertise, to participate in policy-making processes. Software platforms aiming to support digital deliberation usually suffer from information overload due to the large amount of feedback that is often provided. While Machine Learning and Natural Language Processing techniques can alleviate this drawback, their complex structure discourages users from trusting their results. This paper proposes two Explainable Artificial Intelligence models to enhance transparency and trust in the modus operandi of the above techniques, specifically with respect to the clustering and summarization of citizens' feedback uploaded to a digital deliberation platform.
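For illustration only, the sketch below shows one simple way a feedback-clustering step can be made more transparent to users: grouping comments with TF-IDF features and k-means, then explaining each cluster through its highest-weighted centroid terms. The pipeline, the scikit-learn components, and the sample comments are assumptions for demonstration and do not reproduce the paper's two XAI models.

    # Illustrative sketch only: cluster short feedback texts and surface the
    # top TF-IDF terms per cluster as a simple, human-readable explanation.
    # The TF-IDF + KMeans pipeline and sample comments are assumptions,
    # not the approach described in the paper.
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    comments = [
        "The new bike lanes make commuting safer.",
        "Bike lanes reduce traffic accidents downtown.",
        "Park maintenance budgets should be increased.",
        "More funding is needed for public parks.",
    ]

    # Represent each comment as a TF-IDF vector.
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(comments)

    # Group comments into two clusters.
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    terms = vectorizer.get_feature_names_out()

    # Explain each cluster by the highest-weighted terms in its centroid,
    # hinting at why comments were grouped together.
    for cluster_id, centroid in enumerate(kmeans.cluster_centers_):
        top_terms = [terms[i] for i in centroid.argsort()[::-1][:3]]
        members = [c for c, label in zip(comments, kmeans.labels_) if label == cluster_id]
        print(f"Cluster {cluster_id}: top terms = {top_terms}")
        for m in members:
            print(f"  - {m}")

Printing the top centroid terms next to the grouped comments gives participants a lightweight rationale for the grouping, which is the kind of transparency the paper's clustering explanations aim to provide.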
Pages: 15