Explainability and Dependability Analysis of Learning Automata based AI Hardware

Cited by: 18
Authors
Shafik, Rishad [1 ]
Wheeldon, Adrian [1 ]
Yakovlev, Alex [1 ]
Affiliations
[1] Newcastle Univ, Sch Engn, Microsyst Res Grp, Newcastle Upon Tyne NE1 7RU, Tyne & Wear, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
DOI
10.1109/iolts50870.2020.9159725
Chinese Library Classification: TP3 [Computing technology, computer technology]
Discipline code: 0812
Abstract
Explainability remains the holy grail in designing next-generation pervasive artificial intelligence (AI) systems. Current neural-network-based AI design methods do not naturally lend themselves to reasoning about the decision-making process from the input data, primarily because of their overwhelming arithmetic complexity. Built on the foundations of propositional logic and game theory, the principles of learning automata are gaining momentum in AI hardware design. Their lean, logic-based processing has demonstrated significant energy-efficiency and performance advantages, and the underlying hierarchical logic can potentially enable by-design explainable and dependable AI hardware. In this paper, we study explainability and dependability using reachability analysis in two simulation environments. First, we use a behavioral SystemC model to analyze the different state transitions. Second, we carry out illustrative fault-injection campaigns in a low-level SystemC environment to study how reachability is affected in the presence of hardware stuck-at-1 faults. Our analysis provides the first insights into explainable decision models and demonstrates the dependability advantages of learning-automata-driven AI hardware design.
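The abstract's core ideas can be illustrated with a minimal sketch. This is an assumption-laden toy model, not the paper's SystemC implementation: a two-action Tsetlin-style learning automaton with `N` memory states per action, with a stuck-at-1 fault forced onto one bit of its state register to show how a hardware fault restricts the set of reachable states.

```python
# Toy model (illustrative only, not the paper's SystemC design): a two-action
# learning automaton whose state register holds one of 2*N states, plus a
# stuck-at-1 fault injected into one bit of that register.

N = 4  # states 0..3 select action 0; states 4..7 select action 1

def step(state, reward):
    """Reward deepens commitment to the current action; penalty moves the
    automaton toward, and eventually across, the decision boundary."""
    if state < N:                                               # action-0 half
        return max(state - 1, 0) if reward else state + 1
    return min(state + 1, 2 * N - 1) if reward else state - 1   # action-1 half

def stuck_at_1(state, bit):
    """Hardware stuck-at-1 fault: the given state-register bit reads as 1."""
    return state | (1 << bit)

def reachable_under_penalty(start, fault_bit=None, steps=8):
    """Enumerate states visited from `start` under repeated penalties,
    optionally forcing one state bit to 1 after every transition."""
    s = start if fault_bit is None else stuck_at_1(start, fault_bit)
    seen = {s}
    for _ in range(steps):
        s = step(s, reward=False)
        if fault_bit is not None:
            s = stuck_at_1(s, fault_bit)
        seen.add(s)
    return seen

fault_free = reachable_under_penalty(0)           # crosses into the action-1 half
faulty = reachable_under_penalty(0, fault_bit=2)  # bit 2 stuck: never drops below 4
```

In the fault-free run, penalties drive the automaton from state 0 across the boundary into the action-1 half, so both actions remain reachable. With bit 2 stuck at 1, every state below 4 becomes unreachable and the automaton is locked into action 1, which is the kind of reachability degradation the fault-injection campaigns probe.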
Pages: 4