Interpretability and Fairness in Machine Learning: A Formal Methods Approach

Cited by: 0
Authors
Ghosh, Bishwamittra [1 ]
Affiliations
[1] Natl Univ Singapore, Sch Comp, Singapore, Singapore
Keywords
DOI: Not available
CLC number
TP18 [Artificial intelligence theory]
Subject classification
081104; 0812; 0835; 1405
Abstract
The last decades have witnessed significant progress in machine learning, with applications in safety-critical domains such as medicine, law, education, and transportation. In these high-stakes domains, machine learning predictions have far-reaching consequences for end-users. With the aim of applying machine learning for societal good, there have been increasing efforts to regulate its use by requiring interpretability, fairness, robustness, and privacy in predictions. Towards responsible and trustworthy machine learning, this dissertation proposes two research themes: interpretability and fairness of machine learning classifiers. In particular, we design algorithms to learn interpretable rule-based classifiers, formally verify fairness, and explain the sources of unfairness. Prior approaches to these problems are often limited in scalability, accuracy, or both. To overcome these limitations, we closely integrate automated reasoning and formal methods with fairness and interpretability to develop scalable and accurate solutions.
Pages: 7083-7084
Page count: 2
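The abstract mentions formally verifying the fairness of classifiers. As a purely illustrative sketch (not the method developed in the dissertation), the snippet below checks a toy rule-based classifier for statistical parity by exhaustively enumerating its Boolean inputs under an assumed uniform input distribution; all names (classify, statistical_parity_difference, the feature names) are hypothetical.

# Illustrative sketch only: a brute-force statistical-parity check for a tiny
# rule-based classifier over a fully enumerable feature space. Scalable
# verification would use formal methods; here we simply enumerate.
from itertools import product

def classify(age_over_40, income_high, prior_default):
    # Toy rule-based classifier: approve if income is high and no prior default.
    return int(income_high and not prior_default)

def statistical_parity_difference():
    # Difference in positive-prediction rates between the two groups induced by
    # the protected attribute (here: age_over_40), assuming all non-protected
    # feature combinations are equally likely.
    rates = {}
    for protected in (0, 1):
        outcomes = [
            classify(protected, income_high, prior_default)
            for income_high, prior_default in product((0, 1), repeat=2)
        ]
        rates[protected] = sum(outcomes) / len(outcomes)
    return abs(rates[1] - rates[0])

if __name__ == "__main__":
    spd = statistical_parity_difference()
    print(f"Statistical parity difference: {spd:.2f}")
    print("Fair w.r.t. statistical parity (tolerance 0.1):", spd <= 0.1)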