Integrated Evolutionary Learning: An Artificial Intelligence Approach to Joint Learning of Features and Hyperparameters for Optimized, Explainable Machine Learning

Cited by: 5
Authors
de Lacy, Nina [1 ]
Ramshaw, Michael J. [1 ]
Kutz, J. Nathan [2 ]
Institutions
[1] Univ Utah, Huntsman Mental Hlth Inst, Dept Psychiat, DeLacy Lab, Salt Lake City, UT 84112 USA
[2] Univ Washington, AI Inst Dynam Syst, Dept Appl Math, Seattle, WA USA
Funding
U.S. National Science Foundation;
Keywords
artificial intelligence; machine learning; deep learning; optimization; explainability; feature selection; automated; hyperparameter tuning; ALGORITHMS; MOTION;
DOI
10.3389/frai.2022.832530
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Artificial intelligence and machine learning techniques have proved fertile methods for attacking difficult problems in medicine and public health. These techniques have garnered strong interest for the analysis of the large, multi-domain open science datasets that are increasingly available in health research. Discovery science in large datasets is challenging given the unconstrained nature of the learning environment, where there may be a large number of potential predictors and appropriate ranges for model hyperparameters are unknown. Moreover, explainability is likely to be at a premium in order to support future hypothesis generation or analysis. Here, we present a novel method that addresses these challenges by exploiting evolutionary algorithms to optimize machine learning discovery science while exploring a large solution space and minimizing bias. We demonstrate that our approach, called integrated evolutionary learning (IEL), provides an automated, adaptive method for jointly learning features and hyperparameters while furnishing explainable models, in which the original features used to make predictions may be obtained even with artificial neural networks. In IEL, the machine learning algorithm of choice is nested inside an evolutionary algorithm, which selects features and hyperparameters over generations on the basis of an information function to converge on an optimal solution. We apply IEL to three gold-standard machine learning algorithms in challenging, heterogeneous biobehavioral data: deep learning with artificial neural networks, decision tree-based techniques, and baseline linear models. Using our novel IEL approach, artificial neural networks achieved ≥95% accuracy, sensitivity, and specificity and 45–73% R² in classification, with substantial gains over default settings.
IEL may be applied to a wide range of less-constrained or unconstrained discovery science problems where the practitioner wishes to jointly learn features and hyperparameters in an adaptive, principled manner within the same algorithmic process. This approach offers significant flexibility, enlarges the solution space, and mitigates bias that may arise from manual or semi-manual hyperparameter tuning and feature selection, and it presents the opportunity to select the inner machine learning algorithm based on the results of optimized learning for the problem at hand.
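The nested structure described in the abstract — an evolutionary outer loop that jointly evolves a feature subset and a hyperparameter, scored by a fitness function standing in for the inner machine learning model — can be sketched as a toy example. This is an illustrative sketch, not the authors' IEL implementation: the informative-feature set, the single learning-rate hyperparameter, and all operator settings are hypothetical stand-ins for training and scoring a real inner model.

```python
import random

# Toy setup: 8 candidate features, of which only 0, 3 and 5 are informative.
# In real IEL-style use, fitness() would train the inner ML model on the
# selected features with the candidate hyperparameters and return its score.
INFORMATIVE = {0, 3, 5}
N_FEATURES = 8

def fitness(genome):
    """Score a (feature_mask, learning_rate) candidate: informative features
    raise the score, noise features and a poorly placed hyperparameter lower it."""
    mask, lr = genome
    chosen = {i for i, bit in enumerate(mask) if bit}
    score = len(chosen & INFORMATIVE) - 0.25 * len(chosen - INFORMATIVE)
    score -= abs(lr - 0.1)  # hypothetical optimum near lr = 0.1
    return score

def mutate(genome, rate=0.1):
    """Flip feature bits with small probability; jitter the hyperparameter."""
    mask, lr = genome
    mask = [1 - b if random.random() < rate else b for b in mask]
    lr = max(1e-4, lr + random.gauss(0, 0.02))
    return (mask, lr)

def crossover(a, b):
    """One-point crossover on the feature mask; average the hyperparameter."""
    point = random.randrange(1, N_FEATURES)
    return (a[0][:point] + b[0][point:], (a[1] + b[1]) / 2)

def evolve(pop_size=30, generations=40, seed=0):
    """Outer evolutionary loop: select elites, breed and mutate children."""
    random.seed(seed)
    pop = [([random.randint(0, 1) for _ in range(N_FEATURES)],
            random.uniform(0.001, 0.5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]  # selection: keep the top third
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best_mask, best_lr = evolve()
print("selected features:", [i for i, b in enumerate(best_mask) if b])
print("learned hyperparameter:", round(best_lr, 3))
```

Because the best candidate (the feature mask and hyperparameter together) is retained across generations, the loop converges toward the informative features and the well-scoring hyperparameter in one process, mirroring the joint-learning idea in the abstract; explainability follows because the final genome names the original input features directly.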
Pages: 16