Integrated Evolutionary Learning: An Artificial Intelligence Approach to Joint Learning of Features and Hyperparameters for Optimized, Explainable Machine Learning

Cited: 5
Authors
de Lacy, Nina [1 ]
Ramshaw, Michael J. [1 ]
Kutz, J. Nathan [2 ]
Affiliations
[1] Univ Utah, Huntsman Mental Hlth Inst, Dept Psychiat, DeLacy Lab, Salt Lake City, UT 84112 USA
[2] Univ Washington, AI Inst Dynam Syst, Dept Appl Math, Seattle, WA USA
Source
Frontiers in Artificial Intelligence
Funding
US National Science Foundation;
Keywords
artificial intelligence; machine learning; deep learning; optimization; explainability; feature selection; automated; hyperparameter tuning; ALGORITHMS; MOTION;
DOI
10.3389/frai.2022.832530
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Artificial intelligence and machine learning techniques have proved fertile methods for attacking difficult problems in medicine and public health. These techniques have garnered strong interest for the analysis of the large, multi-domain open science datasets that are increasingly available in health research. Discovery science in large datasets is challenging given the unconstrained nature of the learning environment, where there may be a large number of potential predictors and appropriate ranges for model hyperparameters are unknown. Moreover, explainability is likely to be at a premium in order to engage in future hypothesis generation or analysis. Here, we present a novel method that addresses these challenges by exploiting evolutionary algorithms to optimize machine learning discovery science while exploring a large solution space and minimizing bias. We demonstrate that our approach, called integrated evolutionary learning (IEL), provides an automated, adaptive method for jointly learning features and hyperparameters while furnishing explainable models, where the original features used to make predictions may be obtained even with artificial neural networks. In IEL, the machine learning algorithm of choice is nested inside an evolutionary algorithm, which selects features and hyperparameters over generations on the basis of an information function in order to converge on an optimal solution. We apply IEL to three gold-standard machine learning algorithms in challenging, heterogeneous biobehavioral data: deep learning with artificial neural networks, decision tree-based techniques, and baseline linear models. Using our novel IEL approach, artificial neural networks achieved ≥95% accuracy, sensitivity, and specificity in classification and 45-73% R², with substantial gains over default settings.
IEL may be applied to a wide range of less-constrained or unconstrained discovery science problems where the practitioner wishes to jointly learn features and hyperparameters in an adaptive, principled manner within the same algorithmic process. This approach offers significant flexibility, enlarges the solution space, and mitigates the bias that may arise from manual or semi-manual hyperparameter tuning and feature selection, and it presents the opportunity to select the inner machine learning algorithm based on the results of optimized learning for the problem at hand.
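The nested optimization the abstract describes — an evolutionary outer loop that proposes feature subsets and hyperparameters, scored across generations by an information function from the inner learner — can be sketched as follows. This is an illustrative toy, not the authors' IEL implementation: the fitness function, genome encoding, population size, and mutation scheme are all assumptions, with the toy `fitness` standing in for the cross-validated score of the inner machine learning model.

```python
import random

random.seed(0)

N_FEATURES = 10          # size of the toy feature pool
HP_RANGE = (0.001, 0.1)  # range for one toy hyperparameter (e.g., learning rate)

def fitness(genome):
    """Toy information function: rewards selecting the first three features
    and a hyperparameter near 0.01. In IEL this role is played by the
    inner ML model's evaluation score."""
    mask, lr = genome
    feature_score = sum(mask[:3]) - 0.1 * sum(mask[3:])  # penalize extras
    hp_score = -abs(lr - 0.01) * 10
    return feature_score + hp_score

def random_genome():
    """A genome = (boolean feature mask, hyperparameter value)."""
    mask = [random.random() < 0.5 for _ in range(N_FEATURES)]
    return (mask, random.uniform(*HP_RANGE))

def mutate(genome, rate=0.1):
    """Flip feature bits and jitter the hyperparameter, clamped to range."""
    mask, lr = genome
    mask = [(not b) if random.random() < rate else b for b in mask]
    if random.random() < rate:
        lr = min(max(lr + random.gauss(0, 0.01), HP_RANGE[0]), HP_RANGE[1])
    return (mask, lr)

def evolve(pop_size=30, generations=50, elite=5):
    """Elitist evolutionary loop: keep the top genomes, refill the
    population with mutated copies, and iterate toward convergence."""
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - elite)]
    return max(pop, key=fitness)

best_mask, best_lr = evolve()
```

Because the selected feature mask is part of the winning genome, the features driving the final model's predictions are directly readable even when the inner learner is a neural network — the explainability property the abstract emphasizes.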
Pages: 16