Explainable artificial intelligence models for estimating the heat capacity of deep eutectic solvents

Cited: 0
Authors
Alatefi, Saad [1 ]
Agwu, Okorie Ekwe [2 ,3 ]
Amar, Menad Nait [4 ]
Djema, Hakim [4 ]
Affiliations
[1] PAAET, Coll Technol Studies, Dept Petr Engn Technol, Kuwait 70654, Kuwait
[2] Univ Teknol PETRONAS, Petr Engn Dept, Seri Iskandar 32610, Perak Darul Rid, Malaysia
[3] Univ Teknol, Inst Sustainable Energy, Ctr Reservoir Dynam CORED, PETRONAS, Seri Iskandar 32610, Perak, Malaysia
[4] Sonatrach, Dept Etud Thermodynam, Div Labs, Ave 1er Novembre, Boumerdes 35000, Algeria
Keywords
Heat capacity; Machine learning; Deep eutectic solvents; DES optimization; Mixtures; Water
DOI
10.1016/j.fuel.2025.135073
CLC number
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering]
Subject classification
0807; 0820
Abstract
Deep eutectic solvents (DES) are emerging as a promising alternative to traditional solvents due to their attractive characteristics, including low toxicity, biodegradability, ease of synthesis, and cost-effectiveness. Accurate knowledge of the physical properties of DES, such as heat capacity, is critical for their effective utilization in various applications. To complement expensive and time-consuming experimental measurements, this study presents a comprehensive investigation into the application of advanced machine learning techniques, including Convolutional Neural Networks (CNN), Extreme Learning Machine (ELM), and Long Short-Term Memory (LSTM), for modelling the heat capacity of DES. The developed models were trained and validated using an extensive experimentally measured database comprising 2,696 data points from 55 DES systems, covering a wide range of compositions and temperatures. The CNN model demonstrated superior performance compared to existing heat capacity correlations, achieving an Average Absolute Percentage Error (AAPE) of 0.982%, an R² of 0.997, and a significantly reduced Root Mean Squared Error (RMSE). The leverage approach was employed to ensure data reliability and confirm the robustness of the proposed models. Moreover, the study utilized the Shapley Additive Explanations (SHAP) method to enhance the interpretability of the CNN model and validate the influence of the input parameters. Physical validation through detailed trend analysis further confirmed the model's ability to preserve the underlying physical relationships. Beyond its predictive accuracy, the proposed CNN model is designed for practical industrial application. This work demonstrates how the model can be implemented to optimize DES selection and formulation in real-world scenarios, as illustrated by a case study presented in the paper.
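The abstract reports AAPE, R², and RMSE figures for the CNN model. As an illustrative sketch only (not the authors' code), the standard definitions of these three metrics can be computed as follows, assuming AAPE is the mean absolute relative deviation expressed in percent:

```python
import numpy as np

def aape(y_true, y_pred):
    """Average Absolute Percentage Error in % (the paper reports 0.982% for CNN)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(100.0 * np.mean(np.abs((y_true - y_pred) / y_true)))

def rmse(y_true, y_pred):
    """Root Mean Squared Error, in the same units as the target (heat capacity)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

Because AAPE is relative while RMSE is absolute, the pair together indicates accuracy across both small and large heat-capacity values.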
Overall, this study provides an efficient and reliable tool for the design and optimization of DES, enabling the rapid evaluation of suitable components and compositions while significantly reducing experimental effort.
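The leverage approach mentioned in the abstract is conventionally carried out via the hat matrix and a Williams plot. A minimal sketch under that assumption (the warning threshold h* = 3(p + 1)/n is the conventional choice, not a value given in the abstract):

```python
import numpy as np

def leverages(X):
    """Diagonal of the hat matrix H = X (X^T X)^{-1} X^T for a descriptor matrix X
    of shape (n_samples, n_descriptors). h_i measures how far sample i lies from
    the bulk of the training descriptor space."""
    X = np.asarray(X, dtype=float)
    H = X @ np.linalg.pinv(X.T @ X) @ X.T
    return np.diag(H).copy()

def warning_leverage(n_samples, n_descriptors):
    """Conventional Williams-plot threshold h* = 3(p + 1)/n; samples with
    h_i > h* fall outside the model's applicability domain."""
    return 3.0 * (n_descriptors + 1) / n_samples
```

In a Williams plot, the leverages are plotted against standardized residuals, so both structurally unusual samples (high h_i) and poorly predicted ones (large residuals) are flagged at a glance.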
Pages: 22
Related papers
50 items
  • [41] Interactions of natural deep eutectic solvents (NADES) with artificial and natural membranes
    Nystedt, Helene Liepelt
    Gronlien, Krister Gjestvang
    Tonnesen, Hanne Hjorth
    JOURNAL OF MOLECULAR LIQUIDS, 2021, 328
  • [43] Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence
    Hassija, Vikas
    Chamola, Vinay
    Mahapatra, Atmesh
    Singal, Abhinandan
    Goel, Divyansh
    Huang, Kaizhu
    Scardapane, Simone
    Spinelli, Indro
    Mahmud, Mufti
    Hussain, Amir
    COGNITIVE COMPUTATION, 2024, 16 (01) : 45 - 74
  • [44] Human attention guided explainable artificial intelligence for computer vision models
    Liu, Guoyang
    Zhang, Jindi
    Chan, Antoni B.
    Hsiao, Janet H.
    NEURAL NETWORKS, 2024, 177
  • [45] Explainable artificial intelligence (XAI): Precepts, models, and opportunities for research in construction
    Love, Peter E. D.
    Fang, Weili
    Matthews, Jane
    Porter, Stuart
    Luo, Hanbin
    Ding, Lieyun
    ADVANCED ENGINEERING INFORMATICS, 2023, 57
  • [46] The application of explainable artificial intelligence methods to models for automatic creativity assessment
    Panfilova, Anastasia S.
    Valueva, Ekaterina A.
    Ilyin, Ivan Y.
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2024, 7
  • [47] Explainable Artificial Intelligence and Cardiac Imaging: Toward More Interpretable Models
    Salih, Ahmed
    Galazzo, Ilaria Boscolo
    Gkontra, Polyxeni
    Lee, Aaron Mark
    Lekadir, Karim
    Raisi-Estabragh, Zahra
    Petersen, Steffen E.
    CIRCULATION-CARDIOVASCULAR IMAGING, 2023, 16 (04) : E014519
  • [48] Explainable Artificial Intelligence Models for Predicting Depression Based on Polysomnographic Phenotypes
    Enkhbayar, Doljinsuren
    Ko, Jaehoon
    Oh, Somin
    Ferdushi, Rumana
    Kim, Jaesoo
    Key, Jaehong
    Urtnasan, Erdenebayar
    BIOENGINEERING-BASEL, 2025, 12 (02)
  • [49] Commonsense Reasoning and Explainable Artificial Intelligence Using Large Language Models
    Krause, Stefanie
    Stolzenburg, Frieder
    ARTIFICIAL INTELLIGENCE-ECAI 2023 INTERNATIONAL WORKSHOPS, PT 1, XAI3, TACTIFUL, XI-ML, SEDAMI, RAAIT, AI4S, HYDRA, AI4AI, 2023, 2024, 1947 : 302 - 319
  • [50] Applying Explainable Artificial Intelligence Models for Understanding Depression Among IT Workers
    Adarsh, V.
    Gangadharan, G. R.
    IT PROFESSIONAL, 2022, 24 (05) : 25 - 29