Constitutive modeling of heterogeneous materials by interpretable neural networks: A review

Cited by: 0
Authors
Bilotta, Antonio [1 ]
Turco, Emilio [2 ]
Affiliations
[1] Department of Informatics, Modelling, Electronics and System Engineering (DIMES), University of Calabria, Via P. Bucci, Cubo 42/C, 87036 Rende (CS), Italy
[2] Department of Architecture, Design and Urban Planning (DADU), University of Sassari, Palazzo del Pou Salit, Piazza Duomo 6, 07041 Alghero (SS), Italy
Keywords
Adversarial machine learning; Generative adversarial networks; Neural network models
DOI: 10.3934/nhm.2025012
Abstract
Is it possible to interpret the modeling decisions made by a neural network trained to simulate the constitutive behavior of simple or complex materials? The interpretability of a neural network is a crucial issue that has been studied since the first appearance of this type of modeling tool, and it is certainly not specific to applications in the constitutive modeling of heterogeneous materials. All areas of application, such as computer vision, biomedicine, and speech, suffer from this opacity, which is why neural networks are often referred to as black-box models. The present work highlights the efforts dedicated to this aspect in the constitutive modeling of the behavior of path-independent materials, reviewing both more standard neural networks and those adopting, more or less strongly, the specific viewpoint of interpretability. © 2025 the Author(s), licensee AIMS Press.
Pages: 232–253
Related papers (50 in total)
  • [21] Interpretable neural networks: principles and applications
    Liu, Zhuoyang
    Xu, Feng
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2023, 6
  • [22] Interpretable Compositional Convolutional Neural Networks
    Shen, Wen
    Wei, Zhihua
    Huang, Shikun
    Zhang, Binbin
    Fan, Jiaqi
    Zhao, Ping
    Zhang, Quanshi
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 2971 - 2978
  • [23] Author Correction: Graph neural networks for an accurate and interpretable prediction of the properties of polycrystalline materials
    Minyi Dai
    Mehmet F. Demirel
    Yingyu Liang
    Jia-Mian Hu
    npj Computational Materials, 8
  • [24] An interpretable framework of data-driven turbulence modeling using deep neural networks
    Jiang, Chao
    Vinuesa, Ricardo
    Chen, Ruilin
    Mi, Junyi
    Laima, Shujin
    Li, Hui
    PHYSICS OF FLUIDS, 2021, 33 (05)
  • [25] Modeling magnetic materials using artificial neural networks
    Saliah, HH
    Lowther, DA
    Forghani, B
    IEEE TRANSACTIONS ON MAGNETICS, 1998, 34 (05) : 3056 - 3059
  • [26] Modeling of materials with fading memory using neural networks
    Oeser, Markus
    Freitag, Steffen
    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, 2009, 78 (07) : 843 - 862
  • [27] A thermodynamics-informed neural network for elastoplastic constitutive modeling of granular materials
    Su, M. M.
    Yu, Y.
    Chen, T. H.
    Guo, N.
    Yang, Z. X.
    COMPUTER METHODS IN APPLIED MECHANICS AND ENGINEERING, 2024, 430
  • [28] ExplaiNN: interpretable and transparent neural networks for genomics
    Novakovsky, Gherman
    Fornes, Oriol
    Saraswat, Manu
    Mostafavi, Sara
    Wasserman, Wyeth W.
    GENOME BIOLOGY, 2023, 24 (01)
  • [29] Interpretable Deep Neural Networks for Enhancer Prediction
    Kim, Seong Gon
    Theera-Ampornpunt, Nawanol
    Grama, Ananth
    Chaterji, Somali
    PROCEEDINGS 2015 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE, 2015, : 242 - 249
  • [30] Inducing Causal Structure for Interpretable Neural Networks
    Geiger, Atticus
    Wu, Zhengxuan
    Lu, Hanson
    Rozner, Josh
    Kreiss, Elisa
    Icard, Thomas
    Goodman, Noah D.
    Potts, Christopher
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022