Constitutive modeling of heterogeneous materials by interpretable neural networks: A review

Cited by: 0
Authors
Bilotta, Antonio [1 ]
Turco, Emilio [2 ]
Affiliations
[1] Department of Informatics, Modelling, Electronics and System Engineering (DIMES), University of Calabria, Via P. Bucci, Cubo 42/C, 87036 Rende (CS), Italy
[2] Department of Architecture, Design and Urban Planning (DADU), University of Sassari, Palazzo del Pou Salit, Piazza Duomo 6, 07041 Alghero (SS), Italy
Keywords
Adversarial machine learning; Generative adversarial networks; Neural network models
DOI
10.3934/nhm.2025012
Abstract
Is it possible to interpret the modeling decisions made by a neural network trained to simulate the constitutive behavior of simple or complex materials? The interpretability of neural networks is a crucial issue that has been studied since the first appearance of this type of modeling tool, and it is certainly not specific to applications in the constitutive modeling of heterogeneous materials. All areas of application, such as computer vision, biomedicine, and speech processing, suffer from this opacity, which is why neural networks are often referred to as black-box models. The present work highlights the efforts devoted to this aspect in the constitutive modeling of path-independent materials, reviewing both standard neural networks and those adopting, to varying degrees, an interpretability-oriented point of view. © 2025 the Author(s), licensee AIMS Press.
Pages: 232-253