Parsimonious neural networks learn interpretable physical laws

Cited by: 19
Authors
Desai, Saaketh [1 ,2 ]
Strachan, Alejandro [1 ,2 ]
Affiliations
[1] Purdue Univ, Sch Mat Engn, W Lafayette, IN 47907 USA
[2] Purdue Univ, Birck Nanotechnol Ctr, W Lafayette, IN 47907 USA
DOI
10.1038/s41598-021-92278-w
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy, Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Machine learning is playing an increasing role in the physical sciences, and significant progress has been made towards embedding domain knowledge into models. Less explored is its use to discover interpretable physical laws from data. We propose parsimonious neural networks (PNNs), which combine neural networks with evolutionary optimization to find models that balance accuracy with parsimony. The power and versatility of the approach are demonstrated by developing models for classical mechanics and for predicting the melting temperature of materials from fundamental properties. In the first example, the resulting PNNs are easily interpretable as Newton's second law, expressed as a non-trivial time integrator that exhibits time-reversibility and conserves energy, where parsimony is critical to extracting underlying symmetries from the data. In the second case, the PNNs not only find the celebrated Lindemann melting law, but also new relationships that outperform it in the Pareto sense of parsimony vs. accuracy.
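The core idea in the abstract, an evolutionary search that scores candidate models by prediction error plus a parsimony penalty, can be sketched in a toy form. This is not the authors' code: it is a minimal illustrative stand-in in which a genetic algorithm evolves coefficient vectors restricted to simple values over a small term library, recovering Newton's second law F = m*a from data. All names, settings, and the term library are assumptions made for illustration.

```python
import random

random.seed(0)

# Toy data for the paper's first example: Newton's second law, F = m * a.
data = [(m, a) for m in (1.0, 2.0, 3.0) for a in (0.5, 1.0, 2.0)]
targets = [m * a for m, a in data]

def features(m, a):
    # Candidate terms the model may combine: bias, m, a, and m*a.
    return [1.0, m, a, m * a]

COEFS = [-1.0, -0.5, 0.0, 0.5, 1.0]  # weights restricted to "simple" values
LAM = 0.1                            # parsimony pressure per nonzero weight

def fitness(ind):
    # Mean squared error on the data plus a penalty for every active term.
    mse = sum(
        (sum(c * f for c, f in zip(ind, features(m, a))) - t) ** 2
        for (m, a), t in zip(data, targets)
    ) / len(data)
    return mse + LAM * sum(c != 0.0 for c in ind)

def child(parents):
    # Uniform crossover of two elites, then per-gene mutation.
    p1, p2 = random.sample(parents, 2)
    genes = [random.choice(pair) for pair in zip(p1, p2)]
    return [random.choice(COEFS) if random.random() < 0.2 else g for g in genes]

def random_ind():
    return [random.choice(COEFS) for _ in range(4)]

# Evolve: keep elites, breed children, inject random immigrants for diversity.
pop = [random_ind() for _ in range(40)]
for _ in range(300):
    pop.sort(key=fitness)
    elites = pop[:10]
    pop = elites + [child(elites) for _ in range(20)] + [random_ind() for _ in range(10)]

best = min(pop, key=fitness)
print(best)
```

With the parsimony penalty active, the search settles on the single-term model that uses only the m*a feature; without the penalty, denser coefficient vectors with comparable error are equally fit, which is the trade-off the paper formalizes.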
Pages: 9
Related Papers
50 records
  • [1] Parsimonious neural networks learn interpretable physical laws
    Desai, Saaketh
    Strachan, Alejandro
    Scientific Reports, 11
  • [2] Interpretable Performance Models for Energetic Materials using Parsimonious Neural Networks
    Appleton, Robert J.
    Salek, Peter
    Casey, Alex D.
    Barnes, Brian C.
    Son, Steven F.
    Strachan, Alejandro
    JOURNAL OF PHYSICAL CHEMISTRY A, 2024, 128 (06): 1142 - 1153
  • [3] Interpretable Convolutional Neural Networks
    Zhang, Quanshi
    Wu, Ying Nian
    Zhu, Song-Chun
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018: 8827 - 8836
  • [4] Robust learning of parsimonious deep neural networks
    Guenter, Valentin Frank Ingmar
    Sideris, Athanasios
    NEUROCOMPUTING, 2024, 566
  • [5] Interpretable generalized additive neural networks
    Kraus, Mathias
    Tschernutter, Daniel
    Weinzierl, Sven
    Zschech, Patrick
    EUROPEAN JOURNAL OF OPERATIONAL RESEARCH, 2024, 317 (02) : 303 - 316
  • [6] Interpretable neural networks: principles and applications
    Liu, Zhuoyang
    Xu, Feng
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2023, 6
  • [7] Interpretable Compositional Convolutional Neural Networks
    Shen, Wen
    Wei, Zhihua
    Huang, Shikun
    Zhang, Binbin
    Fan, Jiaqi
    Zhao, Ping
    Zhang, Quanshi
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021: 2971 - 2978
  • [8] Physics Interpretable Shallow-Deep Neural Networks for Physical System Identification with Unobservability
    Yuan, Jingyi
    Weng, Yang
    2021 21ST IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2021), 2021: 847 - 856
  • [9] Optimization for problem classes: neural networks that learn to learn
    Hüsken, M
    Gayko, JE
    Sendhoff, B
    2000 IEEE SYMPOSIUM ON COMBINATIONS OF EVOLUTIONARY COMPUTATION AND NEURAL NETWORKS, 2000: 98 - 109
  • [10] ExplaiNN: interpretable and transparent neural networks for genomics
    Novakovsky, Gherman
    Fornes, Oriol
    Saraswat, Manu
    Mostafavi, Sara
    Wasserman, Wyeth W.
    GENOME BIOLOGY, 2023, 24 (01)