Parsimonious neural networks learn interpretable physical laws

Cited by: 19
Authors
Desai, Saaketh [1 ,2 ]
Strachan, Alejandro [1 ,2 ]
Affiliations
[1] Purdue Univ, Sch Mat Engn, W Lafayette, IN 47907 USA
[2] Purdue Univ, Birck Nanotechnol Ctr, W Lafayette, IN 47907 USA
Keywords
DOI
10.1038/s41598-021-92278-w
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Science];
Subject Classification Codes
07; 0710; 09;
Abstract
Machine learning is playing an increasing role in the physical sciences, and significant progress has been made toward embedding domain knowledge into models. Less explored is its use to discover interpretable physical laws from data. We propose parsimonious neural networks (PNNs), which combine neural networks with evolutionary optimization to find models that balance accuracy with parsimony. The power and versatility of the approach are demonstrated by developing models of classical mechanics and models that predict the melting temperature of materials from fundamental properties. In the first example, the resulting PNNs are easily interpretable as Newton's second law, expressed as a non-trivial time integrator that exhibits time reversibility and conserves energy; the parsimony is critical to extracting the underlying symmetries from the data. In the second case, the PNNs not only recover the celebrated Lindemann melting law but also discover new relationships that outperform it in the Pareto sense of parsimony vs. accuracy.
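The core recipe in the abstract, an evolutionary search over candidate models whose fitness adds a parsimony penalty to the data loss, can be illustrated compactly. Below is a minimal, self-contained Python/NumPy sketch, not the authors' implementation: the candidate form of the model, the quantized coefficient set VALUES, and the penalty weight LAMBDA are illustrative assumptions, and the toy task (recovering a one-step integrator for a harmonic oscillator) only mirrors the paper's first example in spirit.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: a harmonic oscillator (k = m = 1) sampled every dt.
    dt = 0.1
    t = np.arange(0.0, 20.0, dt)
    x, v = np.cos(t), -np.sin(t)      # exact position and velocity
    x_next = x[1:]                    # target: position one step ahead

    # Genome: indices into a small "interpretable" coefficient set for the
    # candidate update x_{t+1} = a*x_t + b*v_t*dt.
    VALUES = np.array([-1.0, 0.0, 0.5, 1.0, 2.0])
    LAMBDA = 1e-3                     # parsimony penalty weight (assumed)

    def fitness(genome):
        a, b = VALUES[genome[0]], VALUES[genome[1]]
        pred = a * x[:-1] + b * v[:-1] * dt
        mse = np.mean((pred - x_next) ** 2)
        # Parsimony: penalize coefficients other than the trivial 0 or 1,
        # i.e. prefer updates that need no free multiplicative parameters.
        complexity = sum(c not in (0.0, 1.0) for c in (a, b))
        return mse + LAMBDA * complexity

    # Tiny genetic algorithm: keep the better half, refill with mutated copies.
    POP, GENS, MUT = 30, 60, 0.2
    pop = rng.integers(0, len(VALUES), size=(POP, 2))
    for _ in range(GENS):
        order = np.argsort([fitness(g) for g in pop])
        elite = pop[order[: POP // 2]]
        children = elite[rng.integers(0, len(elite), POP - len(elite))]
        mutate = rng.random(children.shape) < MUT
        children[mutate] = rng.integers(0, len(VALUES), mutate.sum())
        pop = np.vstack([elite, children])

    best = min(pop, key=fitness)
    print(f"discovered update: x_next = {VALUES[best[0]]}*x + {VALUES[best[1]]}*v*dt")
    # expected: a = 1.0, b = 1.0, i.e. the forward-Euler step x + v*dt

In the paper itself the search runs over the weights and activation functions of a full neural network rather than two coefficients; the sketch only conveys the accuracy-versus-parsimony trade-off that the abstract describes.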
Pages: 9
Related Papers
(50 in total)
  • [31] KerGNNs: Interpretable Graph Neural Networks with Graph Kernels
    Feng, Aosong
    You, Chenyu
    Wang, Shiqiang
    Tassiulas, Leandros
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 6614 - 6622
  • [32] Interpretable convolutional neural networks via feedforward design
    Kuo, C-C. Jay
    Zhang, Min
    Li, Siyang
    Duan, Jiali
    Chen, Yueru
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2019, 60 : 346 - 359
  • [33] E pluribus unum interpretable convolutional neural networks
    Dimas, George
    Cholopoulou, Eirini
    Iakovidis, Dimitris K.
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [34] Incorporating Interpretable Output Constraints in Bayesian Neural Networks
    Yang, Wanqian
    Lorch, Lars
    Graule, Moritz A.
    Lakkaraju, Himabindu
    Doshi-Velez, Finale
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [35] Neural networks that learn temporal sequences by selection
    Dehaene, S.
    Changeux, J. P.
    Nadal, J. P.
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 1987, 84 (09) : 2727 - 2731
  • [37] How neural networks learn from experience
    Hinton, G. E.
    SCIENTIFIC AMERICAN, 1992, 267 (03) : 145 - 151
  • [38] Can neural networks learn finite elements?
    Novo, Julia
    Terres, Eduardo
    JOURNAL OF COMPUTATIONAL AND APPLIED MATHEMATICS, 2025, 453
  • [39] Neural networks learn the motions of molecular machines
    Grant, Timothy
    NATURE METHODS, 2021, 18 (08) : 869 - 871
  • [40] Neural Networks Learn to Speed Up Simulations
    Edwards, Chris
    COMMUNICATIONS OF THE ACM, 2022, 65 (05) : 27 - 29