Parsimonious neural networks learn interpretable physical laws

Cited by: 19
Authors
Desai, Saaketh [1 ,2 ]
Strachan, Alejandro [1 ,2 ]
Affiliations
[1] Purdue Univ, Sch Mat Engn, W Lafayette, IN 47907 USA
[2] Purdue Univ, Birck Nanotechnol Ctr, W Lafayette, IN 47907 USA
Keywords
DOI
10.1038/s41598-021-92278-w
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject classification codes
07; 0710; 09
Abstract
Machine learning is playing an increasing role in the physical sciences, and significant progress has been made towards embedding domain knowledge into models. Less explored is its use to discover interpretable physical laws from data. We propose parsimonious neural networks (PNNs) that combine neural networks with evolutionary optimization to find models that balance accuracy with parsimony. The power and versatility of the approach are demonstrated by developing models for classical mechanics and for predicting the melting temperature of materials from fundamental properties. In the first example, the resulting PNNs are easily interpretable as Newton's second law, expressed as a non-trivial time integrator that exhibits time-reversibility and conserves energy, where the parsimony is critical to extracting the underlying symmetries from the data. In the second case, the PNNs not only find the celebrated Lindemann melting law, but also new relationships that outperform it in the Pareto sense of parsimony vs. accuracy.
Pages: 9
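To make the recipe in the abstract concrete, here is a minimal, self-contained sketch of the PNN idea: a genetic algorithm searches over tiny network "genomes" (here just an activation choice and a flag that snaps weights to integers), each genome is trained by gradient descent, and fitness adds a parsimony penalty to the fit error. The genome encoding, the complexity costs, and the 0.01 penalty weight are illustrative assumptions for a toy problem, not the authors' implementation.

```python
# Toy illustration of a parsimonious-neural-network search:
# evolve simple model "genomes" whose fitness trades accuracy for parsimony.
# All encodings and weights below are assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Data generated by a simple underlying "law": y = 2x + 1, plus noise.
x = np.linspace(-1.0, 1.0, 64)
y = 2.0 * x + 1.0 + 0.01 * rng.standard_normal(x.size)

# Candidate building blocks; linear units carry no complexity penalty,
# mirroring the idea that parsimony favors simple, interpretable forms.
ACTIVATIONS = {0: lambda z: z, 1: np.tanh, 2: lambda z: z ** 2}
DERIVATIVES = {0: lambda z: np.ones_like(z),
               1: lambda z: 1.0 - np.tanh(z) ** 2,
               2: lambda z: 2.0 * z}
ACT_COST = {0: 0.0, 1: 1.0, 2: 1.0}

def fit_and_score(act_id, snap):
    """Train y ~ act(w*x + b) by gradient descent; optionally snap w, b
    to the nearest integers (a crude parsimony move on the parameters)."""
    w, b = 1.0, 0.0
    for _ in range(2000):
        z = w * x + b
        g = 2.0 * (ACTIVATIONS[act_id](z) - y) * DERIVATIVES[act_id](z) / x.size
        w -= 0.1 * np.sum(g * x)
        b -= 0.1 * np.sum(g)
    if snap:
        w, b = float(round(w)), float(round(b))
    mse = float(np.mean((ACTIVATIONS[act_id](w * x + b) - y) ** 2))
    return mse, w, b

def fitness(genome):
    """Assumed fitness: fit error plus a weighted parsimony penalty."""
    act_id, snap = genome
    mse, _, _ = fit_and_score(act_id, snap)
    complexity = ACT_COST[act_id] + (0.0 if snap else 1.0)
    return mse + 0.01 * complexity

# Tiny genetic search: tournament selection, random mutation, replace worst.
population = [(int(rng.integers(3)), bool(rng.integers(2))) for _ in range(8)]
for _ in range(20):
    i, j = rng.choice(len(population), size=2, replace=False)
    parent = min(population[i], population[j], key=fitness)
    if rng.random() < 0.3:  # mutate: draw a fresh random genome
        child = (int(rng.integers(3)), bool(rng.integers(2)))
    else:
        child = parent
    worst = max(range(len(population)), key=lambda k: fitness(population[k]))
    population[worst] = child

best = min(population, key=fitness)
mse, w, b = fit_and_score(*best)
print(f"best genome={best}: w={w}, b={b}, mse={mse:.4g}")
```

On this toy data the search should settle on the linear activation with integer-snapped weights, recovering y = 2x + 1: the parsimony penalty steers the search away from equally accurate but less interpretable nonlinear fits, which is the trade-off the abstract describes.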
Related papers
50 records in total
  • [41] Neural networks learn the art of chemical synthesis
    Service, Robert F.
    SCIENCE, 2017, 357(6346): 27
  • [42] Learn to Recognize Actions Through Neural Networks
    Lan, Zhenzhong
    MM'15: PROCEEDINGS OF THE 2015 ACM MULTIMEDIA CONFERENCE, 2015: 657-660
  • [43] Neural networks and how machines learn meaning
    Niculaescu, Oana
    XRDS: CROSSROADS, 2019, 25(3): 64-66
  • [44] Recursive neural networks learn to localize faces
    Bianchini, M.
    Maggini, M.
    Sarti, L.
    Scarselli, F.
    PATTERN RECOGNITION LETTERS, 2005, 26(12): 1885-1895
  • [45] Using recurrent neural networks to learn the structure of interconnection networks
    Goudreau, M. W.
    Giles, C. L.
    NEURAL NETWORKS, 1995, 8(5): 793-804
  • [46] Do neural nets learn statistical laws behind natural language?
    Takahashi, Shuntaro
    Tanaka-Ishii, Kumiko
    PLOS ONE, 2017, 12(12)
  • [47] Watch and learn: a generalized approach for transferrable learning in deep neural networks via physical principles
    Sprague, Kyle
    Carrasquilla, Juan
    Whitelam, Stephen
    Tamblyn, Isaac
    MACHINE LEARNING: SCIENCE AND TECHNOLOGY, 2021, 2(2)
  • [48] Using Decision Lists to Construct Interpretable and Parsimonious Treatment Regimes
    Zhang, Yichi
    Laber, Eric B.
    Tsiatis, Anastasios
    Davidian, Marie
    BIOMETRICS, 2015, 71(4): 895-904
  • [49] Taking laws out of trained neural networks
    Majewski, Jaroslaw
    Wojtyna, Ryszard
    SPA 2010: SIGNAL PROCESSING ALGORITHMS, ARCHITECTURES, ARRANGEMENTS, AND APPLICATIONS CONFERENCE PROCEEDINGS, 2010: 21-24
  • [50] GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks
    Huang, Qiang
    Yamada, Makoto
    Tian, Yuan
    Singh, Dinesh
    Chang, Yi
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35(7): 6968-6972