How easily can neural networks learn relativity?

Cited by: 3
Authors
Chitturi, Kartik [1 ]
Onyisi, Peter [1 ]
Affiliations
[1] University of Texas at Austin, Department of Physics, Austin, TX 78712, USA
DOI
10.1088/1742-6596/1085/4/042020
Chinese Library Classification (CLC)
TP39 [Computer applications]
Subject classification codes
081203; 0835
Abstract
Relativistic invariants are key variables in high energy physics and are believed to be learned implicitly by deep learning approaches. We investigate the minimum network complexity needed to accurately extract such invariants; doing so helps us understand how complex a neural network must be to learn certain functions. We find that neural networks predict the transverse momentum of a collision well, which illustrates that non-linear functions can be learned. Invariant mass, on the other hand, was much more difficult to predict. Further work will be done to determine the reason why, although the non-linearity of the function can be ruled out as the sole reason.
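For context on the two regression targets named in the abstract, their standard definitions in terms of a system's total four-momentum components (E, p_x, p_y, p_z) are sketched below in LaTeX; this is a reference sketch only, assuming natural units (c = 1) and the usual convention that the beam runs along the z axis, since the record does not state the network's input representation.

\[
  p_T = \sqrt{p_x^2 + p_y^2}, \qquad
  m = \sqrt{E^2 - p_x^2 - p_y^2 - p_z^2}
\]

Both expressions are non-linear in the four-momentum components, which fits the abstract's point that non-linearity alone cannot account for the invariant mass being the harder target.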
Pages: 6
Related Papers (50 in total)
  • [31] How Can Evolution Learn?
    Watson, Richard A.
    Szathmáry, Eörs
    TRENDS IN ECOLOGY & EVOLUTION, 2016, 31 (02) : 147 - 157
  • [32] How can artificial neural networks approximate the brain?
    Shao, Feng
    Shen, Zheng
    FRONTIERS IN PSYCHOLOGY, 2023, 13
  • [33] How fast can we learn maximum entropy models of neural populations?
    Ganmor, Elad
    Segev, Ronen
    Schneidman, Elad
    INTERNATIONAL WORKSHOP ON STATISTICAL-MECHANICAL INFORMATICS 2009 (IW-SMI 2009), 2009, 197
  • [34] Recurrent neural networks can learn to implement symbol-sensitive counting
    Rodriguez, P
    Wiles, J
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 10, 1998, 10 : 87 - 93
  • [35] Can point-cloud based neural networks learn fingerprint variability?
    Söllinger, Dominik
    Jöchl, Robert
    Kirchgasser, Simon
    Uhl, Andreas
    PROCEEDINGS OF THE 21ST 2022 INTERNATIONAL CONFERENCE OF THE BIOMETRICS SPECIAL INTEREST GROUP (BIOSIG 2022), 2022, P-329
  • [37] AI Pontryagin or how artificial neural networks learn to control dynamical systems
    Böttcher, Lucas
    Antulov-Fantulin, Nino
    Asikis, Thomas
    NATURE COMMUNICATIONS, 2022, 13 (01)
  • [38] How Deep Neural Networks Learn Compositional Data: The Random Hierarchy Model
    Cagnetta, Francesco
    Petrini, Leonardo
    Tomasini, Umberto M.
    Favero, Alessandro
    Wyart, Matthieu
    PHYSICAL REVIEW X, 2024, 14 (03):
  • [39] Optimization for problem classes - Neural networks that learn to learn
    Hüsken, M
    Gayko, JE
    Sendhoff, B
    2000 IEEE SYMPOSIUM ON COMBINATIONS OF EVOLUTIONARY COMPUTATION AND NEURAL NETWORKS, 2000, : 98 - 109
  • [40] HOW EASILY CAN WE LEARN TO RECOGNIZE REGIONAL WALL MOTION ABNORMALITIES WITH 2D-TRANSESOPHAGEAL ECHOCARDIOGRAPHY
    CLEMENTS, FM
    HILL, R
    KISSLO, J
    ORCHARD, R
    DEBRUIJN, NP
    ANESTHESIOLOGY, 1986, 65 (3A) : A478 - A478