A universal training scheme and the resulting universality for machine learning phases

Cited by: 3
Authors
Tseng, Yuan-Heng [1 ]
Jiang, Fu-Jiun [1 ]
Huang, C-Y [2 ]
Affiliations
[1] Natl Taiwan Normal Univ, Dept Phys, 88,Sec 4,Ting Chou Rd, Taipei 116, Taiwan
[2] Tunghai Univ, Dept Appl Phys, 1727,Sec 4,Taiwan Blvd, Taichung 40704, Taiwan
Keywords
NEURAL-NETWORK; TRANSITIONS
DOI
10.1093/ptep/ptac173
Chinese Library Classification
O4 [Physics]
Subject Classification Code
0702
Abstract
An autoencoder (AE) and a generative adversarial network (GAN) are trained only once on a one-dimensional (1D) lattice of 200 sites. Moreover, the AE contains only one hidden layer consisting of two neurons, and both the generator and the discriminator of the GAN are made up of two neurons as well. The training set employed to train both of these unsupervised neural networks (NNs) is composed of two artificial configurations. Remarkably, despite their simple architectures, both the built AE and GAN have precisely determined the critical points of several models, including the three-dimensional classical O(3) model, the two-dimensional generalized classical XY model, the two-dimensional two-state Potts model, and the one-dimensional Bose-Hubbard model. In addition, compared with conventional unsupervised NN approaches, the built AE and GAN gain a speed-up in calculation by a factor of several thousand. The results presented here, as well as those shown previously in the literature, suggest that when phase transitions are considered, an elegant universal neural network that is extremely efficient and applicable to broad physical systems can be constructed with ease. In particular, since an NN trained with two configurations can be applied to many models, it is likely that, where machine learning is concerned, the majority of phase transitions belong to a class having two elements, i.e. the Ising class.
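To make the setup described in the abstract concrete, below is a minimal sketch of such a two-neuron autoencoder on a 200-site 1D lattice, trained only once on two artificial configurations. The specific choices here, fully ordered all-(+1) and all-(-1) states as the two training configurations, linear activations, plain gradient descent, and the mean network output as an order-parameter-like signal, are illustrative assumptions, not necessarily the authors' exact construction.

import numpy as np

# Minimal sketch (not the authors' published code): a 200 -> 2 -> 200
# autoencoder with a single two-neuron hidden layer and linear activations,
# trained only once on two artificial configurations, assumed here to be
# the fully ordered all-(+1) and all-(-1) states.

rng = np.random.default_rng(0)
L = 200                                   # 1D lattice of 200 sites

X = np.vstack([np.ones(L), -np.ones(L)])  # two training configurations, shape (2, 200)

W_enc = rng.normal(scale=0.01, size=(L, 2))   # encoder weights, 200 -> 2
W_dec = rng.normal(scale=0.01, size=(2, L))   # decoder weights, 2 -> 200

lr = 0.2
for _ in range(2000):
    H = X @ W_enc                         # hidden activations, shape (2, 2)
    X_hat = H @ W_dec                     # reconstructions, shape (2, 200)
    err = X_hat - X
    # Gradient descent on the mean squared reconstruction error.
    grad_dec = H.T @ err / X.size
    grad_enc = X.T @ (err @ W_dec.T) / X.size
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

def output_signal(config):
    # Mean output of the trained AE for one configuration, used here as an
    # order-parameter-like quantity (an assumption of this sketch).
    return float(np.abs((config @ W_enc @ W_dec).mean()))

# The signal stays near 1 for ordered (low-temperature-like) configurations
# and drops toward 0 for random (high-temperature-like) ones; scanning
# sampled configurations across the tuning parameter would locate the
# critical point where the signal changes abruptly.
print(output_signal(np.ones(L)))                       # close to 1
print(output_signal(rng.choice([-1.0, 1.0], size=L)))  # close to 0

Because the network is trained just once on the two artificial configurations rather than retrained per model and per parameter value, evaluating it on new configurations is a single forward pass, which is consistent with the large speed-up the abstract reports over conventionally trained unsupervised NNs.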
Pages: 15
Related Papers
50 records in total
  • [31] Toward the explainability, transparency, and universality of machine learning for behavioral classification in neuroscience
    Goodwin, Nastacia L.
    Nilsson, Simon R. O.
    Choong, Jia Jie
    Golden, Sam A.
    CURRENT OPINION IN NEUROBIOLOGY, 2022, 73
  • [32] A Machine Learning Primer (Final study); Deep Learning is a universal
    Mashita T.
    Kyokai Joho Imeji Zasshi/Journal of the Institute of Image Information and Television Engineers, 2019, 73 (01): 85 - 89
  • [33] Machine learning towards a universal situational awareness (USA)
    Rubin, SH
    Lee, GK
    CCCT 2003, VOL6, PROCEEDINGS: COMPUTER, COMMUNICATION AND CONTROL TECHNOLOGIES: III, 2003, : 72 - 74
  • [34] Universal consistency of extreme learning machine for RBFNs case
    Liu, Xia
    Wan, Anhua
    NEUROCOMPUTING, 2015, 168 : 1132 - 1137
  • [35] Universal Machine Learning Kohn–Sham Hamiltonian for Materials
    Zhong, Yang
    Yu, Hongyu
    Yang, Jihui
    Guo, Xingyu
    Xiang, Hongjun
    Gong, Xin-Gao
    Chinese Physics Letters, 2024, 41 (07) : 100 - 115
  • [36] Towards a Universal Code Formatter through Machine Learning
    Parr, Terence
    Vinju, Jurgen
    PROCEEDINGS OF THE 2016 ACM SIGPLAN INTERNATIONAL CONFERENCE ON SOFTWARE LANGUAGE ENGINEERING (SLE'16), 2016, : 137 - 151
  • [37] Towards a universal code formatter through machine learning
    2016, Association for Computing Machinery, 2 Penn Plaza, Suite 701, New York, NY 10121-0701, United States
  • [38] Systematic softening in universal machine learning interatomic potentials
    Deng, Bowen
    Choi, Yunyeong
    Zhong, Peichen
    Riebesell, Janosh
    Anand, Shashwat
    Li, Zhuohan
    Jun, KyuJung
    Persson, Kristin A.
    Ceder, Gerbrand
    npj Computational Materials, 11 (1)
  • [39] Harnessing Machine Learning Algorithms in Universal Electronic Payments
    Boustany, Charbel T.
    Razzouk, Jean Paul G.
    Farah, Elias H.
    Proceedings of 2022 1st International Conference on Informatics, ICI 2022, 2022, : 71 - 74
  • [40] On extension theorems and their connection to universal consistency in machine learning
    Christmann, Andreas
    Dumpert, Florian
    Xiang, Dao-Hong
    ANALYSIS AND APPLICATIONS, 2016, 14 (06) : 795 - 808