Learning hidden chemistry with deep neural networks

Cited by: 2
Authors
Nguyen, Tien-Cuong [1 ]
Nguyen, Van-Quyen [2 ]
Ngo, Van-Linh [3 ]
Than, Quang-Khoat [3 ]
Pham, Tien-Lam [2 ,4 ]
Affiliations
[1] VNU Univ Sci, 334 Nguyen Trai, Hanoi, Vietnam
[2] Phenikaa Univ, Phenikaa Inst Adv Study PIAS, Hanoi 12116, Vietnam
[3] Hanoi Univ Sci & Technol, 1 Dai Co Viet, Hanoi, Vietnam
[4] Phenikaa Univ, Fac Comp Sci, Hanoi 12116, Vietnam
Keywords
Deep learning; Materials informatics; Materials discovery; Materials similarity
DOI
10.1016/j.commatsci.2021.110784
Chinese Library Classification
T [Industrial Technology];
Discipline Code
08;
Abstract
We demonstrate a machine learning approach designed to extract hidden chemistry/physics to facilitate new materials discovery. In particular, we propose a novel method for learning latent knowledge from material structure data, in which machine learning models are developed to estimate the likelihood that an atom can be paired with a chemical environment in an observed material. For this purpose, we trained deep neural networks that take information on the atom of interest and its environment and estimate this likelihood. The models were then used to build recommendation systems, which suggest a ranked list of atoms for an environment within a structure. The center atom of that environment was then replaced with the various recommended atoms to generate new structures. Based on these recommendations, we also propose a method for measuring the dissimilarity between atoms and, through hierarchical cluster analysis and visualization with the multidimensional scaling (MDS) algorithm, show that this dissimilarity reflects the chemistry of the elements. Finally, our models were applied to the discovery of new structures derived from the well-known magnetic material Nd2Fe14B. Our models propose 108 new structures, 71 of which are confirmed by first-principles calculations to converge to local-minimum-energy structures with formation energies below +0.1 eV.
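To make the recommendation and dissimilarity steps concrete, the sketch below (not the authors' code; the descriptor sizes, network architecture, and the Euclidean distance between recommendation-score profiles are illustrative assumptions) shows how a pairing-probability network, an element ranking for one environment, and a profile-based dissimilarity matrix with hierarchical clustering and MDS could be wired together in Python.

```python
# A minimal sketch of the workflow described above, not the authors' implementation.
# Descriptor dimensions, network shape, and the Euclidean distance between
# recommendation-score profiles are assumptions made for illustration only.
import numpy as np
import torch
import torch.nn as nn
from scipy.cluster.hierarchy import linkage
from sklearn.manifold import MDS

ATOM_DIM, ENV_DIM = 16, 64  # hypothetical descriptor sizes


class PairingNet(nn.Module):
    """Deep network estimating the probability that an atom fits a chemical environment."""

    def __init__(self, atom_dim=ATOM_DIM, env_dim=ENV_DIM, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(atom_dim + env_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, atom_vec, env_vec):
        # Concatenate atom and environment descriptors, output a pairing probability.
        return self.net(torch.cat([atom_vec, env_vec], dim=-1)).squeeze(-1)


model = PairingNet()  # in practice, trained on atom-environment pairs from observed structures

# Recommendation step: score every candidate element for one environment and rank them.
elements = ["Fe", "Co", "Ni", "Nd", "B"]          # illustrative subset of the periodic table
atom_vecs = torch.randn(len(elements), ATOM_DIM)  # placeholder element descriptors
env_vec = torch.randn(ENV_DIM)                    # placeholder environment descriptor
with torch.no_grad():
    scores = model(atom_vecs, env_vec.expand(len(elements), -1))
ranking = [elements[i] for i in torch.argsort(scores, descending=True).tolist()]

# Dissimilarity step: compare elements by their recommendation-score profiles over many
# environments, then apply hierarchical clustering and a 2-D MDS embedding.
n_envs = 200
env_batch = torch.randn(n_envs, ENV_DIM)
with torch.no_grad():
    profiles = np.stack([model(a.expand(n_envs, -1), env_batch).numpy() for a in atom_vecs])
dissimilarity = np.linalg.norm(profiles[:, None, :] - profiles[None, :, :], axis=-1)
tree = linkage(dissimilarity[np.triu_indices(len(elements), k=1)], method="average")
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dissimilarity)
```

With a trained model, `ranking` would correspond to the recommended substitutions for one environment, while `tree` and `coords` would feed the dendrogram and the 2-D elemental map; here the random placeholders only demonstrate the data flow.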
Pages: 7