Disentangled Representation Learning for Astronomical Chemical Tagging

Citations: 11
Authors
de Mijolla, Damien [1 ]
Ness, Melissa Kay [2 ,3 ]
Viti, Serena [1 ,4 ]
Wheeler, Adam Joseph [2 ]
Affiliations
[1] UCL, Dept Phys & Astron, Gower St, London WC1E 6BT, England
[2] Columbia Univ, Dept Astron, Pupin Phys Labs, New York, NY 10027 USA
[3] Flatiron Inst, Ctr Computat Astrophys, 162 Fifth Ave, New York, NY 10010 USA
[4] Leiden Univ, Leiden Observ, POB 9513, NL-2300 RA Leiden, Netherlands
Source
ASTROPHYSICAL JOURNAL | 2021, Vol. 913, No. 1
Keywords
STARS; ABUNDANCES; NITROGEN; CARBON
DOI
10.3847/1538-4357/abece1
Chinese Library Classification
P1 [Astronomy]
Discipline Code
0704
Abstract
Modern astronomical surveys are observing spectral data for millions of stars. These spectra contain chemical information that can be used to trace the Galaxy's formation and chemical enrichment history. However, extracting the information from spectra and making precise and accurate chemical abundance measurements is challenging. Here we present a data-driven method for isolating the chemical factors of variation in stellar spectra from those of other parameters (i.e., T_eff, log g, [Fe/H]). This enables us to build a spectral projection for each star with these parameters removed. We do this with no ab initio knowledge of elemental abundances themselves and hence bypass the uncertainties and systematics associated with modeling that relies on synthetic stellar spectra. To remove known nonchemical factors of variation, we develop and implement a neural network architecture that learns a disentangled spectral representation. We simulate our recovery of chemically identical stars using the disentangled spectra in a synthetic APOGEE-like data set. We show that this recovery declines as a function of the signal-to-noise ratio but that our neural network architecture outperforms simpler modeling choices. Our work demonstrates the feasibility of data-driven abundance-free chemical tagging.
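The abstract describes a neural network that learns a spectral representation disentangled from the known nonchemical parameters (T_eff, log g, [Fe/H]) and uses it to project each star's spectrum onto a common reference so that only chemical differences remain. The sketch below illustrates one common way such a setup can be arranged: a conditional autoencoder whose decoder receives the known labels alongside the latent code. The class names, layer sizes, and the conditional-autoencoder design are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch (assumption), not the paper's architecture: a conditional
# autoencoder in which the decoder is conditioned on the known labels
# (T_eff, log g, [Fe/H]) so the latent code need not encode them.
import torch
import torch.nn as nn

N_PIX = 7214     # number of spectral pixels (APOGEE-like value, assumed)
N_LABELS = 3     # known nonchemical factors: T_eff, log g, [Fe/H]
N_LATENT = 16    # size of the (ideally chemistry-only) latent code


class ConditionalAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # The encoder sees only the spectrum and compresses it to a latent code.
        self.encoder = nn.Sequential(
            nn.Linear(N_PIX, 512), nn.ReLU(),
            nn.Linear(512, N_LATENT),
        )
        # The decoder also receives the known labels, so reconstructing the
        # spectrum does not require the latent code to carry T_eff/log g/[Fe/H].
        self.decoder = nn.Sequential(
            nn.Linear(N_LATENT + N_LABELS, 512), nn.ReLU(),
            nn.Linear(512, N_PIX),
        )

    def forward(self, spectrum, labels):
        z = self.encoder(spectrum)
        recon = self.decoder(torch.cat([z, labels], dim=-1))
        return recon, z


def project_to_reference(model, spectra, reference_labels):
    """Decode each star's latent code with one fixed set of reference labels,
    so that differences between the projected spectra reflect chemistry alone
    (in the ideal, fully disentangled case)."""
    with torch.no_grad():
        z = model.encoder(spectra)
        ref = reference_labels.expand(z.shape[0], -1)  # broadcast to the batch
        return model.decoder(torch.cat([z, ref], dim=-1))


if __name__ == "__main__":
    model = ConditionalAutoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Toy batch of normalized spectra and standardized labels (random stand-ins).
    spectra = torch.randn(32, N_PIX)
    labels = torch.randn(32, N_LABELS)

    optimizer.zero_grad()
    recon, _ = model(spectra, labels)
    loss = nn.functional.mse_loss(recon, spectra)  # reconstruction term only
    loss.backward()
    optimizer.step()

    # Project all stars onto a single reference set of stellar parameters.
    reference = torch.zeros(1, N_LABELS)
    projected = project_to_reference(model, spectra, reference)
    print(projected.shape)  # (32, N_PIX)
```

A reconstruction loss alone does not force the latent code to be free of label information; the paper develops a dedicated architecture and training objective to enforce that disentanglement, which this sketch omits. Given a disentangled model, chemically identical stars can in principle be identified by comparing their label-free projections.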
Pages: 15