Improving convolutional neural networks for cosmological fields with random permutation

Cited: 0
Authors
Zhong, Kunhao [1 ]
Gatti, Marco [1 ]
Jain, Bhuvnesh [1 ]
Affiliations
[1] Univ Penn, Dept Phys & Astron, Philadelphia, PA 19104 USA
Keywords
DARK ENERGY SURVEY; DEEP LEARNING APPROACH; H II REGIONS; WEAK; INFERENCE; PEAKS; IDENTIFICATION; ASTROPHYSICS; REIONIZATION; STATISTICS;
DOI
10.1103/PhysRevD.110.043535
CLC Number
P1 [Astronomy];
Discipline Code
0704;
Abstract
Convolutional neural networks (CNNs) have recently been applied to cosmological fields such as weak lensing mass maps and galaxy maps. However, cosmological maps differ in several ways from the vast majority of images that CNNs have been tested on: they are stochastic, typically have low signal-to-noise per pixel, and exhibit correlations on all scales. Further, the cosmology goal is a regression problem aimed at inferring posteriors on parameters that must be unbiased. We explore simple CNN architectures and present a novel approach of regularization and data augmentation to improve their performance for lensing mass maps. We find robust improvement by using a mixture of pooling and shuffling of the pixels in the deep layers. The random permutation regularizes the network in the low signal-to-noise regime and effectively augments the existing data. We use simulation-based inference to show that the model outperforms CNN designs in the literature. Including systematic uncertainties such as intrinsic alignments, we find a 30% improvement over unoptimized CNNs and the power spectrum in the constraints on the S8 parameter for simulated Stage-III surveys. We explore various statistical errors corresponding to next-generation surveys and find comparable improvements. We expect that our approach will also have applications to other cosmological fields, such as galaxy maps or 21-cm maps.
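The core idea described in the abstract, mixing standard pooling with a random permutation of deep-layer pixels during training, can be sketched as follows. This is a minimal illustration assuming a PyTorch implementation; the module names (RandomPixelShuffle, DeepBlock) and the choice to apply one shared permutation across channels are illustrative assumptions, not the authors' actual code.

import torch
import torch.nn as nn

class RandomPixelShuffle(nn.Module):
    # Randomly permute the spatial pixels of a feature map during training.
    # One shared permutation is applied to every channel (assumption), so
    # channel content is preserved while spatial structure at this depth
    # is discarded; the layer acts as the identity at inference time.
    def forward(self, x):
        if not self.training:
            return x
        b, c, h, w = x.shape
        idx = torch.randperm(h * w, device=x.device)
        return x.flatten(2)[:, :, idx].reshape(b, c, h, w)

class DeepBlock(nn.Module):
    # One deep-layer block mixing a convolution, the permutation
    # regularizer, and conventional average pooling.
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU()
        self.shuffle = RandomPixelShuffle()  # permutation regularizer
        self.pool = nn.AvgPool2d(2)          # standard pooling

    def forward(self, x):
        x = self.act(self.conv(x))
        return self.pool(self.shuffle(x))

Because a fresh permutation is drawn on every forward pass, each batch sees a different rearrangement of the deep-layer pixels, which is one plausible reading of how the shuffling "effectively augments the existing data" in the low signal-to-noise regime.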
Pages: 18