Neural networks can learn to utilize correlated auxiliary noise

Times Cited: 6
Authors
Ahmadzadegan, Aida [1 ,2 ,3 ]
Simidzija, Petar [4 ]
Li, Ming [5 ]
Kempf, Achim [1 ,3 ,6 ]
Affiliations
[1] Perimeter Inst Theoret Phys, Waterloo, ON N2L 2Y5, Canada
[2] ForeQast Technol Ltd, Waterloo, ON N2L 5M1, Canada
[3] Univ Waterloo, Dept Appl Math, Waterloo, ON N2L 3G1, Canada
[4] Univ British Columbia, Dept Phys & Astron, Vancouver, BC V6T 1Z4, Canada
[5] Univ Waterloo, Cheriton Sch Comp Sci, Waterloo, ON N2L 3G1, Canada
[6] Univ Waterloo, Inst Quantum Comp, Waterloo, ON N2L 3G1, Canada
Funding
Natural Sciences and Engineering Research Council of Canada; Australian Research Council;
DOI
10.1038/s41598-021-00502-4
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
We demonstrate that neural networks that process noisy data can learn to exploit, when available, access to auxiliary noise that is correlated with the noise on the data. In effect, the network learns to use the correlated auxiliary noise as an approximate key to decipher its noisy input data. An example of naturally occurring correlated auxiliary noise is the noise due to decoherence. Our results could, therefore, also be of interest, for example, for machine-learned quantum error correction.
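The mechanism described in the abstract can be illustrated with a minimal sketch that is not the paper's actual setup: a closed-form linear least-squares "network" trained to recover a clean signal from noisy data, with and without access to an auxiliary channel whose noise is correlated with the noise on the data. All variable names, noise levels, and the linear model itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
x = rng.normal(size=N)            # clean signal to be recovered
n = rng.normal(size=N)            # noise corrupting the data
e = 0.3 * rng.normal(size=N)      # small decorrelating noise on the auxiliary channel
y = x + n                         # noisy input data
a = n + e                         # auxiliary noise, correlated with n

def fit_mse(features, target):
    """Fit a linear least-squares model on the given feature columns
    and return its mean-squared reconstruction error on the target."""
    F = np.column_stack(features + [np.ones(len(target))])  # add bias column
    w, *_ = np.linalg.lstsq(F, target, rcond=None)
    pred = F @ w
    return float(np.mean((pred - target) ** 2))

mse_plain = fit_mse([y], x)       # denoise from the noisy data alone
mse_aux = fit_mse([y, a], x)      # denoise using the auxiliary noise channel too

print(f"MSE without auxiliary noise: {mse_plain:.3f}")
print(f"MSE with auxiliary noise:    {mse_aux:.3f}")
```

With these noise levels, the model given the auxiliary channel effectively subtracts its best estimate of `n` from `y` (using `a` as an approximate key, as the abstract puts it) and achieves a markedly lower error, roughly 0.08 versus roughly 0.5 without the channel.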
Pages: 8