Neural-network learning of SPOD latent dynamics

Cited by: 12
Authors
Lario, Andrea [1 ]
Maulik, Romit [2 ]
Schmidt, Oliver T. [3 ]
Rozza, Gianluigi [1 ]
Mengaldo, Gianmarco [4 ]
Affiliations
[1] Scuola Int Super Studi Avanzati SISSA, Trieste, TS, Italy
[2] Argonne Natl Lab ANL, Lemont, IL USA
[3] Univ Calif San Diego UCSD, La Jolla, CA USA
[4] Natl Univ Singapore NUS, Singapore, Singapore
Keywords
Dynamical systems; Reduced order modeling; Neural networks; Deep learning
DOI
10.1016/j.jcp.2022.111475
Chinese Library Classification
TP39 [Computer Applications]
Discipline codes
081203; 0835
Abstract
We aim to reconstruct the latent-space dynamics of high-dimensional, quasi-stationary systems using model order reduction via the spectral proper orthogonal decomposition (SPOD). The proposed method consists of three fundamental steps: first, once the mean flow field has been subtracted from the realizations (also referred to as snapshots), we compress the data from a high-dimensional representation to a lower-dimensional one by constructing the SPOD latent space; second, we obtain the time-dependent coefficients by projecting the fluctuation snapshots onto the SPOD basis, and we learn their evolution in time with the aid of recurrent neural networks; third, we reconstruct the high-dimensional data from the learnt lower-dimensional representation. The proposed method is demonstrated on two different test cases, namely, a compressible jet flow and a geophysical problem known as the Madden-Julian Oscillation. An extensive comparison between SPOD and the equivalent POD-based counterpart is provided, and the differences between the two approaches are highlighted. The numerical results suggest that the proposed model is able to provide low-rank predictions of complex, statistically stationary data and insights into the evolution of phenomena characterized by a specific range of frequencies. The comparison between the POD and SPOD surrogate strategies highlights the need for further work on characterizing the interplay of error between data-reduction techniques and neural-network forecasts. (C) 2022 Elsevier Inc. All rights reserved.
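The three-step pipeline described in the abstract (compress, learn the latent dynamics, reconstruct) can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation: it uses plain POD via the SVD as a stand-in for the frequency-resolved SPOD basis, and a one-step linear autoregressive latent model as a stand-in for the recurrent neural network; all names and parameters are hypothetical.

```python
import numpy as np

# Hypothetical snapshot matrix X (n_space x n_time), standing in for
# high-dimensional flow realizations: a rank-4 signal plus small noise.
rng = np.random.default_rng(0)
n_space, n_time, r = 200, 120, 4
t = np.linspace(0.0, 8.0 * np.pi, n_time)
spatial = rng.standard_normal((n_space, r))
temporal = np.stack([np.sin(t), np.cos(t), np.sin(2 * t), np.cos(2 * t)])
X = spatial @ temporal + 0.01 * rng.standard_normal((n_space, n_time))

# Step 1: subtract the mean flow and build a low-dimensional basis.
# (Plain POD via SVD here; SPOD would instead average windowed FFTs
# of the snapshots to obtain a basis per frequency.)
x_mean = X.mean(axis=1, keepdims=True)
Xf = X - x_mean
U, s, Vt = np.linalg.svd(Xf, full_matrices=False)
Phi = U[:, :r]                       # spatial basis, n_space x r

# Step 2: project the fluctuation snapshots onto the basis to get the
# time-dependent coefficients, then fit a one-step linear model
# a_{k+1} ~= A a_k (a stand-in for the recurrent neural network).
coeffs = Phi.T @ Xf                  # r x n_time latent trajectories
A = coeffs[:, 1:] @ np.linalg.pinv(coeffs[:, :-1])

# Step 3: roll the latent model forward from the initial coefficient
# vector and lift the prediction back to the full-dimensional space.
a = coeffs[:, 0]
preds = []
for _ in range(n_time - 1):
    a = A @ a
    preds.append(a)
X_rec = Phi @ np.stack(preds, axis=1) + x_mean

err = np.linalg.norm(X_rec - X[:, 1:]) / np.linalg.norm(X[:, 1:])
print(f"relative reconstruction error: {err:.3f}")
```

Replacing the linear map with an LSTM trained on the latent trajectories, and the SVD basis with SPOD modes, recovers the structure of the method the paper proposes.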
Pages: 21