MOOSE Stochastic Tools: A module for performing parallel, memory-efficient in situ stochastic simulations

Cited: 6
Authors
Slaughter, Andrew E. [1 ]
Prince, Zachary M. [1 ]
German, Peter [1 ]
Halvic, Ian [1 ,2 ]
Jiang, Wen [1 ]
Spencer, Benjamin W. [1 ]
Dhulipala, Somayajulu L. N. [1 ]
Gaston, Derek R. [1 ]
Affiliations
[1] Idaho Natl Lab, Idaho Falls, ID 83415 USA
[2] Texas A&M Univ, College Stn, TX 77840 USA
Keywords
Stochastic; Parallel; Multiphysics; MOOSE
DOI
10.1016/j.softx.2023.101345
CLC number
TP31 [Computer software]
Discipline codes
081202; 0835
Abstract
Stochastic simulations are ubiquitous across scientific disciplines. The Multiphysics Object-Oriented Simulation Environment (MOOSE) includes an optional module, stochastic tools, for implementing stochastic simulations. It implements an efficient and scalable scheme for performing stochastic analysis in memory. It can be used to build meta-models that reduce the computational expense of multiphysics problems, as well as to perform analyses requiring up to millions of stochastic simulations. To illustrate, we provide an example that trains a proper orthogonal decomposition reduced-basis model. The impact of the module is detailed by explaining how it is being used for failure analysis of nuclear fuel and for reducing computational burden via dynamic meta-model training. The module is unique in that it provides a single framework for both simulations and stochastic analysis, which is especially valuable for memory-intensive problems and intrusive meta-modeling methods. © 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
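The abstract mentions training a proper orthogonal decomposition (POD) reduced-basis model. A minimal sketch of the underlying linear algebra is given below; this is not the MOOSE stochastic tools API, only an illustration of the POD idea, with all array names and sizes chosen here for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Snapshot matrix: each column is one full-order solution
# (n_dofs degrees of freedom x n_snapshots sampled runs).
n_dofs, n_snapshots = 200, 20
modes_true = rng.standard_normal((n_dofs, 3))
coeffs = rng.standard_normal((3, n_snapshots))
snapshots = modes_true @ coeffs  # rank-3 data by construction

# The POD basis is given by the left singular vectors of the
# snapshot matrix, ordered by singular value (energy content).
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

# Truncate to r modes based on the singular-value decay.
r = 3
basis = U[:, :r]  # shape (n_dofs, r)

# A new solution is compressed to r coefficients and reconstructed
# by projecting onto the reduced basis.
x = modes_true @ rng.standard_normal(3)
x_reduced = basis.T @ x       # r numbers instead of n_dofs values
x_approx = basis @ x_reduced  # reconstruction in the full space

rel_err = np.linalg.norm(x - x_approx) / np.linalg.norm(x)
```

Because the synthetic snapshots are rank-3 by construction, three modes reconstruct any solution in their span essentially exactly; in practice the truncation rank is chosen from the singular-value spectrum of real simulation snapshots.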
Pages: 6
Related Papers (50 records)
  • [1] Embedding memory-efficient stochastic simulators as quantum trajectories
    Elliott, Thomas J.
    Gu, Mile
    PHYSICAL REVIEW A, 2024, 109 (02)
  • [2] Memory-Efficient FPGA Implementation of Stochastic Simulated Annealing
    Shin, Duckgyu
    Onizawa, Naoya
    Gross, Warren J.
    Hanyu, Takahiro
    IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS, 2023, 13 (01) : 108 - 118
  • [3] Memory-efficient Parallel Tensor Decompositions
    Baskaran, Muthu
    Henretty, Tom
    Pradelle, Benoit
    Langston, M. Harper
    Bruns-Smith, David
    Ezick, James
    Lethin, Richard
    2017 IEEE HIGH PERFORMANCE EXTREME COMPUTING CONFERENCE (HPEC), 2017,
  • [4] A Non-deterministic Training Approach for Memory-Efficient Stochastic Neural Networks
    Golbabaei, Babak
    Zhu, Guangxian
    Kan, Yirong
    Zhang, Renyuan
    Nakashima, Yasuhiko
    2023 IEEE 36TH INTERNATIONAL SYSTEM-ON-CHIP CONFERENCE, SOCC, 2023, : 232 - 237
  • [5] Parallel and Memory-efficient Preprocessing for Metagenome Assembly
    Rengasamy, Vasudevan
    Medvedev, Paul
    Madduri, Kamesh
    2017 IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM WORKSHOPS (IPDPSW), 2017, : 283 - 292
  • [6] Parallel Memory-Efficient Processing of BCI Data
    Alexander, Trevor
    Kuh, Anthony
    Hamada, Katsuhiko
    Mori, Hiromu
    Shinoda, Hiroyuki
    Rutkowski, Tomasz
    2014 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA), 2014,
  • [7] A scalable memory-efficient architecture for parallel shared memory switches
    Matthews, Brad
    Elhanany, Itamar
    2007 WORKSHOP ON HIGH PERFORMANCE SWITCHING AND ROUTING, 2007, : 74 - +
  • [8] ReStoCNet: Residual Stochastic Binary Convolutional Spiking Neural Network for Memory-Efficient Neuromorphic Computing
    Srinivasan, Gopalakrishnan
    Roy, Kaushik
    FRONTIERS IN NEUROSCIENCE, 2019, 13
  • [9] A Memory-Efficient PITD Method for Multiscale Electromagnetic Simulations
    Wang, Jiawei
    Mao, Minyu
    Xiang, Ru
    Wang, Huifu
    Lian, Haoyu
    IEEE MICROWAVE AND WIRELESS TECHNOLOGY LETTERS, 2024, 34 (08): : 967 - 970
  • [10] Memory-Efficient Pipeline-Parallel DNN Training
    Narayanan, Deepak
    Phanishayee, Amar
    Shi, Kaiyu
    Chen, Xie
    Zaharia, Matei
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139