A FULLY BAYESIAN GRADIENT-FREE SUPERVISED DIMENSION REDUCTION METHOD USING GAUSSIAN PROCESSES

Citations: 0
Authors
Gautier, Raphael [1 ]
Pandita, Piyush [2 ]
Ghosh, Sayan [2 ]
Mavris, Dimitri [1 ]
Affiliations
[1] Georgia Inst Technol, Aerosp Syst Design Lab, Atlanta, GA 30308 USA
[2] GE Res, Probabilist Design, Niskayuna, NY 12309 USA
Keywords
surrogate modeling; high-dimensional input space; dimensionality reduction; uncertainty quantification; active subspace; Bayesian inference; Gaussian process regression; RIDGE FUNCTIONS; REGRESSION; SUBSPACE; DESIGN; MODELS;
DOI
Not available
CLC Number
T [Industrial Technology]
Subject Classification Code
08
Abstract
Modern-day engineering problems are ubiquitously characterized by sophisticated computer codes that map parameters or inputs to an underlying physical process. In other situations, experimental setups are used to model the physical process in a laboratory, ensuring high precision while being costly in materials and logistics. In both scenarios, only a limited amount of data can be generated by querying the expensive information source at a finite number of inputs or designs. This problem is compounded further in the presence of a high-dimensional input space. State-of-the-art parameter space dimension reduction methods, such as the active subspace method, aim to identify a subspace of the original input space that is sufficient to explain the output response. These methods are restricted by their reliance on gradient evaluations or copious data, making them inadequate for expensive problems without direct access to gradients. The proposed methodology is gradient-free and fully Bayesian, as it quantifies uncertainty in both the low-dimensional subspace and the surrogate model parameters. This enables a full quantification of epistemic uncertainty and robustness to limited data availability. It is validated on multiple datasets from engineering and science and compared to two other state-of-the-art methods based on four aspects: (a) recovery of the active subspace, (b) deterministic prediction accuracy, (c) probabilistic prediction accuracy, and (d) training time. The comparison shows that the proposed method improves active subspace recovery and predictive accuracy, in both the deterministic and probabilistic sense, when only a few model observations are available for training, at the cost of increased training time.
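To make the setting concrete, the following minimal Python sketch (an illustration under stated assumptions, not the authors' implementation) shows one simplified gradient-free route to an active subspace: fit a Gaussian process surrogate to a limited set of model runs, differentiate its posterior mean analytically, and take the leading eigenvectors of the gradient outer-product matrix C = E[grad f(x) grad f(x)^T]. The test function, kernel, hyperparameters, and sample sizes are all illustrative assumptions; the paper's fully Bayesian method goes further by inferring a posterior distribution over the projection matrix rather than the point estimate computed here.

# Minimal sketch: gradient-free active subspace via a GP surrogate.
# NOT the paper's method; test function and hyperparameters are assumed.
import numpy as np

rng = np.random.default_rng(0)
d = 10                         # original (high) input dimension
n = 60                         # limited number of expensive model runs
w_true = np.zeros(d)
w_true[:2] = [0.8, 0.6]        # hidden unit-norm ridge direction

def f(X):
    # Stand-in for an expensive model: a ridge function f(x) = g(w^T x).
    return np.sin(2.0 * X @ w_true)

X = rng.uniform(-1.0, 1.0, size=(n, d))
y = f(X)

# GP regression with an isotropic RBF kernel (zero mean; fixed,
# untuned hyperparameters -- an assumption made for brevity).
ell, sigma_n = 2.0, 1e-4

def rbf(A, B):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / ell ** 2)

K = rbf(X, X) + sigma_n * np.eye(n)
alpha = np.linalg.solve(K, y)  # weights of the GP posterior mean

def grad_mean(x):
    # Analytic gradient of the posterior mean m(x) = sum_i alpha_i k(x, x_i):
    # d/dx k(x, x_i) = (x_i - x) / ell^2 * k(x, x_i) for the RBF kernel.
    k = rbf(x[None, :], X).ravel()
    return ((X - x) / ell ** 2 * (k * alpha)[:, None]).sum(axis=0)

# Monte Carlo estimate of C = E[grad grad^T], then eigendecomposition;
# the leading eigenvectors span the estimated active subspace.
Xs = rng.uniform(-1.0, 1.0, size=(2000, d))
G = np.stack([grad_mean(x) for x in Xs])
C = G.T @ G / len(Xs)
eigvals, eigvecs = np.linalg.eigh(C)
w_hat = eigvecs[:, -1]         # leading direction (1-D active subspace)

print("subspace alignment |w_hat . w_true| =", abs(w_hat @ w_true))

Because the gradients here come from the surrogate's posterior mean rather than the model itself, no gradient evaluations of the expensive code are needed; quantifying posterior uncertainty over the projection itself, as the abstract describes, is what distinguishes the fully Bayesian approach from this point-estimate sketch.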
Pages: 19-51
Page count: 33
Related Papers
50 results in total
  • [1] A Scalable Gradient-Free Method for Bayesian Experimental Design with Implicit Models
    Zhang, Jiaxin
    Bi, Sirui
    Zhang, Guannan
    [J]. 24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS), 2021, 130
  • [2] Gradient-Free Construction of Active Subspaces for Dimension Reduction in Complex Models with Applications to Neutronics
    Coleman, Kayla D.
    Lewis, Allison
    Smith, Ralph C.
    Williams, Brian
    Morris, Max
    Khuwaileh, Bassam
    [J]. SIAM-ASA JOURNAL ON UNCERTAINTY QUANTIFICATION, 2019, 7 (01): 117-142
  • [3] An adaptive Bayesian approach to gradient-free global optimization
    Yu, Jianneng
    Morozov, Alexandre V.
    [J]. NEW JOURNAL OF PHYSICS, 2024, 26 (02)
  • [4] Multiobjective Optimization for Turbofan Engine Using Gradient-Free Method
    Chen, Ran
    Li, Yuzhe
    Sun, Xi-Ming
    Chai, Tianyou
    [J]. IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2024, 54 (07): 4345-4357
  • [5] Learning Supervised PageRank with Gradient-Based and Gradient-Free Optimization Methods
    Bogolubsky, Lev
    Gusev, Gleb
    Raigorodskii, Andrei
    Tikhonov, Aleksey
    Zhukovskii, Maksim
    Dvurechensky, Pavel
    Gasnikov, Alexander
    Nesterov, Yurii
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [6] Bayesian Active Learning with Fully Bayesian Gaussian Processes
    Riis, Christoffer
    Antunes, Francisco
    Huttel, Frederik Boe
    Azevedo, Carlos Lima
    Pereira, Francisco Camara
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [7] Gradient-free method for nonsmooth distributed optimization
    Li, Jueyou
    Wu, Changzhi
    Wu, Zhiyou
    Long, Qiang
    [J]. JOURNAL OF GLOBAL OPTIMIZATION, 2015, 61 (02): 325-340
  • [8] Randomized Gradient-Free Distributed Algorithms through Sequential Gaussian Smoothing
    Chen, Xing-Min
    Gao, Chao
    Zhang, Ming-Kun
    Qin, Yi-Da
    [J]. PROCEEDINGS OF THE 36TH CHINESE CONTROL CONFERENCE (CCC 2017), 2017: 8407-8412
  • [9] Incremental Gradient-Free Method for Nonsmooth Distributed Optimization
    Li, Jueyou
    Li, Guoquan
    Wu, Zhiyou
    Wu, Changzhi
    Wang, Xiangyu
    Lee, Jae-Myung
    Jung, Kwang-Hyo
    [J]. JOURNAL OF INDUSTRIAL AND MANAGEMENT OPTIMIZATION, 2017, 13 (04): 1841-1857