Adaptive sampling design for multi-task learning of Gaussian processes in manufacturing

Cited by: 4
Authors
Mehta, Manan [1 ]
Shao, Chenhui [1 ]
Affiliations
[1] Univ Illinois, Dept Mech Sci & Engn, Urbana, IL 61801 USA
Keywords
Gaussian process; Multi-task learning; Transfer learning; Adaptive sampling; Optimal experimental design; Active learning; Surface shape prediction; ENGINEERING DESIGN; SURFACE; SIMULATION; PREDICTION; SMART;
DOI
10.1016/j.jmsy.2021.09.006
Chinese Library Classification (CLC)
T [Industrial Technology];
Discipline classification code
08;
Abstract
Approximation models (or surrogate models) have been widely used in engineering problems to mitigate the cost of running expensive experiments or simulations. Gaussian processes (GPs) are a popular tool for constructing these models because of their flexibility and computational tractability. The accuracy of such models depends strongly on the density and locations of the sampled points in the parametric space used for training. Previously, multi-task learning (MTL) has been used to learn similar-but-not-identical tasks together, thereby increasing the effective density of training points. Separately, several adaptive sampling strategies have been developed to identify regions of interest for intelligent sampling in single-task learning of GPs. While these two approaches address the density and location constraints individually, sampling design approaches for MTL are lacking. In this paper, we formulate an adaptive sampling strategy for MTL of GPs, further improving the data efficiency and modeling performance of GPs. To this end, we develop variance measures for an MTL framework that effectively identify optimal sampling locations while multiple tasks are learned simultaneously. We demonstrate the effectiveness of the proposed method using a case study on a real-world engine surface dataset, and observe that it leverages both MTL and intelligent sampling to significantly outperform state-of-the-art methods that use either approach alone. The developed sampling design strategy is readily applicable to many problems in various fields.
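The adaptive sampling idea in the abstract, placing the next training point where the model's predictive variance is largest, can be sketched for the single-task building block. The paper's contribution is variance measures for the multi-task setting; the sketch below only illustrates maximum-variance acquisition for a plain single-output GP with an RBF kernel, and every function name and hyperparameter value here is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.2, variance=1.0):
    """Squared-exponential kernel between column vectors A (n,1) and B (m,1)."""
    sq_dist = (A - B.T) ** 2
    return variance * np.exp(-0.5 * sq_dist / length_scale**2)

def gp_posterior_variance(X_train, X_cand, noise=1e-4):
    """Posterior predictive variance of a zero-mean GP at each candidate point.

    Uses the standard identity var(x*) = k(x*,x*) - k*^T (K + noise*I)^{-1} k*,
    computed stably via a Cholesky factorization of the training covariance.
    """
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_cand)          # (n_train, n_cand) cross-covariance
    L = np.linalg.cholesky(K)
    v = np.linalg.solve(L, Ks)                # whitened cross-covariance
    prior_var = np.diag(rbf_kernel(X_cand, X_cand))
    return prior_var - np.sum(v**2, axis=0)

def next_sample(X_train, X_cand):
    """Maximum-variance acquisition: pick the candidate the GP is least sure about."""
    var = gp_posterior_variance(X_train, X_cand)
    return float(X_cand[np.argmax(var), 0])

# Toy 1-D usage: three existing samples, a dense candidate grid on [0, 1].
X_train = np.array([[0.1], [0.5], [0.9]])
X_cand = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
x_next = next_sample(X_train, X_cand)  # lands in a gap far from the training points
```

In the multi-task extension studied in the paper, the scalar kernel above is replaced by a multi-task covariance over (input, task) pairs, so the posterior variance at a candidate point also shrinks when related tasks have nearby observations; the acquisition loop itself is unchanged.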
Pages: 326-337
Page count: 12
Related papers
50 items in total
  • [21] Neural multi-task learning in drug design
    Allenspach, Stephan
    Hiss, Jan A.
    Schneider, Gisbert
    [J]. NATURE MACHINE INTELLIGENCE, 2024, 6 (02) : 124 - 137
  • [22] Multi-task Sparse Structure Learning with Gaussian Copula Models
    Goncalves, Andre R.
    Von Zuben, Fernando J.
    Banerjee, Arindam
    [J]. JOURNAL OF MACHINE LEARNING RESEARCH, 2016, 17 : 1 - 30
  • [24] Multi-task gradient descent for multi-task learning
    Bai, Lu
    Ong, Yew-Soon
    He, Tiantian
    Gupta, Abhishek
    [J]. MEMETIC COMPUTING, 2020, 12 (04) : 355 - 369
  • [25] Robust Estimator based Adaptive Multi-Task Learning
    Zhu, Peiyuan
    Chen, Cailian
    He, Jianping
    Zhu, Shanying
    [J]. 2019 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2019), 2019, : 740 - 747
  • [26] Adaptive multi-task learning for speech to text translation
    Feng, Xin
    Zhao, Yue
    Zong, Wei
    Xu, Xiaona
    [J]. EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING, 2024, 2024 (01):
  • [27] Episodic Multi-Task Learning with Heterogeneous Neural Processes
    Shen, Jiayi
    Zhen, Xiantong
    Wang, Qi
    Worring, Marcel
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [28] Deep Multi-task Gaussian Processes for Survival Analysis with Competing Risks
    Alaa, Ahmed M.
    van der Schaar, Mihaela
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [29] Prioritized Sampling with Intrinsic Motivation in Multi-Task Reinforcement Learning
    D'Eramo, Carlo
    Chalvatzaki, Georgia
    [J]. 2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [30] Learning Rates of Regularized Regression With Multiple Gaussian Kernels for Multi-Task Learning
    Xu, Yong-Li
    Li, Xiao-Xing
    Chen, Di-Rong
    Li, Han-Xiong
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29 (11) : 5408 - 5418