Correlated Bayesian Co-Training for Virtual Metrology

Cited by: 2
Authors
Nguyen, Cuong [1 ]
Li, Xin [2 ]
Blanton, Shawn [3 ]
Li, Xiang [4 ]
Affiliations
[1] Inst Infocomm Res, Machine Intellect Dept, Singapore, Singapore
[2] Duke Univ, Dept Elect & Comp Engn, Durham, NC 27708 USA
[3] Carnegie Mellon Univ, Dept Elect & Comp Engn, Pittsburgh, PA 15213 USA
[4] Singapore Inst Mfg Technol, Mfg Syst Res Grp, Singapore, Singapore
Keywords
Co-training; multi-state regression; semi-supervised learning; regression
DOI
10.1109/TSM.2022.3217350
Chinese Library Classification (CLC): T [Industrial Technology]
Discipline code: 08
Abstract
A rising challenge in manufacturing data analysis is training robust regression models from limited labeled data. In this work, we investigate a semi-supervised regression scenario in which a manufacturing process operates on multiple mutually correlated states. We exploit this inter-state correlation to improve regression accuracy by developing a novel co-training method, namely Correlated Bayesian Co-training (CBCT). CBCT adopts a block Sparse Bayesian Learning framework to enhance multiple individual regression models that share the same support. Additionally, CBCT places a unified prior distribution on both the coefficient magnitudes and the inter-state correlation. The model parameters are estimated using maximum-a-posteriori (MAP) estimation, while the hyper-parameters are estimated using the expectation-maximization (EM) algorithm. Experimental results from two industrial examples show that CBCT successfully leverages inter-state correlation to reduce the modeling error by up to 79.40% compared to other conventional approaches. This suggests that CBCT is of great value to multi-state manufacturing applications.
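The abstract's core machinery, sparse Bayesian regression whose hyper-parameters are fitted by EM while the coefficient posterior gives the MAP estimate, can be illustrated with a minimal sketch. This is a simplified single-state version (classic Sparse Bayesian Learning with independent precisions), not the paper's full block-sparse, correlation-coupled CBCT model; the function name and the EM update schedule here are illustrative assumptions.

```python
import numpy as np

def sbl_regression(X, y, n_iter=100, tol=1e-6):
    """Sparse Bayesian Learning for linear regression via EM.

    Model: y = X w + noise, with a zero-mean Gaussian prior
    w_i ~ N(0, 1/alpha_i). The per-coefficient precisions alpha_i
    and the noise precision beta are hyper-parameters learned by EM;
    a large alpha_i prunes coefficient i toward zero (sparsity).
    """
    n, d = X.shape
    alpha = np.ones(d)   # prior precisions over coefficients
    beta = 1.0           # noise precision
    for _ in range(n_iter):
        # E-step: Gaussian posterior over w; its mean is the MAP estimate.
        Sigma = np.linalg.inv(beta * X.T @ X + np.diag(alpha))
        mu = beta * Sigma @ X.T @ y
        # M-step: EM re-estimates of the hyper-parameters.
        alpha_new = 1.0 / (mu**2 + np.diag(Sigma))
        resid = y - X @ mu
        beta = n / (resid @ resid + np.trace(X @ Sigma @ X.T))
        if np.max(np.abs(alpha_new - alpha)) < tol:
            alpha = alpha_new
            break
        alpha = alpha_new
    return mu, alpha, beta

# Toy data: only 2 of 10 features carry signal.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
w_true = np.zeros(10)
w_true[0], w_true[3] = 2.0, -1.5
y = X @ w_true + 0.1 * rng.standard_normal(200)
mu, alpha, beta = sbl_regression(X, y)
```

CBCT extends this idea by tying the states together: the coefficient vectors of all states share a common support (block sparsity), and the prior additionally couples their inter-state correlation, which is what lets unlabeled data from one state improve the regressors of the others.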
Pages: 28-36 (9 pages)
Related Papers (50 total)
  • [31] Multi-Label Co-Training
    Xing, Yuying
    Yu, Guoxian
    Domeniconi, Carlotta
    Wang, Jun
    Zhang, Zili
    [J]. PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 2882 - 2888
  • [32] Disagreement-Based Co-Training
    Tanha, Jafar
    van Someren, Maarten
    Afsarmanesh, Hamideh
    [J]. 2011 23RD IEEE INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2011), 2011, : 803 - 810
  • [33] A collaborative ability measurement for co-training
    Shen, D
    Zhang, J
    Su, J
    Zhou, GD
    Tan, CL
    [J]. NATURAL LANGUAGE PROCESSING - IJCNLP 2004, 2005, 3248 : 436 - 445
  • [34] Co-Training an Observer and an Evading Target
    Brandenburger, Andre
    Hoffmann, Folker
    Charlish, Alexander
    [J]. 2021 IEEE 24TH INTERNATIONAL CONFERENCE ON INFORMATION FUSION (FUSION), 2021, : 101 - 108
  • [35] Analyzing co-training style algorithms
    Wang, Wei
    Zhou, Zhi-Hua
    [J]. MACHINE LEARNING: ECML 2007, PROCEEDINGS, 2007, 4701 : 454 - +
  • [36] Applying Co-Training to reference resolution
    Müller, C
    Rapp, S
    Strube, M
    [J]. 40TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, PROCEEDINGS OF THE CONFERENCE, 2002, : 352 - 359
  • [37] Contrastive Co-training for Diversified Recommendation
    Ma, Xiyao
    Hu, Qian
    Gao, Zheng
    AbdelHady, Mohamed
    [J]. 2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [38] Co-training study for Online Regression
    Sousa, Ricardo
    Gama, Joao
    [J]. 33RD ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, 2018, : 529 - 531
  • [39] CoRec: A Co-Training Approach for Recommender Systems
    da Costa, Arthur F.
    Manzato, Marcelo G.
    Campello, Ricardo J. G. B.
    [J]. 33RD ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, 2018, : 696 - 703
  • [40] Automatic Image Annotation Based on Co-training
    Ke, Xiao
    Chen, Guolong
    [J]. JOURNAL OF ALGORITHMS & COMPUTATIONAL TECHNOLOGY, 2014, 8 (01) : 1 - 16