ContextMix: A context-aware data augmentation method for industrial visual inspection systems

Cited by: 1
Authors
Kim, Hyungmin [1 ,2 ]
Kim, Donghun [1 ]
Ahn, Pyunghwan [3 ]
Suh, Sungho [4 ,5 ]
Cho, Hansang [2 ]
Kim, Junmo [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol KAIST, Sch Elect Engn, Daejeon 34141, South Korea
[2] Samsung Electromech, Suwon 16674, South Korea
[3] LG AI Res, 30 Magokjungang 10 Ro, Seoul, South Korea
[4] German Res Ctr Artificial Intelligence DFKI, D-67663 Kaiserslautern, Germany
[5] RPTU Kaiserslautern Landau, Dept Comp Sci, Kaiserslautern, Germany
Keywords
Data augmentation; Regional dropout; Industrial manufacturing; Inspection systems;
DOI
10.1016/j.engappai.2023.107842
CLC classification number
TP [Automation technology; computer technology]
Discipline classification code
0812
Abstract
While deep neural networks have achieved remarkable performance, data augmentation has emerged as a crucial strategy to mitigate overfitting and enhance network performance. These techniques hold particular significance in industrial manufacturing contexts. Recently, image mixing-based methods have been introduced, exhibiting improved performance on public benchmark datasets. However, their application to industrial tasks remains challenging. The manufacturing environment generates massive amounts of unlabeled data on a daily basis, with only a few occurrences of abnormal data, leading to severe data imbalance. Building well-balanced datasets is therefore not straightforward because of the high cost of labeling, even though it is a crucial step for enhancing productivity. For this reason, we introduce ContextMix, a method tailored for industrial applications and benchmark datasets. ContextMix generates novel data by resizing entire images and pasting them into regions of other images within the batch. This enables the network to learn discriminative features across varying object sizes from the resized images and to train informative secondary features for object recognition from the occluded images. With only the minimal additional computation cost of image resizing, ContextMix improves performance over existing augmentation techniques. We evaluate its effectiveness on classification, detection, and segmentation tasks with various network architectures on public benchmark datasets. The proposed method also yields improved results across a range of robustness tasks, and its efficacy in real industrial environments is particularly noteworthy, as demonstrated on the passive component dataset.
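The abstract describes the core operation of ContextMix: a whole image from the batch is resized and pasted into a region of another image, so the network sees both rescaled objects and occluded context. Below is a minimal, hypothetical PyTorch sketch of such a batch-level mix; the CutMix-style area-proportional label weighting and the region_scale sampling range are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F


def contextmix(images, labels, num_classes, region_scale=(0.3, 0.7)):
    """Hypothetical sketch of a ContextMix-style batch augmentation.

    A randomly sized region of each image is replaced by a *resized whole*
    image from another sample in the batch, and the labels are mixed in
    proportion to the pasted area (CutMix-style weighting, assumed here).
    """
    b, c, h, w = images.shape
    perm = torch.randperm(b)  # pair each sample with another one in the batch

    # Sample the pasted-region size as a fraction of the canvas (assumed range).
    scale = torch.empty(1).uniform_(*region_scale).item()
    rh, rw = max(1, int(h * scale)), max(1, int(w * scale))

    # Random top-left corner of the pasted region.
    top = torch.randint(0, h - rh + 1, (1,)).item()
    left = torch.randint(0, w - rw + 1, (1,)).item()

    # Resize the *entire* partner image (not a crop) into the region.
    resized = F.interpolate(images[perm], size=(rh, rw),
                            mode='bilinear', align_corners=False)
    mixed = images.clone()
    mixed[:, :, top:top + rh, left:left + rw] = resized

    # Mix one-hot labels by area ratio, as in CutMix.
    lam = 1.0 - (rh * rw) / (h * w)
    onehot = F.one_hot(labels, num_classes).float()
    mixed_labels = lam * onehot + (1.0 - lam) * onehot[perm]
    return mixed, mixed_labels
```

In a training loop, the soft mixed labels would be consumed by a cross-entropy loss that accepts probability targets, e.g. `-(mixed_labels * F.log_softmax(logits, dim=1)).sum(dim=1).mean()`.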
Pages: 15
Related papers
50 records in total
  • [31] Context-Aware Software Ecosystem for Industrial Products
    Tomlein, Matus
    2016 13TH WORKING IEEE/IFIP CONFERENCE ON SOFTWARE ARCHITECTURE (WICSA), 2016, : 279 - 280
  • [32] Correcting the Autocorrect: Context-Aware Typographical Error Correction via Training Data Augmentation
    Shah, Kshitij
    de Melo, Gerard
    PROCEEDINGS OF THE 12TH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION (LREC 2020), 2020, : 6930 - 6936
  • [33] Context-aware visual exploration of molecular databases
    Di Fatta, Giuseppe
    Fiannaca, Antonino
    Rizzo, Riccardo
    Urso, Alfonso
    Berthold, Michael R.
    Gaglio, Salvatore
    ICDM 2006: SIXTH IEEE INTERNATIONAL CONFERENCE ON DATA MINING, WORKSHOPS, 2006, : 136 - 141
  • [34] Labeling lateral prefrontal sulci using spherical data augmentation and context-aware training
    Lyu, Ilwoo
    Bao, Shuxing
    Hao, Lingyan
    Yao, Jewelia
    Miller, Jacob A.
    Voorhies, Willa
    Taylor, Warren D.
    Bunge, Silvia A.
    Weiner, Kevin S.
    Landman, Bennett A.
    NEUROIMAGE, 2021, 229
  • [35] Simultaneous Visual Context-aware Path Prediction
    Iesaki, Haruka
    Hirakawa, Tsubasa
    Yamashita, Takayoshi
    Fujiyoshi, Hironobu
    Ishii, Yasunori
    Kozuka, Kazuki
    Fujimura, Ryota
    VISAPP: PROCEEDINGS OF THE 15TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS, VOL 4: VISAPP, 2020, : 741 - 748
  • [36] A survey on context-aware mobile visual recognition
    Min, Weiqing
    Jiang, Shuqiang
    Wang, Shuhui
    Xu, Ruihan
    Cao, Yushan
    Herranz, Luis
    He, Zhiqiang
    MULTIMEDIA SYSTEMS, 2017, 23 (06) : 647 - 665
  • [37] VISUAL FEATURES FOR CONTEXT-AWARE SPEECH RECOGNITION
    Gupta, Abhinav
    Miao, Yajie
    Neves, Leonardo
    Metze, Florian
    2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2017, : 5020 - 5024
  • [39] Data Management for Context-Aware Computing
    Xue, Wenwei
    Pung, Hungkeng
    Ng, Wenlong
    Gu, Tao
    EUC 2008: PROCEEDINGS OF THE 5TH INTERNATIONAL CONFERENCE ON EMBEDDED AND UBIQUITOUS COMPUTING, VOL 1, MAIN CONFERENCE, 2008, : 492 - +
  • [40] Assessing Context-Aware Data Consistency
    Mylavarapu, Goutam
    Viswanathan, K. Ashwin
    Thomas, Johnson P.
    2019 IEEE/ACS 16TH INTERNATIONAL CONFERENCE ON COMPUTER SYSTEMS AND APPLICATIONS (AICCSA 2019), 2019,