Learning deep domain-agnostic features from synthetic renders for industrial visual inspection

Cited by: 3
Authors
Abubakr, Abdelrahman G. [1 ]
Jovancevic, Igor [1 ,2 ]
Mokhtari, Nour Islam [1 ,3 ]
Ben Abdallah, Hamdi [3 ]
Orteu, Jean-Jose [3 ]
Affiliations
[1] Diota, Labege, France
[2] Univ Montenegro, Fac Nat Sci & Math, Podgorica, Montenegro
[3] Univ Toulouse, CNRS, Inst Clement Ader, IMT Mines Albi,INSA,UPS,ISAE, Albi, France
Keywords
deep learning; domain adaptation; domain randomization; augmented autoencoders; synthetic rendering; industrial visual inspection; RECOGNITION; CLASSIFICATION; IMAGES;
DOI
10.1117/1.JEI.31.5.051604
CLC classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline codes
0808; 0809;
Abstract
Deep learning has driven major advances in computer vision. However, deep models require enormous amounts of manually annotated data, and annotation is a laborious and time-consuming task. Collecting large numbers of images also requires the target objects to be available for acquisition, a luxury we usually do not have in the context of automatic inspection of complex mechanical assemblies, such as those in the aircraft industry. We focus on using deep convolutional neural networks (CNNs) for automatic industrial inspection of mechanical assemblies, where training images are limited and hard to collect. A computer-aided design (CAD) model is the standard way to describe a mechanical assembly; for each assembly part, we have a three-dimensional CAD model with the real dimensions and geometric properties. Rendering CAD models to generate synthetic training data is therefore an attractive approach that comes with perfect annotations. Our ultimate goal is to obtain a deep CNN model trained on synthetic renders and deployed to recognize the presence of target objects in never-before-seen real images collected by commercial RGB cameras. Different approaches are adopted to close the domain gap between synthetic and real images. First, the domain randomization technique is applied to generate synthetic training data. Second, domain-invariant features are used during training, which allows the trained model to be applied directly in the target domain. Finally, we propose a way to learn more representative features using augmented autoencoders, achieving performance close to that of our baseline models trained on real images. (c) 2022 SPIE and IS&T
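The domain randomization step described in the abstract amounts to rendering each synthetic training image from the CAD model under randomly sampled scene parameters, so that the real domain appears to the network as just one more variation. A minimal sketch of such a parameter sampler is shown below; the parameter names and ranges are illustrative assumptions, not the authors' actual rendering settings.

```python
import random

def sample_render_params(rng=random):
    """Sample one randomized scene configuration for a synthetic CAD render.

    Ranges and parameter names are hypothetical; in practice they would be
    tuned to cover the variability expected in the real inspection images.
    """
    return {
        "light_intensity": rng.uniform(0.2, 2.0),      # arbitrary units
        "light_azimuth_deg": rng.uniform(0.0, 360.0),
        "camera_distance_m": rng.uniform(0.5, 2.0),
        "camera_jitter_deg": rng.uniform(-15.0, 15.0),
        "background": rng.choice(["plain", "noise", "random_texture"]),
        "texture": rng.choice(["cad_material", "random_color", "checker"]),
    }

def make_randomized_dataset(n_images, seed=0):
    """Generate n_images scene configurations, one per synthetic render."""
    rng = random.Random(seed)  # fixed seed for a reproducible dataset
    return [sample_render_params(rng) for _ in range(n_images)]

if __name__ == "__main__":
    for p in make_randomized_dataset(3):
        print(p["background"], round(p["light_intensity"], 2))
```

Each sampled dictionary would then drive one rendering call (e.g., in a CAD-aware renderer), and the resulting images come with perfect labels from the CAD model, as the abstract notes.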
Pages: 26
Related Papers
50 records in total
  • [31] Visual Product Inspection Based on Deep Learning Methods
    Kuric, Ivan
    Kandera, Matej
    Klarak, Jaromir
    Ivanov, Vitalii
    Wiecek, Dariusz
    [J]. ADVANCED MANUFACTURING PROCESSES (INTERPARTNER-2019), 2020, : 148 - 156
  • [32] Deep Learning based Visual Quality Inspection for Industrial Assembly Line Production using Normalizing Flows
    Maack, Robert F.
    Tercan, Hasan
    Meisen, Tobias
    [J]. 2022 IEEE 20TH INTERNATIONAL CONFERENCE ON INDUSTRIAL INFORMATICS (INDIN), 2022, : 329 - 334
  • [33] Personalized Meta-Learning for Domain Agnostic Learning from Demonstration
    Schrum, Mariah L.
    Hedlund-Botti, Erin
    Gombolay, Matthew C.
    [J]. PROCEEDINGS OF THE 2022 17TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI '22), 2022, : 1179 - 1181
  • [34] Shuffle-Then-Assemble: Learning Object-Agnostic Visual Relationship Features
    Yang, Xu
    Zhang, Hanwang
    Cai, Jianfei
    [J]. COMPUTER VISION - ECCV 2018, PT XII, 2018, 11216 : 38 - 54
  • [35] Incremental learning of concept drift in Multiple Instance Learning for industrial visual inspection
    Mera, Carlos
    Orozco-Alzate, Mauricio
    Branch, John
    [J]. COMPUTERS IN INDUSTRY, 2019, 109 : 153 - 164
  • [36] Deep Clustering for Unsupervised Learning of Visual Features
    Caron, Mathilde
    Bojanowski, Piotr
    Joulin, Armand
    Douze, Matthijs
    [J]. COMPUTER VISION - ECCV 2018, PT XIV, 2018, 11218 : 139 - 156
  • [37] Study of Visual Inspection for Liquid Pouches Using Deep Learning
    Hasegawa, Makoto
    Kogure, Hidenori
    Dobashi, Hironori
    [J]. 35TH INTERNATIONAL TECHNICAL CONFERENCE ON CIRCUITS/SYSTEMS, COMPUTERS AND COMMUNICATIONS (ITC-CSCC 2020), 2020, : 426 - 430
  • [38] An Automatic HFO Detection Method Combining Visual Inspection Features with Multi-Domain Features
    Xiaochen Liu
    Lingli Hu
    Chenglin Xu
    Shuai Xu
    Shuang Wang
    Zhong Chen
    Jizhong Shen
    [J]. Neuroscience Bulletin, 2021, 37 : 777 - 788
  • [39] An Automatic HFO Detection Method Combining Visual Inspection Features with Multi-Domain Features
    Liu, Xiaochen
    Hu, Lingli
    Xu, Chenglin
    Xu, Shuai
    Wang, Shuang
    Chen, Zhong
    Shen, Jizhong
    [J]. NEUROSCIENCE BULLETIN, 2021, 37 (06) : 777 - 788
  • [40] Deep Domain Adversarial Learning for Species- Agnostic Classification of Histologic Subtypes of Osteosarcoma
    Patkar, Sushant
    Beck, Jessica
    Harmon, Stephanie
    Mazcko, Christina
    Turkbey, Baris
    Choyke, Peter
    Brown, G. Thomas
    LeBlanc, Amy
    [J]. AMERICAN JOURNAL OF PATHOLOGY, 2023, 193 (01): : 60 - 72