Learning deep domain-agnostic features from synthetic renders for industrial visual inspection

Cited by: 3
Authors
Abubakr, Abdelrahman G. [1 ]
Jovancevic, Igor [1 ,2 ]
Mokhtari, Nour Islam [1 ,3 ]
Ben Abdallah, Hamdi [3 ]
Orteu, Jean-Jose [3 ]
Affiliations
[1] Diota, Labege, France
[2] Univ Montenegro, Fac Nat Sci & Math, Podgorica, Montenegro
[3] Univ Toulouse, CNRS, Inst Clement Ader, IMT Mines Albi,INSA,UPS,ISAE, Albi, France
Keywords
deep learning; domain adaptation; domain randomization; augmented autoencoders; synthetic rendering; industrial visual inspection; RECOGNITION; CLASSIFICATION; IMAGES;
DOI
10.1117/1.JEI.31.5.051604
CLC classification codes
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject classification codes
0808 ; 0809 ;
Abstract
Deep learning has driven huge advances in computer vision. However, deep models require enormous amounts of manually annotated data, and annotation is a laborious and time-consuming task. Moreover, collecting large numbers of images demands physical access to the target objects, a luxury we usually do not have in the context of automatic inspection of complex mechanical assemblies, such as those in the aircraft industry. We focus on using deep convolutional neural networks (CNNs) for automatic industrial inspection of mechanical assemblies, where training images are limited and hard to collect. Computer-aided design (CAD) models are the standard way to describe mechanical assemblies: for each assembly part we have a three-dimensional CAD model with the real dimensions and geometric properties. Rendering these CAD models to generate synthetic training data is therefore an attractive approach, and it comes with perfect annotations. Our ultimate goal is a deep CNN model trained on synthetic renders and deployed to recognize the presence of target objects in never-before-seen real images captured by commercial RGB cameras. We adopt several approaches to close the domain gap between synthetic and real images. First, domain randomization is applied when generating the synthetic training data. Second, domain-invariant features are used during training, allowing the trained model to be applied directly in the target domain. Finally, we propose a way to learn more representative features using augmented autoencoders, achieving performance close to that of our baseline models trained on real images. (c) 2022 SPIE and IS&T
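The abstract's first technique, domain randomization, renders the same object under randomly varied nuisance factors (lighting, color, noise, background) so that a model trained on synthetic images learns features that transfer to real ones. A minimal illustrative sketch in Python/NumPy follows; the function name, parameter ranges, and the flat-color background heuristic are hypothetical choices for illustration, not taken from the paper:

```python
import numpy as np

def randomize_render(image, rng):
    """Apply domain-randomization-style perturbations to one synthetic render.

    image: float32 array in [0, 1], shape (H, W, 3).
    The object's geometry is unchanged; only nuisance factors vary,
    so the perfect CAD-derived annotation stays valid.
    """
    img = image.copy()
    # Random global brightness and a per-channel color cast
    img *= rng.uniform(0.6, 1.4)
    img *= rng.uniform(0.8, 1.2, size=3)
    # Additive Gaussian sensor noise
    img += rng.normal(0.0, 0.02, size=img.shape)
    # Random background: replace near-black pixels with a random flat color
    bg_mask = img.sum(axis=-1, keepdims=True) < 0.05
    img = np.where(bg_mask, rng.uniform(0.0, 1.0, size=3), img)
    return np.clip(img, 0.0, 1.0).astype(np.float32)

rng = np.random.default_rng(0)
render = np.zeros((64, 64, 3), dtype=np.float32)
render[16:48, 16:48] = 0.8  # a bright "part" on a dark background
augmented = randomize_render(render, rng)
```

In practice each training image would be drawn with fresh random factors every epoch, forcing the network to rely on shape cues rather than appearance.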
Pages: 26
Related papers
50 records
  • [1] On learning deep domain-invariant features from 2D synthetic images for industrial visual inspection
    Abubakr, Abdelrahman G.
    Jovancevic, Igor
    Mokhtari, Nour Islam
    Ben Abdallah, Hamdi
    Orteu, Jean-Jose
    [J]. FIFTEENTH INTERNATIONAL CONFERENCE ON QUALITY CONTROL BY ARTIFICIAL VISION, 2021, 11794
  • [2] DaCo: domain-agnostic contrastive learning for visual place recognition
    Ren, Hao
    Zheng, Ziqiang
    Wu, Yang
    Lu, Hong
    [J]. APPLIED INTELLIGENCE, 2023, 53 (19) : 21827 - 21840
  • [4] Deep Learning Model Portability for Domain-Agnostic Device Fingerprinting
    Gaskin, Jared
    Elmaghbub, Abdurrahman
    Hamdaoui, Bechir
    Wong, Weng-Keen
    [J]. IEEE ACCESS, 2023, 11 : 86801 - 86823
  • [5] Towards Domain-Agnostic Contrastive Learning
    Verma, Vikas
    Luong, Minh-Thang
    Kawaguchi, Kenji
    Pham, Hieu
    Le, Quoc V.
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139 : 7544 - 7554
  • [6] Domain-Agnostic Contrastive Representations for Learning from Label Proportions
    Nandy, Jay
    Saket, Rishi
    Jain, Prateek
    Chauhan, Jatin
    Ravindran, Balaraman
    Raghuveer, Aravindan
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 1542 - 1551
  • [7] A Needle in a Haystack: Distinguishable Deep Neural Network Features for Domain-Agnostic Device Fingerprinting
    Elmaghbub, Abdurrahman
    Hamdaoui, Bechir
    [J]. 2023 IEEE CONFERENCE ON COMMUNICATIONS AND NETWORK SECURITY, CNS, 2023,
  • [8] A domain-agnostic approach for characterization of lifelong learning systems
    Baker, Megan M.
    New, Alexander
    Aguilar-Simon, Mario
    Al-Halah, Ziad
    Arnold, Sebastien M. R.
    Ben-Iwhiwhu, Ese
    Brna, Andrew P.
    Brooks, Ethan
    Brown, Ryan C.
    Daniels, Zachary
    Daram, Anurag
    Delattre, Fabien
    Dellana, Ryan
    Eaton, Eric
    Fu, Haotian
    Grauman, Kristen
    Hostetler, Jesse
    Iqbal, Shariq
    Kent, Cassandra
    Ketz, Nicholas
    Kolouri, Soheil
    Konidaris, George
    Kudithipudi, Dhireesha
    Learned-Miller, Erik
    Lee, Seungwon
    Littman, Michael L.
    Madireddy, Sandeep
    Mendez, Jorge A.
    Nguyen, Eric Q.
    Piatko, Christine
    Pilly, Praveen K.
    Raghavan, Aswin
    Rahman, Abrar
    Ramakrishnan, Santhosh Kumar
    Ratzlaff, Neale
    Soltoggio, Andrea
    Stone, Peter
    Sur, Indranil
    Tang, Zhipeng
    Tiwari, Saket
    Vedder, Kyle
    Wang, Felix
    Xu, Zifan
    Yanguas-Gil, Angel
    Yedidsion, Harel
    Yu, Shangqun
    Vallabha, Gautam K.
    [J]. NEURAL NETWORKS, 2023, 160 : 274 - 296
  • [9] Interpretable domain-informed and domain-agnostic features for supervised and unsupervised learning on building energy demand data
    Canaydin, Ada
    Fu, Chun
    Balint, Attila
    Khalil, Mohamad
    Miller, Clayton
    Kazmi, Hussain
    [J]. APPLIED ENERGY, 2024, 360
  • [10] DOMAIN-AGNOSTIC VIDEO PREDICTION FROM MOTION SELECTIVE KERNELS
    Prinet, Veronique
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 4205 - 4209