Flexible 3D Object Appearance Observation Based on Pose Regression and Active Motion

Cited: 3
Authors
Wang, Shaohu [1 ,2 ]
Qin, Fangbo [1 ,2 ]
Shen, Fei [1 ,2 ]
Zhang, Zhengtao [1 ,2 ,3 ]
Affiliations
[1] Chinese Acad Sci, Res Ctr Precis Sensing & Control, Inst Automat, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
[3] Binzhou Inst Technol, Binzhou City 256601, Shandong, Peoples R China
Funding
National Natural Science Foundation of China
DOI
10.1109/CASE49997.2022.9926599
Chinese Library Classification (CLC)
TP [automation technology; computer technology]
Subject classification code
0812
Abstract
3D object appearance inspection plays an important role in the manufacturing industry. To observe clear images of the different parts of a 3D object in a semi-structured scene, the camera pose must be properly adjusted to several different viewpoints. In this paper, we propose a flexible appearance observation framework for 3D-shaped objects with 3-DoF pose (2D position and 1D angle) uncertainty. First, we propose the 3-DoF Pose Regression Network (PR3Net), a convolutional neural network (CNN) that estimates the 3-DoF pose of a target 3D object placed on a platform. Considering the data scarcity of practical applications and the variety of object types, we use data synthesis to automatically generate training samples from a single annotated image, so that pose learning can be conducted conveniently. In addition, a semi-supervised fine-tuning method improves generalization by leveraging plentiful unlabeled images. Second, a teachable active motion strategy enables the inspection robot to observe a 3D object from multiple viewpoints. A human user teaches the standard viewpoints once beforehand; the robot then actively moves its camera multiple times according to both the predefined viewpoints and the regressed 3-DoF pose, collecting images of multiple parts of the object. The effectiveness of the proposed methods is validated by a series of experiments.
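The data-synthesis step in the abstract can be sketched as follows. This is a minimal illustration under assumed details: the record does not specify the perturbation ranges or the sampling scheme, and `synthesize_pose_labels` with its parameters is a hypothetical name. Each random planar rigid motion would be applied to the single annotated image (e.g. with an affine warp) to produce a new training image, while the same motion updates the ground-truth 3-DoF label.

```python
import math
import random

def synthesize_pose_labels(base_pose, n, xy_range=20.0, angle_range=math.pi):
    """Generate n synthetic training labels from one annotated pose.

    base_pose: (x, y, theta) of the object in the single annotated image.
    Returns a list of ((dx, dy, dtheta), label) pairs: the random rigid
    motion to warp the annotated image with, and the resulting pose label.
    Ranges are illustrative assumptions, not values from the paper.
    """
    samples = []
    for _ in range(n):
        dx = random.uniform(-xy_range, xy_range)
        dy = random.uniform(-xy_range, xy_range)
        dth = random.uniform(-angle_range, angle_range)
        x, y, th = base_pose
        # Ground-truth label after the synthetic motion; angle wrapped to [-pi, pi).
        label = (x + dx, y + dy, (th + dth + math.pi) % (2 * math.pi) - math.pi)
        samples.append(((dx, dy, dth), label))
    return samples
```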
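The active motion strategy combines each taught standard viewpoint with the regressed 3-DoF object pose. Assuming the taught viewpoints are stored as planar rigid transforms in the object frame (the record does not specify the representation), the adjustment reduces to an SE(2) composition, sketched below; all function names are hypothetical.

```python
import math

def se2(x, y, theta):
    """Homogeneous 3x3 matrix for a planar rigid transform (rotation + translation)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def compose(a, b):
    """3x3 matrix product a @ b."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def adjusted_viewpoint(object_pose, viewpoint_in_object_frame):
    """Map a taught viewpoint through the regressed object pose.

    object_pose: regressed (x, y, theta) of the object in the world frame.
    viewpoint_in_object_frame: (x, y, theta) taught relative to the object.
    Returns the camera target pose (x, y, theta) in the world frame.
    """
    T = compose(se2(*object_pose), se2(*viewpoint_in_object_frame))
    return T[0][2], T[1][2], math.atan2(T[1][0], T[0][0])
```

For example, with the object regressed at (1, 0) rotated 90 degrees and a viewpoint taught one unit in front of the object, the camera target lands at (1, 1) facing 90 degrees.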
Pages: 895-900
Page count: 6