Spatial target vision pose measurement based on 3D model

Cited by: 0
Authors
Yu K. [1]
Cong M. [1]
Dai W. [1]
Affiliations
[1] Research Center for Space Optical Engineering, Harbin Institute of Technology, Harbin
Keywords
3D model; Contour feature; Pose measurement; Spatial target; Vision navigation
DOI
10.19650/j.cnki.cjsi.J1904725
Abstract
The applicability of traditional vision-based measurement for spatial cooperative targets is limited by the installation status of the cooperative markers. Taking the edge contour as the cooperative feature, the 3D structural model is used to construct the cooperative relationship, and a vision-based pose measurement method for spatial cooperative targets with a wider application range is proposed. The method generates 2D target feature templates at different observation orientations and matches them against the measurement image by searching; contour orientation features are used to evaluate the matching degree, and the target pose parameters are solved from the optimal matching result. An offline preprocessing strategy and a two-stage image-pyramid search optimization are used to accelerate the processing. Digital simulation and semi-physical simulation tests were conducted to verify the accuracy and stability of the method: the absolute position error perpendicular to the optical axis is better than 2 mm, the relative position error along the optical axis is better than 0.7%, the absolute attitude error is better than 0.2°, and a single pose measurement takes less than 0.5 s. The proposed method meets the requirements of spatial target navigation. © 2019, Science Press. All rights reserved.
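The abstract describes the matching pipeline only at a high level. The Python sketch below illustrates one plausible reading of it, assuming OpenCV and NumPy, edge-image templates pre-rendered offline from the 3D model with known viewpoints, a |cos|-based contour-orientation similarity, and a two-level image pyramid. The function names (orientation_map, match_template, estimate_pose), the edge threshold, and the search-window size are illustrative assumptions, not the authors' implementation; the full method also refines the 6-DOF pose from the matched contour, which is omitted here.

import cv2
import numpy as np

def orientation_map(img):
    """Unit gradient direction vectors and gradient magnitude of a grayscale image."""
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-9
    return gx / mag, gy / mag, mag

def orientation_score(scene_dirs, tmpl_dirs, tmpl_mask):
    """Mean |cos| between scene and template gradient directions over template edge pixels."""
    sx, sy = scene_dirs
    tx, ty = tmpl_dirs
    sim = np.abs(sx * tx + sy * ty)
    return float(sim[tmpl_mask].mean()) if tmpl_mask.any() else 0.0

def match_template(scene, template, stride=4):
    """Sliding-window search of one template; returns (best_score, best_top_left)."""
    sx, sy, _ = orientation_map(scene)
    tx, ty, tmag = orientation_map(template)
    mask = tmag > 40.0                      # keep only strong template edges (assumed threshold)
    h, w = template.shape
    best_score, best_loc = 0.0, (0, 0)
    for r in range(0, scene.shape[0] - h + 1, stride):
        for c in range(0, scene.shape[1] - w + 1, stride):
            s = orientation_score((sx[r:r+h, c:c+w], sy[r:r+h, c:c+w]), (tx, ty), mask)
            if s > best_score:
                best_score, best_loc = s, (r, c)
    return best_score, best_loc

def estimate_pose(scene, templates):
    """templates: non-empty list of (edge_image, pose) pairs rendered offline from the 3D model.
    Two-stage pyramid search: coarse match at half resolution over all templates, then a
    dense full-resolution match in a small window around the up-scaled coarse location."""
    coarse_scene = cv2.pyrDown(scene)
    best_score, best_tmpl, best_pose, best_loc = 0.0, None, None, (0, 0)
    for tmpl, pose in templates:
        score, loc = match_template(coarse_scene, cv2.pyrDown(tmpl), stride=4)
        if score > best_score:
            best_score, best_tmpl, best_pose, best_loc = score, tmpl, pose, loc
    r0, c0 = best_loc[0] * 2, best_loc[1] * 2
    h, w = best_tmpl.shape
    roi = scene[max(r0 - 8, 0): r0 + h + 8, max(c0 - 8, 0): c0 + w + 8]
    fine_score, _ = match_template(roi, best_tmpl, stride=1)
    return best_pose, fine_score

# Hypothetical usage (file names and pose convention are assumptions):
# templates = [(cv2.imread(f"tmpl_{i}.png", cv2.IMREAD_GRAYSCALE), poses[i]) for i in range(N)]
# pose, score = estimate_pose(cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE), templates)

The coarse stage limits the expensive dense search to a single template and a small neighborhood, which is consistent with the offline-preprocessing and pyramid acceleration described in the abstract.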
Pages: 179-188
Page count: 9