SFGAN: Unsupervised Generative Adversarial Learning of 3D Scene Flow from the 3D Scene Self

Cited by: 12
Authors
Wang, Guangming [1 ]
Jiang, Chaokang [2 ]
Shen, Zehang [1 ]
Miao, Yanzi [2 ]
Wang, Hesheng [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai Engn Res Ctr Intelligent Control & Manag, Key Lab Marine Intelligent Equipment & Syst, Minis, Key Lab Syst Control & Informat Proc, Dept Automat, Shanghai 200240, Peoples R China
[2] China Univ Min & Technol, Engn Res Ctr Intelligent Control Underground Spac, Minist Educ, Sch Informat & Control Engn, Adv Robot Res Ctr, Xuzhou 221116, Jiangsu, Peoples R China
Keywords
3D point clouds; generative adversarial network; scene flow estimation; soft correspondence; unsupervised learning;
DOI
10.1002/aisy.202100197
Chinese Library Classification
TP [Automation technology, computer technology];
Subject Classification Code
0812;
Abstract
Scene flow tracks the 3D motion of each point between adjacent point clouds, providing fundamental 3D motion perception for autonomous driving and service robots. Although RGB-D (red-green-blue-depth) cameras and LiDAR (light detection and ranging) sensors capture discrete 3D points in space, objects and their motions are usually continuous in the macroscopic world; that is, objects remain self-consistent as they move from the current frame to the next. Based on this insight, a generative adversarial network (GAN) is used to learn 3D scene flow in a self-supervised manner, without ground truth. A fake point cloud is synthesized from the predicted scene flow and the point cloud of the first frame. Adversarial training of the generator and discriminator is realized by synthesizing a fake point cloud that is hard to distinguish from the real one and by discriminating between the real point cloud and the synthesized fake point cloud. Experiments on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset show that the method achieves promising results. Like a human, the proposed method can identify similar local structures in two adjacent frames even without knowing the ground-truth scene flow; the local correspondences can then be estimated correctly, and in turn the scene flow. An interactive preprint version of the article can be found here: .
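The abstract describes the training scheme only at a high level: a generator predicts per-point flow, a fake second frame is synthesized by warping the first frame with that flow, and a discriminator tries to tell the warped frame from the real second frame. Below is a minimal sketch of such an adversarial setup, assuming PyTorch and using simple point-wise MLPs as stand-ins for the generator and discriminator; the module names (FlowGenerator, PointDiscriminator), architectures, and hyperparameters are hypothetical illustrations, not the authors' actual implementation.

```python
# Minimal sketch of an adversarial scene-flow setup, assuming PyTorch.
# FlowGenerator / PointDiscriminator are hypothetical stand-ins, not SFGAN's real modules.
import torch
import torch.nn as nn

class FlowGenerator(nn.Module):
    """Predicts a 3D flow vector for every point of frame 1, conditioned on frame 2."""
    def __init__(self, hidden=128):
        super().__init__()
        # Per-point MLP over [point of frame 1, crude global feature of frame 2].
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, pc1, pc2):
        # pc1, pc2: (B, N, 3). The mean of frame 2 serves as a crude context feature.
        ctx = pc2.mean(dim=1, keepdim=True).expand_as(pc1)
        return self.mlp(torch.cat([pc1, ctx], dim=-1))  # (B, N, 3) predicted scene flow

class PointDiscriminator(nn.Module):
    """Scores how 'real' a point cloud looks (real frame 2 vs. synthesized frame 2)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.head = nn.Linear(hidden, 1)

    def forward(self, pc):
        feat = self.point_mlp(pc).max(dim=1).values  # global max-pooled feature, (B, hidden)
        return self.head(feat)                        # (B, 1) real/fake logit

def train_step(gen, disc, opt_g, opt_d, pc1, pc2, bce=nn.BCEWithLogitsLoss()):
    # 1) Synthesize the fake second frame from frame 1 and the predicted flow.
    fake_pc2 = pc1 + gen(pc1, pc2)

    # 2) Discriminator update: real frame 2 -> label 1, synthesized frame 2 -> label 0.
    opt_d.zero_grad()
    d_loss = bce(disc(pc2), torch.ones(pc2.size(0), 1)) + \
             bce(disc(fake_pc2.detach()), torch.zeros(pc1.size(0), 1))
    d_loss.backward()
    opt_d.step()

    # 3) Generator update: make the synthesized frame indistinguishable from the real one.
    opt_g.zero_grad()
    g_loss = bce(disc(fake_pc2), torch.ones(pc1.size(0), 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    gen, disc = FlowGenerator(), PointDiscriminator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
    pc1, pc2 = torch.randn(2, 1024, 3), torch.randn(2, 1024, 3)  # toy adjacent frames
    print(train_step(gen, disc, opt_g, opt_d, pc1, pc2))
```

Note that in this sketch the discriminator never sees ground-truth flow: its only supervision signal is whether a point cloud is a real second frame or one warped from the first frame by the predicted flow, which is exactly the unsupervised signal the abstract relies on.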
Pages: 10
Related Papers
50 records in total
  • [31] Indoor Scene Recognition in 3D
    Huang, Shengyu
    Usvyatsov, Mikhail
    Schindler, Konrad
    2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020, : 8041 - 8048
  • [32] 3D crime scene reconstruction
    Buck, Ursula
    FORENSIC SCIENCE INTERNATIONAL, 2019, 304
  • [33] 3D scene manipulation with constraints
    Smith, G
    Salzman, T
    Stuerzlinger, W
    VIRTUAL AND AUGMENTED ARCHITECTURE (VAA'01), 2001, : 35 - 46
  • [34] Learning 3D Scene Priors with 2D Supervision
    Nie, Yinyu
    Dai, Angela
    Han, Xiaoguang
    Niessner, Matthias
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 792 - 802
  • [35] Learning to Recover 3D Scene Shape from a Single Image
    Yin, Wei
    Zhang, Jianming
    Wang, Oliver
    Niklaus, Simon
    Mai, Long
    Chen, Simon
    Shen, Chunhua
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 204 - 213
  • [36] Forensic 3D scene reconstruction
    Little, CQ
    Small, DE
    Peters, RR
    Rigdon, JB
    28TH AIPR WORKSHOP: 3D VISUALIZATION FOR DATA EXPLORATION AND DECISION MAKING, 2000, 3905 : 67 - 73
  • [37] Self-Supervised 3D Scene Flow Estimation Guided by Superpoints
    Shen, Yaqi
    Hui, Le
    Xie, Jin
    Yang, Jian
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 5271 - 5280
  • [38] 3D Scene Flow from 4D Light Field Gradients
    Ma, Sizhuo
    Smith, Brandon M.
    Gupta, Mohit
    COMPUTER VISION - ECCV 2018, PT VIII, 2018, 11212 : 681 - 698
  • [39] 3D Scene Reconstruction and Object Recognition for Indoor Scene
    Shen, Yangping
    Manabe, Yoshitsugu
    Yata, Noriko
    INTERNATIONAL WORKSHOP ON ADVANCED IMAGE TECHNOLOGY (IWAIT) 2019, 2019, 11049
  • [40] 3D Scene Management Method Combined with Scene Graphs
    Wang, Xiang
    Shen, Tao
    Hu, Liang
    Guo, Congnan
    Gao, Su
    SENSORS AND MATERIALS, 2022, 34 (01) : 277 - 287