Segmentation and Recognition Using Structure from Motion Point Clouds

Cited by: 493
Authors
Brostow, Gabriel J. [1 ]
Shotton, Jamie [2 ]
Fauqueur, Julien [3 ]
Cipolla, Roberto [3 ]
Affiliations
[1] UCL, London WC1E 6BT, England
[2] Microsoft Res, Cambridge, England
[3] Univ Cambridge, Cambridge, England
DOI: 10.1007/978-3-540-88682-2_5
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
We propose an algorithm for semantic segmentation based on 3D point clouds derived from ego-motion. We motivate five simple cues designed to model specific patterns of motion and 3D world structure that vary with object category. We introduce features that project the 3D cues back to the 2D image plane while modeling spatial layout and context. A randomized decision forest combines many such features to achieve a coherent 2D segmentation and recognize the object categories present. Our main contribution is to show how semantic segmentation is possible based solely on motion-derived 3D world structure. Our method works well on sparse, noisy point clouds, and unlike existing approaches, does not need appearance-based descriptors. Experiments were performed on a challenging new video database containing sequences filmed from a moving car in daylight and at dusk. The results confirm that indeed, accurate segmentation and recognition are possible using only motion and 3D world structure. Further, we show that the motion-derived information complements an existing state-of-the-art appearance-based method, improving both qualitative and quantitative performance.
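The abstract describes projecting motion-derived 3D cues back onto the 2D image plane to form per-pixel feature maps. As an illustration only (not the authors' implementation), the following toy sketch computes one such structure cue, a point's height in the camera frame, and splats it into a sparse 2D map via a pinhole projection; the camera intrinsics (`focal`, `cx`, `cy`) and image size are assumed values:

```python
import numpy as np

def project_points(points_3d, focal=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of Nx3 camera-frame points to pixel coordinates.

    Intrinsics here are illustrative assumptions, not values from the paper.
    """
    X, Y, Z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = focal * X / Z + cx
    v = focal * Y / Z + cy
    return np.stack([u, v], axis=1)

def height_cue_map(points_3d, shape=(480, 640)):
    """Splat each point's camera-frame height (Y) into a sparse 2D cue map.

    Pixels with no projected point stay zero, mimicking the sparse,
    noisy point clouds the abstract mentions.
    """
    uv = np.round(project_points(points_3d)).astype(int)
    cue = np.zeros(shape)
    for (u, v), y in zip(uv, points_3d[:, 1]):
        if 0 <= v < shape[0] and 0 <= u < shape[1]:
            cue[v, u] = y
    return cue

# Two example points: one above the camera (negative Y), one near the ground.
pts = np.array([[0.0, -1.5, 10.0],
                [1.0,  1.4, 10.0]])
cue = height_cue_map(pts)
```

In the paper such per-pixel cue maps are not thresholded directly; many of them are combined by a randomized decision forest, which votes on the object category at each pixel.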
Pages: 44 / +
Page count: 3
Related papers (50 records)
  • [21] Height estimation of sugarcane using an unmanned aerial system (UAS) based on structure from motion (SfM) point clouds
    Wachholz De Souza, Carlos Henrique; Camargo Lamparelli, Rubens Augusto; Rocha, Jansle Vieira; Graziano Magalhaes, Paulo Sergio
    INTERNATIONAL JOURNAL OF REMOTE SENSING, 2017, 38 (8-10): 2218-2230
  • [22] Zero-Shot Motion Pattern Recognition from 4D Point-Clouds
    Salami, Dariush; Sigg, Stephan
    2021 IEEE 31ST INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2021
  • [23] Recognition and reconstruction of developable surfaces from point clouds
    Peternell, M.
    GEOMETRIC MODELING AND PROCESSING 2004, PROCEEDINGS, 2004: 301-310
  • [24] Practical Usefulness of Structure from Motion (SfM) Point Clouds Obtained from Different Consumer Cameras
    Ingwer, Patrick; Gassen, Fabian; Puest, Stefan; Duhn, Melanie; Schaelicke, Marten; Mueller, Katja; Ruhm, Heiko; Rettig, Josephin; Hasche, Eberhard; Fischer, Arno; Creutzburg, Reiner
    Mobile Devices and Multimedia: Enabling Technologies, Algorithms, and Applications 2015, 2015, 9411
  • [25] Assessing geoaccuracy of structure from motion point clouds from long-range image collections
    Nilosek, David; Walvoord, Derek J.; Salvaggio, Carl
    OPTICAL ENGINEERING, 2014, 53 (11)
  • [26] Automatic planar shape segmentation from indoor point clouds
    Shui, Wuyang; Liu, Jin; Ren, Pu; Maddock, Steve; Zhou, Mingquan
    PROCEEDINGS VRCAI 2016: 15TH ACM SIGGRAPH CONFERENCE ON VIRTUAL-REALITY CONTINUUM AND ITS APPLICATIONS IN INDUSTRY, 2016: 363-372
  • [27] Automatic segmentation and classification of BIM elements from point clouds
    Romero-Jaren, R.; Arranz, J. J.
    AUTOMATION IN CONSTRUCTION, 2021, 124
  • [28] Research on segmentation of pear shape from unorganized point clouds
    Yang, Hui-Jun; He, Dong-Jian; Li, Linhao; Jiang, Shao-Hua
    He, D.-J. (hdj168@nwsuaf.edu.cn), 1600, Academy Publisher (08): 394-401
  • [29] Semantic segmentation of multimodal point clouds from the railway context
    Dibari, P.; Nitti, M.; Maglietta, R.; Castellano, G.; Dimauro, G.; Reno, V.
    MULTIMODAL SENSING AND ARTIFICIAL INTELLIGENCE: TECHNOLOGIES AND APPLICATIONS II, 2021, 11785
  • [30] Multi-body ICP: Motion Segmentation of Rigid Objects on Dense Point Clouds
    Kim, Youngji; Lim, Hwasup; Ahn, Sang Chul
    2015 12TH INTERNATIONAL CONFERENCE ON UBIQUITOUS ROBOTS AND AMBIENT INTELLIGENCE (URAI), 2015: 532-536