Estimating heading from optic flow: Comparing deep learning network and human performance

Cited by: 2
Authors
Maus N. [1 ]
Layton O.W. [2 ]
Affiliations
[1] Department of Computer Science, University of Pennsylvania, Philadelphia, PA 19104
[2] Department of Computer Science, Colby College, Waterville, ME 04901
Keywords
Deep learning; Heading; Optic flow; Self-motion; Vision
DOI
10.1016/j.neunet.2022.07.007
Abstract
Convolutional neural networks (CNNs) have made significant advances over the past decade in visual recognition, matching or exceeding human performance on certain tasks. Visual recognition is subserved by the ventral stream of the visual system, which, remarkably, CNNs also effectively model. Inspired by this connection, we investigated the extent to which CNNs account for human heading perception, an important function of the complementary dorsal stream. Heading refers to the direction of movement during self-motion, which humans judge with a high degree of accuracy from the streaming pattern of motion on the eye known as optic flow. We examined the accuracy with which CNNs estimate heading from optic flow in a range of situations in which human heading perception has been well studied. These scenarios include heading estimation from sparse optic flow, in the presence of moving objects, and in the presence of rotation. We assessed performance under controlled conditions wherein self-motion was simulated through minimal or realistic scenes. We found that the CNN did not capture the accuracy of heading perception. The addition of recurrent processing to the network, however, closed the gap in performance with humans substantially in many situations. Our work highlights important self-motion scenarios in which recurrent processing supports heading estimation that approaches human-like accuracy. © 2022 The Author(s)
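The abstract contrasts a feedforward CNN with a recurrent variant for regressing heading from optic flow. The sketch below is not the authors' code; it only illustrates the general setup under assumed choices (PyTorch, a 2-channel flow-field input, a small convolutional backbone, and a GRU as the recurrent stage). All layer sizes and class names are hypothetical.

# Minimal sketch (assumptions noted above): heading regression from optic flow,
# with and without a recurrent stage over a short flow sequence.
import torch
import torch.nn as nn


class HeadingCNN(nn.Module):
    """Feedforward CNN: optic flow (2, H, W) -> heading (azimuth, elevation)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 2)  # azimuth, elevation

    def forward(self, flow):            # flow: (B, 2, H, W)
        x = self.features(flow).flatten(1)
        return self.head(x)


class HeadingCNNRecurrent(nn.Module):
    """Same convolutional backbone, with a GRU over a sequence of flow fields."""
    def __init__(self):
        super().__init__()
        self.backbone = HeadingCNN().features
        self.gru = nn.GRU(input_size=128, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, 2)

    def forward(self, flows):            # flows: (B, T, 2, H, W)
        b, t = flows.shape[:2]
        feats = self.backbone(flows.flatten(0, 1)).flatten(1).view(b, t, -1)
        out, _ = self.gru(feats)
        return self.head(out[:, -1])     # heading estimate from the last step


if __name__ == "__main__":
    flow_seq = torch.randn(4, 8, 2, 64, 64)       # batch of 8-frame flow clips
    print(HeadingCNN()(flow_seq[:, 0]).shape)     # torch.Size([4, 2])
    print(HeadingCNNRecurrent()(flow_seq).shape)  # torch.Size([4, 2])

In this kind of setup, the recurrent variant differs only in that it integrates backbone features over time before the regression head, which is the comparison the abstract describes.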
Pages: 383-396
Number of pages: 13
Related papers
50 records in total
  • [31] Attractive serial dependence in heading perception from optic flow occurs at the perceptual and postperceptual stages
    Xu, Ling-Hao
    Sun, Qi
    Zhang, Baoyuan
    Li, Xinyu
    JOURNAL OF VISION, 2022, 22 (12)
  • [32] Comparing the performance of machine learning and deep learning algorithms classifying messages in Facebook learning group
    Huang-Fu, Cheng-Yo
    Liao, Chen-Hsuan
    Wu, Jiun-Yu
    IEEE 21ST INTERNATIONAL CONFERENCE ON ADVANCED LEARNING TECHNOLOGIES (ICALT 2021), 2021: 347 - 349
  • [33] COMPARING HUMAN AND NEURAL-NETWORK LEARNING OF CLIMATE CATEGORIES
    LLOYD, R
    CARBONE, G
    PROFESSIONAL GEOGRAPHER, 1995, 47 (03): 237 - 250
  • [34] Estimating heading and collisions with the environment from curvilinear self-motion in optical flow patterns
    Bayerl, P.
    Neumann, H.
    PERCEPTION, 2006, 35 : 145 - 145
  • [35] Comparing Learning From Observing and From Human Tutoring
    Muldner, Kasia
    Lam, Rachel
    Chi, Michelene T. H.
    JOURNAL OF EDUCATIONAL PSYCHOLOGY, 2014, 106 (01) : 69 - 85
  • [36] Comparing the Performance of Deep Learning Methods to Predict Companies' Financial Failure
    Aljawazneh, H.
    Mora, A. M.
    Garcia-Sanchez, P.
    Castillo-Valdivieso, P. A.
    IEEE ACCESS, 2021, 9 : 97010 - 97038
  • [37] The Performance of Proposed Deep Residual Learning Network of Images
    Luo, Xingcheng
    Shen, Ruihan
    Hu, Jian
    Zhou, Qunfang
    Hu, Linji
    Guan, Qing
    Deng, Jianhua
    2017 IEEE 2ND INTERNATIONAL CONFERENCE ON SIGNAL AND IMAGE PROCESSING (ICSIP), 2017: 82 - 85
  • [38] Comparing Human and Algorithm Performance on Estimating Word-Based Semantic Similarity
    Batram, Nils
    Krause, Markus
    Dehaye, Paul-Olivier
    SOCIAL INFORMATICS, 2015, 8852 : 452 - 460
  • [39] Comparing Human Pose Estimation through deep learning approaches: An overview
    Dibenedetto, Gaetano
    Sotiropoulos, Stefanos
    Polignano, Marco
    Cavallo, Giuseppe
    Lops, Pasquale
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2025, 252
  • [40] Human Interaction Recognition through Deep Learning Network
    Berlin, S. Jeba
    John, Mala
    2016 IEEE INTERNATIONAL CARNAHAN CONFERENCE ON SECURITY TECHNOLOGY (ICCST), 2016: 143 - 146