A neural model of the temporal dynamics of figure-ground segregation in motion perception

Cited by: 30
Authors
Raudies, Florian [1 ]
Neumann, Heiko [1 ]
Affiliations
[1] Univ Ulm, Inst Neural Informat Proc, Fac Engn & Comp Sci, D-89069 Ulm, Germany
Keywords
Motion perception; Figure-ground segregation; Feedback; Re-entrant processing; Primary visual cortex; Area V1; Area MT; Attention; Decision-making; VISUAL-CORTEX; ATTENTION; MECHANISMS; ARCHITECTURE; INTEGRATION; MODULATION; VISION; BRAIN; VIEW; FORM;
DOI
10.1016/j.neunet.2009.10.005
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
How does the visual system segment a visual scene into surfaces and objects and attend to a target object? Based on psychological and physiological investigations, it has been proposed that the perceptual organization and segmentation of a scene are achieved by processing at different levels of the visual cortical hierarchy. According to this view, motion onset detection, motion-defined shape segregation, and target selection are accomplished by processes which bind simple features into fragments of increasingly complex configurations at different levels of the processing hierarchy. As an alternative to this hierarchical processing hypothesis, it has been proposed that the processing stages for feature detection and segregation are reflected in different temporal episodes in the response patterns of individual neurons. Such temporal epochs have been observed in the activation patterns of neurons as early as area V1. Here, we present a neural network model of motion detection, figure-ground segregation, and attentive selection which explains these response patterns in a unifying framework. Based on known principles of the functional architecture of the visual cortex, we propose that initial motion and motion boundaries are detected at different, hierarchically organized stages in the dorsal pathway. Visual shapes defined by boundaries generated from juxtaposed opponent motions are represented at different stages in the ventral pathway. Model areas in the different pathways interact through feedforward and modulating feedback connections, while mutual interactions enable communication between motion and form representations. Selective attention is devoted to shape representations by modulating feedback signals sent from higher levels (working memory) to intermediate levels to enhance their responses. Areas in the motion and form pathways are coupled through top-down feedback with V1 cells at the bottom of the hierarchy. We propose that the different temporal episodes in the response pattern of V1 cells, as recorded in recent experiments, reflect the strength of modulating feedback signals. This feedback results from consolidated shape representations derived from coherent motion patterns and from attentive modulation of responses along the cortical hierarchy. The model makes testable predictions concerning the duration and delay of the temporal episodes of V1 cell responses as well as the response variations caused by modulating feedback signals. (C) 2009 Elsevier Ltd. All rights reserved.
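The abstract's central mechanism, modulating feedback that enhances but does not create feedforward activity, can be illustrated with a minimal numerical sketch. The Python snippet below is an assumption-laden illustration and not the authors' implementation: the multiplicative modulation rule, the onset latency, the feedback delay, and the gain value are hypothetical placeholders chosen only to show how a late, feedback-driven episode can separate the response of a V1 cell on the figure from one on the ground.

    import numpy as np

    def modulating_feedback(ff, fb, gain=2.0):
        # Gain modulation: feedback can only enhance activity that is already
        # driven by the feedforward input; it cannot create activity on its own.
        return ff * (1.0 + gain * fb)

    # Hypothetical time axis and inputs (all numbers are illustrative).
    t = np.arange(0, 300)                              # time in ms
    ff = (t > 40).astype(float)                        # feedforward motion-onset response
    fb_figure = np.clip((t - 120) / 80.0, 0.0, 1.0)    # delayed feedback from a consolidated shape
    fb_ground = np.zeros_like(fb_figure)               # no shape-related feedback for the background

    v1_figure = modulating_feedback(ff, fb_figure)
    v1_ground = modulating_feedback(ff, fb_ground)

    # Early episode: identical responses; late episode: the figure cell is enhanced.
    print("t = 80 ms :", v1_figure[80], v1_ground[80])     # 1.0 1.0
    print("t = 250 ms:", v1_figure[250], v1_ground[250])   # 3.0 1.0

In such a scheme the early response episode is purely stimulus-driven and identical for figure and ground, while the later episode differentiates them once re-entrant shape and attention signals arrive, which is the qualitative pattern the model is meant to capture.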
Pages: 160-176
Page count: 17
Related Papers (50 in total)
  • [1] Temporal dynamics of figure-ground segregation in human vision
    Neri, Peter
    Levi, Dennis M.
    [J]. JOURNAL OF NEUROPHYSIOLOGY, 2007, 97 (01) : 951 - 957
  • [2] Purely temporal figure-ground segregation
    Kandil, FI
    Fahle, M
    [J]. EUROPEAN JOURNAL OF NEUROSCIENCE, 2001, 13 (10) : 2004 - 2008
  • [3] Neural dynamics of feedforward and feedback processing in figure-ground segregation
    Layton, Oliver W.
    Mingolla, Ennio
    Yazdanbakhsh, Arash
    [J]. FRONTIERS IN PSYCHOLOGY, 2014, 5
  • [4] Correlation between neural responses and human perception in figure-ground segregation
    Shishikura, Motofumi
    Tamura, Hiroshi
    Sakai, Ko
    [J]. FRONTIERS IN SYSTEMS NEUROSCIENCE, 2023, 16
  • [5] When figure-ground segregation fails: Exploring antagonistic interactions in figure-ground perception
    Brown, James M.
    Plummer, Richard W.
    [J]. ATTENTION PERCEPTION & PSYCHOPHYSICS, 2020, 82 (07) : 3618 - 3635
  • [6] A neuromorphic recurrent model for figure-ground segregation of coherent motion
    Beaudot, W. H. A.
    [J]. PERCEPTION, 1997, 26 : 17 - 18
  • [7] Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence
    Teki, Sundeep
    Barascud, Nicolas
    Picard, Samuel
    Payne, Christopher
    Griffiths, Timothy D.
    Chait, Maria
    [J]. CEREBRAL CORTEX, 2016, 26 (09) : 3669 - 3680
  • [8] Mechanisms of purely temporal figure-ground segregation
    Kandil, FI
    Fahle, M
    [J]. PERCEPTION, 2002, 31 : 72 - 73
  • [9] Figure-ground segregation modulates apparent motion
    Ramachandran, VS
    Anstis, S
    [J]. VISION RESEARCH, 1986, 26 (12) : 1969 - 1975
  • [10] A neural model of visual figure-ground segregation from kinetic occlusion
    Barnes, Timothy
    Mingolla, Ennio
    [J]. NEURAL NETWORKS, 2013, 37 : 141 - 162