Top-down Gamma Saliency - Learning to Search for Objects in Complex Scenes

Cited by: 0
Authors
Burt, Ryan [1 ]
Principe, Jose C. [1 ]
Affiliations
[1] Univ Florida, Dept Elect & Comp Engn, Computat NeuroEngn Lab, Gainesville, FL 32601 USA
Keywords
Top-down Saliency; Deep Learning; Image Processing; Attention
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Saliency measures are often used to predict fixation locations in images. However, pure bottom-up saliency is of limited use for visual search in a complex scene with many objects, since it is driven only by the input image. Alternatively, neural networks can localize objects within scenes, but they rely on brute-force classification of heuristic bounding boxes. We propose a top-down attention mechanism that combines traditional saliency measures with the learned ability of neural networks to distinguish between objects. To do this, we use the feature maps produced by the convolutional layers of a trained classification network as the inputs to our saliency measure instead of a traditional RGB or LAB image. On top of these feature maps, we learn a set of weights that biases the saliency toward specific objects. We test this top-down approach against the traditional bottom-up approach in a synthetic environment, where it proves more adept at quickly finding specific objects in crowded scenes.
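
For illustration only, below is a minimal NumPy/SciPy sketch of the idea described in the abstract: CNN feature maps replace the RGB/LAB channels as inputs to a center-surround (difference-of-gamma-kernels) saliency operator, and a learned per-channel weight vector biases the resulting map toward a target object. The function names, kernel orders, decay parameter, kernel size, and weighting scheme are assumptions made for this sketch, not the exact formulation or parameters used in the paper.

    import numpy as np
    from math import factorial
    from scipy.signal import convolve2d

    def gamma_kernel(k, mu, size=61):
        """Isotropic 2-D gamma kernel of order k and decay mu, normalized to unit sum."""
        ax = np.arange(size) - size // 2
        n1, n2 = np.meshgrid(ax, ax)
        r = np.sqrt(n1 ** 2 + n2 ** 2)
        g = (mu ** (k + 1) / (2 * np.pi * factorial(k))) * r ** (k - 1) * np.exp(-mu * r)
        return g / g.sum()

    def top_down_saliency(feature_maps, weights, k_center=1, k_surround=20, mu=1.0):
        """Weighted sum of center-surround gamma-kernel responses over CNN feature maps.

        feature_maps : array of shape (C, H, W), e.g. one convolutional layer of a
                       trained classification network (assumed input format)
        weights      : array of shape (C,), learned to bias the map toward one object
                       class (uniform weights give a bottom-up measure over the same features)
        """
        # Center-surround mask: narrow low-order kernel minus a broader high-order kernel.
        cs = gamma_kernel(k_center, mu) - gamma_kernel(k_surround, mu)
        saliency = np.zeros(feature_maps.shape[1:])
        for fmap, w in zip(feature_maps, weights):
            saliency += w * convolve2d(fmap, cs, mode="same", boundary="symm")
        saliency -= saliency.min()
        return saliency / (saliency.max() + 1e-8)   # rescale to [0, 1]

    # The predicted next fixation for the searched object is the peak of the map:
    # y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)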
Pages: 4