Attentive object detection using an information theoretic saliency measure

Cited: 0
Authors
Fritz, G
Seifert, C
Paletta, L
Bischof, H
Affiliations
[1] JOANNEUM Res Forschungsgesellsch MBH, Inst Digital Image Proc, A-8010 Graz, Austria
[2] Graz Univ Technol, Inst Comp Graph & Vis, A-8010 Graz, Austria
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
A major goal of selective attention is to focus processing on relevant information to enable rapid and robust task performance. For the example of attentive visual object recognition, we investigate the impact of top-down information on multi-stage processing, instead of integrating generic visual feature extraction into object-specific interpretation. We discriminate between generic and specific task-based filters that select task-relevant information of different scope and specificity within a processing chain. Attention is applied by early features tuned to respond selectively to generic task-related visual features, i.e., to information that is in general locally relevant for any kind of object search. The mapping from appearances to discriminative regions is then modeled using decision trees to accelerate processing. The focus of attention on discriminative patterns enables efficient recognition of specific objects by means of a sparse object representation that yields selective, task-relevant, and rapid object-specific responses. In the experiments, recognition performance from single appearance patterns increased dramatically when only discriminative patterns were considered, and evaluation of complete image analysis under various degrees of partial occlusion and image noise demonstrated highly robust recognition, even in the presence of severe occlusion and noise. In addition, we present a performance evaluation on our publicly available reference object database (TSG-20).
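The abstract's central idea of an information-theoretic saliency measure for selecting discriminative appearance patterns can be illustrated with a minimal sketch. Assuming (as one common formulation, not necessarily the authors' exact measure) that a local pattern is salient when the posterior distribution over object hypotheses given that pattern has low Shannon entropy, a hypothetical `saliency` score might look like:

```python
import math

def saliency(posterior, eps=1e-12):
    """Information-theoretic saliency of a local appearance pattern.

    Hypothetical illustration: a pattern is treated as discriminative
    when the posterior over object hypotheses given that pattern has
    low Shannon entropy. Returns 1 - H(p)/H_max, so values near 1.0
    mean highly discriminative and values near 0.0 mean uninformative
    (near-uniform posterior).
    """
    total = sum(posterior)
    p = [x / total for x in posterior]          # normalize to a distribution
    h = -sum(x * math.log(x + eps) for x in p)  # Shannon entropy H(p)
    h_max = math.log(len(p))                    # entropy of the uniform posterior
    return 1.0 - h / h_max

# A patch seen almost exclusively on one object is highly salient ...
print(round(saliency([0.97, 0.01, 0.01, 0.01]), 2))  # -> 0.88
# ... while an ambiguous patch is not.
print(round(saliency([0.25, 0.25, 0.25, 0.25]), 2))  # -> 0.0
```

Thresholding such a score would keep only discriminative patterns for the sparse object representation described above; the posterior itself, the threshold, and the exact normalization are assumptions for illustration.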
Pages: 29-41
Page count: 13