Curiosity-Driven Salient Object Detection With Fragment Attention

Citations: 2
Authors
Wang, Zheng [1 ]
Wang, Pengzhi [1 ]
Han, Yahong [1 ]
Zhang, Xue [1 ]
Sun, Meijun [1 ]
Tian, Qi [2 ]
Affiliations
[1] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300350, Peoples R China
[2] Huawei Cloud & AI, Shenzhen 518129, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Object detection; Visualization; Semantics; Task analysis; Saliency detection; Cognition; Salient object detection; fragment attention mechanism; curiosity-driven learning; MODEL;
DOI
10.1109/TIP.2022.3203605
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Recent deep learning based salient object detection methods with attention mechanisms have achieved great success. However, existing attention mechanisms generally fall into two categories: one computes weights for all positions indiscriminately, which yields computational redundancy, while the other, such as hard attention, focuses on a small and often randomly selected part of the image, which can be inaccurate owing to an insufficiently targeted selection of tokens. To alleviate these problems, we design a Curiosity-driven Network (CNet) and a Curiosity-driven Learning Algorithm (CLA) based on the fragment attention (FA) mechanism newly defined in this paper. FA imitates the process of cognitive perception driven by human curiosity and divides the degree of curiosity into three levels, i.e., curious, a little curious, and not curious. These three levels correspond to five saliency degrees: salient and non-salient, likely salient and likely non-salient, and completely uncertain. As the network gains more knowledge, CLA transforms the curiosity degree of each pixel to yield detail-enriched saliency maps. To extract more context-aware information about potential salient objects and provide a better foundation for CLA, a high-level feature extraction module (HFEM) is further proposed. Based on the higher-quality features extracted by HFEM, FA can classify the curiosity degree of each pixel more reasonably and accurately. Extensive experiments on five popular datasets clearly demonstrate that our method outperforms state-of-the-art approaches without any pre-processing or post-processing operations.
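The abstract describes fragment attention only at a conceptual level. Below is a minimal, hypothetical PyTorch sketch of one way such a mechanism could be realized: per-pixel curiosity levels are derived from the confidence of an intermediate saliency prediction, and extra refinement is spent only on the curious fragment of pixels. The function names (curiosity_levels, fragment_refine), the thresholds, and the gating scheme are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def curiosity_levels(saliency_logits, low=0.3, high=0.7):
    # Probability that each pixel is salient, from an intermediate prediction.
    p = torch.sigmoid(saliency_logits)
    # Uncertainty: 0 where the network is confident (p near 0 or 1),
    # 1 where it is completely undecided (p near 0.5).
    uncertainty = 1.0 - 2.0 * (p - 0.5).abs()
    level = torch.zeros_like(p, dtype=torch.long)   # 0: not curious (decided)
    level[uncertainty > low] = 1                    # 1: a little curious
    level[uncertainty > high] = 2                   # 2: curious (fully uncertain)
    return p, level

def fragment_refine(features, saliency_logits, refine_head):
    # Spend extra computation only on the "curious" fragment of pixels;
    # confident pixels keep their original prediction.
    p, level = curiosity_levels(saliency_logits)
    curious = (level == 2).float()
    refined = torch.sigmoid(refine_head(features))
    return curious * refined + (1.0 - curious) * p

# Toy usage: 64-channel features and a 1x1 conv as a hypothetical refinement head.
features = torch.randn(1, 64, 32, 32)
logits = torch.randn(1, 1, 32, 32)
refine_head = nn.Conv2d(64, 1, kernel_size=1)
saliency = fragment_refine(features, logits, refine_head)  # shape (1, 1, 32, 32)
```

Under this reading, the five saliency degrees would correspond to p high or low at level 0 (salient / non-salient), p above or below 0.5 at level 1 (likely salient / likely non-salient), and the completely uncertain pixels at level 2, which are re-scored by the refinement head.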
Pages: 5989-6001
Number of Pages: 13
Related Papers
50 records in total
  • [1] Curiosity-driven 3D Object Detection Without Labels
    Griffiths, David
    Boehm, Jan
    Ritschel, Tobias
    [J]. 2021 INTERNATIONAL CONFERENCE ON 3D VISION (3DV 2021), 2021, : 525 - 534
  • [2] CURIOSITY-DRIVEN RESEARCH
    HUGHBANKS, T
    [J]. CHEMICAL & ENGINEERING NEWS, 1995, 73 (49) : 5 - 5
  • [3] Curiosity-Driven Optimization
    Schaul, Tom
    Sun, Yi
    Wierstra, Daan
    Gomez, Faustino
    Schmidhuber, Juergen
    [J]. 2011 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION (CEC), 2011, : 1343 - 1349
  • [4] Flow driven attention network for video salient object detection
    Zhou, Feng
    Shuai, Hui
    Liu, Qingshan
    Guo, Guodong
    [J]. IET IMAGE PROCESSING, 2020, 14 (06) : 997 - 1004
  • [5] Curiosity-driven method development
    Kaitlin McCardle
    [J]. Nature Computational Science, 2022, 2 : 542 - 544
  • [6] Curiosity-driven method development
    McCardle, Kaitlin
    Head-Gordon, Martin
    [J]. NATURE COMPUTATIONAL SCIENCE, 2022, 2 (09): 542 - 544
  • [7] Encouraging Curiosity-Driven Science
    [Anonymous]
    [J]. CHEMICAL ENGINEERING PROGRESS, 2014, 110 (06) : 20 - 21
  • [8] ATTENTION-BASED CURIOSITY-DRIVEN EXPLORATION IN DEEP REINFORCEMENT LEARNING
    Reizinger, Patrik
    Szemenyei, Marton
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 3542 - 3546
  • [9] Curiosity-driven phonetic learning
    Moulin-Frier, Clement
    Oudeyer, Pierre-Yves
    [J]. 2012 IEEE INTERNATIONAL CONFERENCE ON DEVELOPMENT AND LEARNING AND EPIGENETIC ROBOTICS (ICDL), 2012,
  • [10] Curiosity-Driven Attention for Anomaly Road Obstacles Segmentation in Autonomous Driving
    Ren, Xiangxuan
    Li, Min
    Li, Zhenhua
    Wu, Wentao
    Bai, Lin
    Zhang, Weidong
    [J]. IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2023, 8 (03): 2233 - 2243