Attention-based active visual search for mobile robots

Cited: 14
Authors
Rasouli, Amir [1 ,2 ]
Lanillos, Pablo [3 ]
Cheng, Gordon [3 ]
Tsotsos, John K. [1 ,2 ]
Affiliations
[1] York Univ, Dept Elect Engn & Comp Sci, Toronto, ON, Canada
[2] York Univ, Ctr Vis Res, Toronto, ON, Canada
[3] Tech Univ Munich, ICS, Arcisstr 21, D-80333 Munich, Germany
Funding
EU Horizon 2020; Natural Sciences and Engineering Research Council of Canada (NSERC)
Keywords
Active visual search; Visual attention; Probabilistic lost target search; Top-down modulation; Search and rescue; TOP-DOWN; OBJECT; COMPLEXITY; MODEL; TARGET; ENVIRONMENTS; FRAMEWORK; VISION; PATH;
DOI
10.1007/s10514-019-09882-z
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
We present an active visual search model for finding objects in unknown environments. The proposed algorithm guides the robot towards the sought object using the relevant stimuli provided by the visual sensors. Existing search strategies are either purely reactive or use simplified sensor models that do not exploit all the visual information available. In this paper, we propose a new model that actively extracts visual information via visual attention techniques and, in conjunction with a non-myopic decision-making algorithm, leads the robot to search more relevant areas of the environment. The attention module couples both top-down and bottom-up attention models enabling the robot to search regions with higher importance first. The proposed algorithm is evaluated on a mobile robot platform in a 3D simulated environment. The results indicate that the use of visual attention significantly improves search, but the degree of improvement depends on the nature of the task and the complexity of the environment. In our experiments, we found that performance enhancements of up to 42% in structured and 38% in highly unstructured cluttered environments can be achieved using visual attention mechanisms.
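The abstract describes coupling bottom-up (stimulus-driven) and top-down (target-driven) attention maps so the robot visits higher-importance regions first. A minimal sketch of that idea, assuming a simple weighted fusion of two normalized saliency maps and a greedy choice of the most salient tile (the function names, the weight `w_td`, and the tiling scheme are illustrative assumptions, not the paper's actual model):

```python
import numpy as np

def combined_saliency(bottom_up, top_down, w_td=0.7):
    """Fuse bottom-up and top-down saliency maps into one priority map.

    Each map is normalized to sum to 1 so neither dominates by scale alone;
    w_td controls how strongly target-specific (top-down) cues are weighted.
    """
    bu = bottom_up / (bottom_up.sum() + 1e-12)
    td = top_down / (top_down.sum() + 1e-12)
    return w_td * td + (1.0 - w_td) * bu

def next_search_region(saliency, region_size=4):
    """Pick the tile with the highest mean saliency as the next region to search."""
    h, w = saliency.shape
    best, best_score = None, -np.inf
    for i in range(0, h, region_size):
        for j in range(0, w, region_size):
            score = saliency[i:i + region_size, j:j + region_size].mean()
            if score > best_score:
                best, best_score = (i, j), score
    return best
```

In the paper this greedy step is replaced by a non-myopic decision-making algorithm that weighs future observations, but the fused attention map plays the same role of ranking regions before the robot moves.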
Pages: 131-146
Page count: 16
Related papers
50 results
  • [1] Attention-based active visual search for mobile robots
    Amir Rasouli
    Pablo Lanillos
    Gordon Cheng
    John K. Tsotsos
    [J]. Autonomous Robots, 2020, 44 : 131 - 146
  • [2] Attention-based navigation in mobile robots using a reconfigurable sensor
    Maris, M
    [J]. ROBOTICS AND AUTONOMOUS SYSTEMS, 2001, 34 (01) : 53 - 63
  • [3] Visual Attention-based Watermarking
    Oakes, Matthew
    Bhowmik, Deepayan
    Abhayaratne, Charith
    [J]. 2011 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2011, : 2653 - 2656
  • [4] Attention-based visual processes
    Cavanagh, P
    [J]. CANADIAN PSYCHOLOGY-PSYCHOLOGIE CANADIENNE, 1996, 37 (01): : 59 - 59
  • [5] Attention-based visual processes
    Cavanagh, P
    [J]. INTERNATIONAL JOURNAL OF PSYCHOLOGY, 1996, 31 (3-4) : 3363 - 3363
  • [6] Color segmentation for visual attention of mobile robots
    Aziz, MZ
    Shafik, MS
    Mertsching, B
    Munir, A
    [J]. IEEE: 2005 INTERNATIONAL CONFERENCE ON EMERGING TECHNOLOGIES, PROCEEDINGS, 2005, : 115 - 120
  • [7] CONTROL OF VISUAL-ATTENTION IN MOBILE ROBOTS
    CLARK, JJ
    FERRIER, NJ
    [J]. PROCEEDINGS - 1989 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOL 1-3, 1989, : 826 - 831
  • [8] Attention-based visual routines: sprites
    Cavanagh, P
    Labianca, AT
    Thornton, IM
    [J]. COGNITION, 2001, 80 (1-2) : 47 - 60
  • [9] Dynamic Attention-based Visual Odometry
    Kuo, Xin-Yu
    Liu, Chien
    Lin, Kai-Chen
    Luo, Evan
    Chen, Yu-Wen
    Lee, Chun-Yi
    [J]. 2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020, : 5753 - 5760
  • [10] Dynamic Attention-based Visual Odometry
    Kuo, Xin-Yu
    Liu, Chien
    Lin, Kai-Chen
    Lee, Chun-Yi
    [J]. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020, : 160 - 169