An adaptable architecture for human-robot visual interaction

Times cited: 1
Authors
Anisetti, Marco [1 ]
Bellandi, Valerio [1 ]
Damiani, Ernesto [1 ]
Jeon, Gwanggil [2 ]
Jeong, Jechang [2 ]
Affiliations
[1] Univ Milan, Dept Informat Technol, I-20122 Milan, Italy
[2] Hanyang Univ, Dept Elect & Comp Engn, Seoul, South Korea
Keywords
DOI
10.1109/IECON.2007.4460411
Chinese Library Classification (CLC)
T [Industrial Technology];
Discipline code
08;
Abstract
Face recognition has received increasing attention over the past decade as one of the most promising applications of image analysis and processing. One emerging application field is Human-Machine Interaction involving robotic vision. For many applications in this field (including face identification and expression recognition), the precision of facial feature detection and the computational burden are both critical issues. This paper presents a completely tunable hybrid method for accurate face localization, based on a quick-and-dirty preliminary detection followed by 2D tracking. Our technique guarantees complete control over the trade-off between performance and result quality and can be successfully applied to intelligent robotic vision. We use our approach to design a Robotic Vision Architecture capable of selecting, from a set of strategies, the one that yields the best results.
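As an illustration only (the record contains no code), the sketch below mimics the two-stage idea described in the abstract: a cheap, approximate detector bootstraps the face location, after which a lightweight 2D tracking step refines it frame by frame and falls back to re-detection when the track is lost. It uses OpenCV's stock Haar cascade and plain template matching as stand-ins for whatever detector and tracker the authors actually employ; the class name, the search margin, and the threshold values are invented for this example.

```python
"""
Illustrative sketch, not the authors' implementation: coarse detection
followed by local 2D tracking, with re-detection on track loss.
Assumes the opencv-python wheel, which ships the Haar cascades under
cv2.data.haarcascades.
"""
import cv2


class CoarseToTrackFaceLocalizer:
    def __init__(self, search_margin=40, redetect_threshold=0.55):
        # Haar cascade: fast but approximate ("quick-and-dirty" stage)
        self._detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        self._template = None      # grayscale patch of the last confirmed face
        self._bbox = None          # (x, y, w, h) of the last face location
        self._margin = search_margin
        self._thresh = redetect_threshold

    def _detect(self, gray):
        faces = self._detector.detectMultiScale(gray, scaleFactor=1.2,
                                                minNeighbors=5)
        if len(faces) == 0:
            return None
        # keep the largest detection as the interaction subject
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        self._bbox = (x, y, w, h)
        self._template = gray[y:y + h, x:x + w].copy()
        return self._bbox

    def _track(self, gray):
        x, y, w, h = self._bbox
        m = self._margin
        # 2D tracking step: search only a window around the previous position
        x0, y0 = max(0, x - m), max(0, y - m)
        x1 = min(gray.shape[1], x + w + m)
        y1 = min(gray.shape[0], y + h + m)
        window = gray[y0:y1, x0:x1]
        if window.shape[0] < h or window.shape[1] < w:
            return None
        res = cv2.matchTemplate(window, self._template, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score < self._thresh:
            return None            # track lost: caller falls back to detection
        self._bbox = (x0 + loc[0], y0 + loc[1], w, h)
        return self._bbox

    def locate(self, frame_bgr):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        if self._bbox is None or self._template is None:
            return self._detect(gray)
        return self._track(gray) or self._detect(gray)
```

Calling `localizer.locate(frame)` once per frame keeps the expensive detector out of the steady-state loop; adjusting the search margin and re-detection threshold is one simple way to expose the kind of performance versus result-quality trade-off the abstract describes.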
Pages: 119+
Number of pages: 2
Related papers (50 in total)
  • [1] A Socially Adaptable Framework for Human-Robot Interaction
    Tanevska, Ana
    Rea, Francesco
    Sandini, Giulio
    Canamero, Lola
    Sciutti, Alessandra
    FRONTIERS IN ROBOTICS AND AI, 2020, 7
  • [2] Visual human-robot interaction
    Heinzmann, J
    Zelinsky, A
    2001 INTERNATIONAL WORKSHOP ON BIO-ROBOTICS AND TELEOPERATION, PROCEEDINGS, 2001, : 113 - 118
  • [3] Visual Surveillance for Human-Robot Interaction
    Martinez-Martin, Ester
    del Pobil, Angel P.
    PROCEEDINGS 2012 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2012, : 3333 - 3338
  • [4] Beyond the Default: The Effects of Adaptable Robot Speed in Industrial Human-Robot Interaction
    Roesler, Eileen
    Meerwein, Jule
    Krueger, Joerg
    Onnasch, Linda
    COMPANION OF THE 2024 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, HRI 2024 COMPANION, 2024, : 896 - 900
  • [5] Intelligent Control Architecture for Human-Robot Interaction
    Alves, Silas F. R.
    Silva, Ivan N.
    Ferasoli Filho, Humberto
    2014 2ND BRAZILIAN ROBOTICS SYMPOSIUM (SBR) / 11TH LATIN AMERICAN ROBOTICS SYMPOSIUM (LARS) / 6TH ROBOCONTROL WORKSHOP ON APPLIED ROBOTICS AND AUTOMATION, 2014, : 259 - 264
  • [6] A Flexible and Scalable Architecture for Human-Robot Interaction
    Recupero, Diego Reforgiato
    Dessi, Danilo
    Concas, Emanuele
    AMBIENT INTELLIGENCE (AMI 2019), 2019, 11912 : 311 - 317
  • [7] Towards Visual Dialogue for Human-Robot Interaction
    Part, Jose L.
    Garcia, Daniel Hernandez
    Yu, Yanchao
    Gunson, Nancie
    Dondrup, Christian
    Lemon, Oliver
    HRI '21: COMPANION OF THE 2021 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, 2021, : 670 - 672
  • [8] Visual tracking of silhouettes for human-robot interaction
    Menezes, P
    Brèthes, L
    Lerasle, F
    Danès, P
    Dias, J
    PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS 2003, VOL 1-3, 2003, : 971 - 976
  • [9] Fuzzy visual detection for human-robot interaction
    Shieh, Ming-Yuan
    Hsieh, Chung-Yu
    Hsieh, Tsung-Min
    ENGINEERING COMPUTATIONS, 2014, 31 (08) : 1709 - 1719
  • [10] Emotionally Driven Robot Control Architecture for Human-Robot Interaction
    Novikova, Jekaterina
    Gaudl, Swen
    Bryson, Joanna
    TOWARDS AUTONOMOUS ROBOTIC SYSTEMS, 2014, 8069 : 261 - 263