HUMAN-ROBOT INTERFACE WITH ATTENTION

Cited by: 0
Authors
IMAI, M [1 ]
ANZAI, Y [1 ]
HIRAKI, K [1 ]
Affiliations
[1] ELECTROTECH LAB, TSUKUBA, IBARAKI 305, JAPAN
Keywords
ATTENTION MECHANISM; UTTERANCE GENERATION; MULTIMODAL INTERFACE; HUMAN-ROBOT INTERACTION;
DOI
10.1002/scj.4690261209
CLC classification number
TP3 [Computing technology, computer technology];
Subject classification number
0812;
Abstract
This paper describes Linta-II, an utterance generation system developed to realize a flexible human-robot interface. Linta-II is implemented on an autonomous mobile robot and generates utterances appropriate to the situation based on information from the robot's on-board sensors. In Linta-II, sensor information is acquired efficiently through an attention mechanism, constructed as a set of entities that execute simple symbol processing. The mechanism realizes two kinds of attention: involuntary and voluntary. Involuntary attention attends immediately to external events; voluntary attention is activated by other modules, such as the action control unit, and performs top-down processing. By combining these two attention mechanisms, Linta-II can generate situation-appropriate utterances even when the rules determining which sensor information to refer to cannot be described completely. The effectiveness of the attention mechanism is demonstrated through several examples of utterances produced by Linta-II.
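The interplay of the two attention modes described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the class, threshold, and sensor names below are hypothetical, introduced only to show how a bottom-up (involuntary) trigger and a top-down (voluntary) request from another module might jointly select the sensor readings an utterance generator refers to.

```python
# Illustrative sketch (not the Linta-II implementation): a dispatcher that
# combines involuntary (bottom-up) and voluntary (top-down) attention to
# select which sensor readings an utterance generator should refer to.

class AttentionDispatcher:
    def __init__(self, alarm_threshold):
        # Readings at or above this magnitude trigger involuntary attention.
        self.alarm_threshold = alarm_threshold
        self.voluntary_focus = set()  # sensors requested top-down

    def request_focus(self, sensor_name):
        """Voluntary attention: another module (e.g. an action control
        unit) asks the dispatcher to watch a particular sensor."""
        self.voluntary_focus.add(sensor_name)

    def attend(self, readings):
        """Return the subset of readings worth referring to.

        readings: dict mapping sensor name -> numeric value.
        Involuntary attention fires on any salient (above-threshold)
        value; voluntary attention always passes requested sensors."""
        attended = {}
        for name, value in readings.items():
            involuntary = abs(value) >= self.alarm_threshold
            voluntary = name in self.voluntary_focus
            if involuntary or voluntary:
                attended[name] = value
        return attended


# Usage: the action control unit watches the front sonar (top-down),
# while a sudden bumper reading grabs attention bottom-up.
dispatcher = AttentionDispatcher(alarm_threshold=0.8)
dispatcher.request_focus("front_sonar")
attended = dispatcher.attend(
    {"front_sonar": 0.3, "bumper": 0.9, "battery": 0.5}
)
# attended contains front_sonar (voluntary) and bumper (involuntary),
# but not battery, which neither mode selected.
```

The point of the split mirrors the abstract's claim: even if no complete rule says which sensors matter, salient events surface involuntarily while task-relevant sensors are tracked voluntarily, so the generator still receives situation-relevant input.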
Pages: 83 - 95
Page count: 13
Related papers
50 records in total
  • [1] A human-robot interface for mobile manipulator
    Chen, Mingxuan
    Liu, Caibing
    Du, Guanglong
    INTELLIGENT SERVICE ROBOTICS, 2018, 11 (03) : 269 - 278
  • [2] Vision System for Human-Robot Interface
    Islam, Md Ezharul
    Begum, Nasima
    Bhuiyan, Md. Al-Amin
    2008 11TH INTERNATIONAL CONFERENCE ON COMPUTER AND INFORMATION TECHNOLOGY: ICCIT 2008, VOLS 1 AND 2, 2008, : 617 - 621
  • [3] Building a multimodal human-robot interface
    Perzanowski, D
    Schultz, AC
    Adams, W
    Marsh, E
    Bugajska, M
    IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS, 2001, 16 (01): : 16 - 21
  • [4] A human-robot interface based on electrooculography
    Chen, YX
    Newman, WS
    2004 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-5, PROCEEDINGS, 2004, : 243 - 248
  • [5] On tracking of eye for human-robot interface
    Bhuiyan, MA
    Ampornaramveth, V
    Muto, S
    Ueno, H
    INTERNATIONAL JOURNAL OF ROBOTICS & AUTOMATION, 2004, 19 (01): : 42 - 54
  • [6] Multimodal Interface for Human-Robot Collaboration
    Rautiainen, Samu
    Pantano, Matteo
    Traganos, Konstantinos
    Ahmadi, Seyedamir
    Saenz, Jose
    Mohammed, Wael M.
    Lastra, Jose L. Martinez
    MACHINES, 2022, 10 (10)
  • [8] Evaluation of an enhanced human-robot interface
    Johnson, CA
    Adams, JA
    Kawamura, K
    2003 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS, VOLS 1-5, CONFERENCE PROCEEDINGS, 2003, : 900 - 905
  • [9] Using Human Attention to Address Human-Robot Motion
    Paulin, Remi
    Fraichard, Thierry
    Reignier, Patrick
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2019, 4 (02): : 2038 - 2045
  • [10] Adaptive Attention Allocation in Human-Robot Systems
    Srivastava, Vaibhav
    Surana, Amit
    Bullo, Francesco
    2012 AMERICAN CONTROL CONFERENCE (ACC), 2012, : 2767 - 2774