Prosody-Driven Robot Arm Gestures Generation in Human-Robot Interaction

Cited by: 0
Authors
Aly, Amir [1 ]
Tapus, Adriana [1 ]
Affiliations
[1] ENSTA ParisTech, Cognit Robot Lab UEI, 32 Blvd Victor, F-75015 Paris, France
Keywords
HRI; CHMM; non-verbal and para-verbal mapping
DOI
Not available
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
In multimodal human-robot interaction (HRI), communication can be established through verbal, non-verbal, and/or para-verbal cues. The linguistic literature [3] shows that para-verbal and non-verbal communication are naturally synchronized. This research focuses on the relation between non-verbal and para-verbal communication by mapping prosodic cues to corresponding arm gestures. Our approach to synthesizing arm gestures uses coupled hidden Markov models (CHMMs), which can be viewed as a collection of HMMs that jointly model the segmented stream of prosodic features and the segmented streams of rotation values of the two arms' joints [4][1]. The Nao robot was used for testing.
Pages: 257-258
Page count: 2
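
The abstract describes synthesizing arm gestures with coupled hidden Markov models (CHMMs): interdependent chains over prosody states and arm-rotation states. Below is a minimal Python sketch of that coupling, assuming hypothetical state counts, randomly initialized transition tables, and a 4-DOF arm; it illustrates the CHMM structure only and is not the authors' trained model or implementation.

```python
# Minimal sketch of the CHMM idea from the abstract: two Markov chains,
# one over hidden prosody states and one over hidden arm-gesture states,
# where each chain's transition depends on the previous states of BOTH
# chains. All dimensions, state counts, and joint names are illustrative
# assumptions, not the authors' trained model.
import numpy as np

rng = np.random.default_rng(seed=0)

N_P = 3  # hypothetical hidden prosody states (e.g. low/mid/high pitch-energy)
N_A = 4  # hypothetical hidden arm-gesture states (clusters of joint rotations)

# Coupled transition tables:
#   A_p[p, a] is the distribution of the next prosody state given (p, a),
#   A_a[a, p] is the distribution of the next arm state given (a, p).
A_p = rng.dirichlet(np.ones(N_P), size=(N_P, N_A))  # shape (N_P, N_A, N_P)
A_a = rng.dirichlet(np.ones(N_A), size=(N_A, N_P))  # shape (N_A, N_P, N_A)

# Each arm state emits a mean vector of joint angles (radians) for a
# hypothetical 4-DOF arm (shoulder pitch/roll, elbow yaw/roll).
arm_means = rng.uniform(-1.5, 1.5, size=(N_A, 4))

def sample_coupled(T, p0=0, a0=0):
    """Sample T steps of the coupled chains and return the arm-angle
    trajectory; each step conditions on both previous hidden states."""
    p, a = p0, a0
    trajectory = []
    for _ in range(T):
        p_next = rng.choice(N_P, p=A_p[p, a])  # prosody step, coupled to arm state
        a_next = rng.choice(N_A, p=A_a[a, p])  # arm step, coupled to prosody state
        p, a = p_next, a_next
        trajectory.append(arm_means[a])
    return np.array(trajectory)

angles = sample_coupled(T=7)
print(angles.shape)  # (7, 4): one joint-angle vector per frame
```

In the paper's setting, the prosody chain would be decoded from speech features (e.g. pitch and energy) and the arm chain would drive the robot's joint trajectories; here both chains are simply sampled to show the bidirectional coupling that distinguishes a CHMM from independent HMMs.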