A role of multi-modal rhythms in physical interaction and cooperation

Cited by: 0

Authors
Kenta Yonekura
Chyon Hae Kim
Kazuhiro Nakadai
Hiroshi Tsujino
Shigeki Sugano
Affiliations
[1] Tsukuba Univ., Dept. of Intelligent Interaction Technologies
[2] Honda Research Institute Japan Co., Ltd.
[3] Waseda Univ., School of Creative Science and Engineering
Keywords
Test Subject; Motion Capture System; Practice Phase; Remote Controller; Pitch Direction;
DOI: Not available
Abstract
As fundamental research on human-robot interaction, this paper addresses the rhythm reference a human forms while turning a rope with another human. We hypothesized that, when interpreting rhythm cues to form a rhythm reference, humans rely on auditory and force rhythms more than on visual ones. The test subjects were 21 to 23 years old. We masked each subject's perception with three kinds of masks: an eye-mask, headphones, and a force mask. The force mask consists of a robot arm and a remote controller, which together let a test subject turn a rope without feeling any force from it. In the first experiment, each test subject interacted with an operator who turned a rope at a constant rhythm; eight trials were run per subject, one for each combination of the three masks. We measured the forces between the rope and the test subject and between the rope and the operator, computed the angular velocity of each force direction, and evaluated the error between the two angular velocities. In the second experiment, two test subjects interacted with each other. An auditory rhythm of 1.6-2.4 Hz was presented through headphones to indicate the target turning frequency; in addition, both subjects wore eye-masks. The first experiment showed that visual rhythm has little influence on rope-turning cooperation between humans. The second experiment provided firmer evidence for the same hypothesis, since the test subjects neglected their visual rhythms.
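The error measure described above (the difference between the angular velocities of the two force directions) can be pictured with a short sketch. The following is a minimal illustration, not the authors' code: it assumes the force sensors yield sampled 2-D force vectors in the rope's rotation plane, and the function names, the sampling-rate parameter fs, and the choice of an RMS summary statistic are all assumptions made for illustration.

```python
# Minimal sketch (assumed pipeline, not the paper's implementation):
# recover each force-direction angle from sampled 2-D force vectors,
# differentiate to angular velocity, and summarize the mismatch as RMS error.
import numpy as np

def angular_velocity(forces: np.ndarray, fs: float) -> np.ndarray:
    """Angular velocity (rad/s) of the force direction.

    forces: (N, 2) array of force components in the rope's rotation plane.
    fs: sampling frequency in Hz (assumed; e.g. a force-sensor logging rate).
    """
    angles = np.unwrap(np.arctan2(forces[:, 1], forces[:, 0]))  # continuous angle
    return np.gradient(angles) * fs  # d(angle)/dt in rad/s

def rms_velocity_error(subject: np.ndarray, operator: np.ndarray, fs: float) -> float:
    """RMS error between the subject's and operator's angular-velocity traces."""
    err = angular_velocity(subject, fs) - angular_velocity(operator, fs)
    return float(np.sqrt(np.mean(err ** 2)))
```

Under these assumptions, a lower RMS value would indicate tighter rhythmic coordination between the two rope turners; comparing it across the eight mask combinations is what would expose which sensory channel the subjects actually rely on.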
Related papers (50 in total)
  • [31] Multi-Modal Interaction for Space Telescience of Fluid Experiments
    Yu, Ge
    Liang, Ji
    Guo, Lili
    AIVR 2018: 2018 INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND VIRTUAL REALITY, 2018, : 31 - 37
  • [32] The role of multi-modal sensory stimuli and their interaction in oviposition learning and memory in Drosophila melanogaster
    Howcroft-Ferreira, Clara
    Mery, Frederic
    JOURNAL OF NEUROGENETICS, 2010, 24 : 28 - 29
  • [33] Multi-modal human robot interaction for map generation
    Ghidary, SS
    Nakata, Y
    Saito, H
    Hattori, M
    Takamori, T
    IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4: EXPANDING THE SOCIETAL ROLE OF ROBOTICS IN THE NEXT MILLENNIUM, 2001, : 2246 - 2251
  • [34] A multi-modal approach to selective interaction in assistive domains
    Feil-Seifer, D
    Mataric, MJ
    2005 IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2005, : 416 - 421
  • [35] Model Predictive Control with Gaussian Processes for Flexible Multi-Modal Physical Human Robot Interaction
    Haninger, Kevin
    Hegeler, Christian
    Peternel, Luka
    2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2022, 2022, : 6948 - 6955
  • [36] Wearable Multi-modal Interface for Human Multi-robot Interaction
    Gromov, Boris
    Gambardella, Luca M.
    Di Caro, Gianni A.
    2016 IEEE INTERNATIONAL SYMPOSIUM ON SAFETY, SECURITY, AND RESCUE ROBOTICS (SSRR), 2016, : 240 - 245
  • [37] Multi-level Interaction Network for Multi-Modal Rumor Detection
    Zou, Ting
    Qian, Zhong
    Li, Peifeng
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [38] Contextual and Cross-Modal Interaction for Multi-Modal Speech Emotion Recognition
    Yang, Dingkang
    Huang, Shuai
    Liu, Yang
    Zhang, Lihua
    IEEE SIGNAL PROCESSING LETTERS, 2022, 29 : 2093 - 2097
  • [39] Flexible Dual Multi-Modal Hashing for Incomplete Multi-Modal Retrieval
    Wei, Yuhong
    An, Junfeng
    INTERNATIONAL JOURNAL OF IMAGE AND GRAPHICS, 2024,
  • [40] Multi-Modal 2020: Multi-Modal Argumentation 30 Years Later
    Gilbert, Michael A.
    INFORMAL LOGIC, 2022, 42 (03): : 487 - 506