Development and Testing of a Multimodal Acquisition Platform for Human-Robot Interaction Affective Studies

Cited by: 13
Authors
Lazzeri, Nicole [1 ]
Mazzei, Daniele [1 ]
De Rossi, Danilo [1 ]
Affiliations
[1] Univ Pisa, Res Ctr E Piaggio, Pisa, Italy
Source
JOURNAL OF HUMAN-ROBOT INTERACTION | 2014, Vol. 3, No. 2
Keywords
human-robot interaction; affective computing; multimodal approach; physiological signals
DOI
10.5898/JHRI.3.2.Lazzeri
Chinese Library Classification (CLC)
TP24 [Robotics]
Subject Classification Codes
080202; 1405
Abstract
Human-Robot Interaction (HRI) studies have recently received increasing attention in various fields, from academic communities to engineering firms and the media. Many researchers have focused on developing tools to evaluate the performance of robotic systems and on extending the range of robot interaction modalities and contexts. Because people become emotionally engaged when interacting with computers and robots, attention has also turned to the study of affective human-robot interaction. This new field requires integrating approaches from different research backgrounds, such as psychology and engineering, to gain deeper insight into affective human-robot interaction. In this paper, we report the development of a multimodal acquisition platform called HIPOP (Human Interaction Pervasive Observation Platform). HIPOP is a modular data-gathering platform based on various hardware and software units that can be easily combined to create a custom acquisition setup for HRI studies. The platform uses modules for physiological signals, eye gaze, video, and audio acquisition to perform an integrated affective and behavioral analysis, and new hardware devices can also be added. The open-source hardware and software revolution has made many high-quality commercial and open-source products freely available for HRI and HCI research; such devices, most often used for data acquisition and robot control, can be easily included in HIPOP. Technical tests demonstrated that HIPOP reliably acquires large sets of data in terms of failure management and data synchronization: the platform automatically recovered from errors and faults without affecting the rest of the system, and the misalignment observed in the acquired data was not significant and did not affect the multimodal analysis. HIPOP was also tested in the context of the FACET (FACE Therapy) project, in which a humanoid robot called FACE (Facial Automaton for Conveying Emotions) was used to convey affective stimuli to children with autism. In the FACET project, psychologists without technical skills were able to use HIPOP to collect the data needed for their experiments without dealing with hardware issues, data integration challenges, or synchronization problems. The FACET case study highlighted the core feature of the HIPOP platform: multimodal data integration and fusion. This analytical approach allowed psychologists to study both behavioral and psychophysiological reactions and thus obtain a more complete view of the subjects' state during interaction with the robot. These results indicate that HIPOP could become an innovative tool for HRI affective studies aimed at inferring a more detailed view of a subject's feelings and behavior during interaction with affective and empathic robots.
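The abstract describes HIPOP's architecture only at a high level: pluggable acquisition modules, a shared time base for stream synchronization, and fault recovery that isolates a failing module from the rest of the system. The paper's actual implementation is not reproduced in this record, so the following Python sketch is purely illustrative; every name in it (AcquisitionModule, supervise, read_sample, and the stand-in device callbacks) is hypothetical rather than part of the HIPOP API.

# Illustrative sketch only: all class, function, and parameter names here
# are hypothetical; HIPOP's real implementation is not shown in this record.
import queue
import threading
import time


class AcquisitionModule(threading.Thread):
    """One pluggable data source (e.g., physiological signals, eye gaze).

    Each module stamps its samples against one shared monotonic clock so
    that streams from different devices can later be aligned and fused.
    """

    def __init__(self, name, read_sample, out_queue, period_s=0.01):
        super().__init__(daemon=True)
        self.name = name
        self._read_sample = read_sample   # device-specific callback
        self._out = out_queue
        self._period = period_s
        self.failed = threading.Event()

    def run(self):
        while not self.failed.is_set():
            try:
                sample = self._read_sample()
            except Exception:
                # The fault stays confined to this module; the supervisor
                # decides whether to restart it while other streams continue.
                self.failed.set()
                return
            self._out.put((time.monotonic(), self.name, sample))
            time.sleep(self._period)


def supervise(factories, out_queue, duration_s=1.0):
    """Start one module per factory and restart any module that fails."""
    modules = {name: make(out_queue) for name, make in factories.items()}
    for m in modules.values():
        m.start()
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        for name, m in list(modules.items()):
            if m.failed.is_set():
                modules[name] = factories[name](out_queue)  # recover in place
                modules[name].start()
        time.sleep(0.05)


if __name__ == "__main__":
    q = queue.Queue()
    factories = {
        # Stand-ins for real device drivers (ECG, gaze tracker, ...).
        "ecg": lambda out: AcquisitionModule("ecg", lambda: 0.42, out),
        "gaze": lambda out: AcquisitionModule("gaze", lambda: (0.1, 0.9), out),
    }
    supervise(factories, q, duration_s=0.3)
    t0, name, sample = q.get()
    print(f"first sample from {name} at t={t0:.3f}: {sample}")

The sketch illustrates the two properties the abstract emphasizes: every sample carries a timestamp from a single monotonic clock so downstream fusion can align modalities, and a crash in one module triggers a restart of that module alone rather than a shutdown of the whole acquisition session.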
Pages: 1-24 (24 pages)