Development and Testing of a Multimodal Acquisition Platform for Human-Robot Interaction Affective Studies

Cited by: 13
Authors
Lazzeri, Nicole [1 ]
Mazzei, Daniele [1 ]
De Rossi, Danilo [1 ]
Affiliations
[1] Univ Pisa, Res Ctr E Piaggio, Pisa, Italy
Source
JOURNAL OF HUMAN-ROBOT INTERACTION, 2014, Vol. 3, No. 2
Keywords
human-robot interaction; affective computing; multimodal approach; physiological signals
DOI
10.5898/JHRI.3.2.Lazzeri
CLC Classification
TP24 [Robotics]
Subject Classification
080202; 1405
Abstract
Human-Robot Interaction (HRI) studies have recently received increasing attention in various fields, from academic communities to engineering firms and the media. Many researchers have focused on developing tools to evaluate the performance of robotic systems and on extending the range of robot interaction modalities and contexts. Because people become emotionally engaged when interacting with computers and robots, attention has also turned to affective human-robot interaction. This new field of study requires integrating approaches from different research backgrounds, such as psychology and engineering, to gain deeper insight into human-robot affective interaction. In this paper, we report the development of a multimodal acquisition platform called HIPOP (Human Interaction Pervasive Observation Platform). HIPOP is a modular data-gathering platform based on various hardware and software units that can easily be combined into a custom acquisition setup for HRI studies. The platform uses modules for physiological signal, eye gaze, video, and audio acquisition to perform an integrated affective and behavioral analysis; new hardware devices can also be added to the platform. The open-source hardware and software revolution has made many high-quality commercial and open-source products available for HRI and HCI research. These devices, currently used most often for data acquisition and robot control, can easily be included in HIPOP. Technical tests demonstrated that HIPOP reliably acquires large sets of data in terms of failure management and data synchronization: the platform automatically recovered from errors and faults without compromising the system as a whole, and the misalignment observed in the acquired data was not significant and did not affect the multimodal analysis. HIPOP was also tested in the context of the FACET (FACE Therapy) project, in which a humanoid robot called FACE (Facial Automaton for Conveying Emotions) was used to convey affective stimuli to children with autism. In the FACET project, psychologists without technical skills were able to use HIPOP to collect the data needed for their experiments without dealing with hardware issues, data integration challenges, or synchronization problems. The FACET case study highlighted the core feature of the HIPOP platform: multimodal data integration and fusion. This analytical approach allowed psychologists to study both behavioral and psychophysiological reactions, obtaining a more complete view of the subjects' state during interaction with the robot. These results indicate that HIPOP could become an innovative tool for affective HRI studies aimed at inferring a more detailed view of a subject's feelings and behavior during interaction with affective and empathic robots.
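The abstract's description of HIPOP (pluggable hardware/software modules, a shared timeline for synchronization, and automatic recovery from per-module faults) can be illustrated with a short sketch. The Python below is an assumption-laden illustration of that general architecture, not HIPOP's actual code or API (the record contains none): hypothetical modules stamp each sample against one monotonic clock and push it into a shared queue, and a failing device is retried in isolation so it cannot take the rest of the platform down.

```python
import queue
import threading
import time


class AcquisitionModule(threading.Thread):
    """One pluggable data source (e.g., ECG, eye tracker, camera, microphone).
    Hypothetical interface sketching the modular design the abstract
    describes; class and method names are illustrative, not HIPOP's API."""

    def __init__(self, name, read_sample, out_queue, period_s):
        super().__init__(daemon=True)
        self.name = name
        self._read_sample = read_sample  # device-specific read function
        self._out = out_queue            # shared, thread-safe sink
        self._period = period_s
        self._stopped = threading.Event()

    def run(self):
        while not self._stopped.is_set():
            try:
                sample = self._read_sample()
            except Exception:
                # Fault isolation: a failing device is simply retried;
                # the other modules keep streaming undisturbed.
                time.sleep(self._period)
                continue
            # Stamp every sample against one monotonic clock so streams
            # with different rates can be aligned later during fusion.
            self._out.put((time.monotonic(), self.name, sample))
            time.sleep(self._period)

    def stop(self):
        self._stopped.set()


def collect(out_queue, duration_s):
    """Drain the shared queue into a single time-ordered record."""
    records = []
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        try:
            records.append(out_queue.get(timeout=0.1))
        except queue.Empty:
            pass
    return sorted(records, key=lambda r: r[0])


if __name__ == "__main__":
    sink = queue.Queue()
    modules = [
        AcquisitionModule("ecg", lambda: 0.0, sink, period_s=0.01),
        AcquisitionModule("gaze", lambda: (0, 0), sink, period_s=0.02),
    ]
    for m in modules:
        m.start()
    data = collect(sink, duration_s=1.0)
    for m in modules:
        m.stop()
    print(f"collected {len(data)} timestamped samples")
```

Stamping each sample with a single capture-time clock is one common way to keep heterogeneous streams (physiological signals, gaze, video, audio) alignable for offline fusion, which is the kind of synchronization behavior the abstract reports.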
Pages: 1-24 (24 pages)
Related Papers
50 records in total; the first 10 are listed below
  • [1] Affective Human-Robot Interaction with Multimodal Explanations
    Zhu, Hongbo; Yu, Chuang; Cangelosi, Angelo
    SOCIAL ROBOTICS, ICSR 2022, PT I, 2022, 13817: 241-252
  • [2] Knowledge acquisition through human-robot multimodal interaction
    Randelli, Gabriele; Bonanni, Taigo Maria; Iocchi, Luca; Nardi, Daniele
    INTELLIGENT SERVICE ROBOTICS, 2013, 6(1): 19-31
  • [3] Multimodal Approach to Affective Human-Robot Interaction Design with Children
    Okita, Sandra Y.; Ng-Thow-Hing, Victor; Sarvadevabhatla, Ravi K.
    ACM TRANSACTIONS ON INTERACTIVE INTELLIGENT SYSTEMS, 2011, 1(1)
  • [4] Affective Grounding in Human-Robot Interaction
    Jung, Malte F.
    PROCEEDINGS OF THE 2017 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI'17), 2017: 263-273
  • [5] Active Inference Through Energy Minimization in Multimodal Affective Human-Robot Interaction
    Horii, Takato; Nagai, Yukie
    FRONTIERS IN ROBOTICS AND AI, 2021, 8
  • [6] Multimodal Interaction for Human-Robot Teams
    Burke, Dustin; Schurr, Nathan; Ayers, Jeanine; Rousseau, Jeff; Fertitta, John; Carlin, Alan; Dumond, Danielle
    UNMANNED SYSTEMS TECHNOLOGY XV, 2013, 8741
  • [7] Affective state estimation for human-robot interaction
    Kulic, Dana; Croft, Elizabeth A.
    IEEE TRANSACTIONS ON ROBOTICS, 2007, 23(5): 991-1000
  • [8] Body Language in Affective Human-Robot Interaction
    Stoeva, Darja; Gelautz, Margrit
    HRI'20: COMPANION OF THE 2020 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, 2020: 606-608
  • [9] Towards the Development of Affective Facial Expression Recognition for Human-Robot Interaction
    Faria, Diego Resende; Vieira, Mario; Faria, Fernanda C. C.
    10TH ACM INTERNATIONAL CONFERENCE ON PERVASIVE TECHNOLOGIES RELATED TO ASSISTIVE ENVIRONMENTS (PETRA 2017), 2017: 300-304
  • [10] Recent advancements in multimodal human-robot interaction
    Su, Hang; Qi, Wen; Chen, Jiahao; Yang, Chenguang; Sandoval, Juan; Laribi, Med Amine
    FRONTIERS IN NEUROROBOTICS, 2023, 17