Development and Testing of a Multimodal Acquisition Platform for Human-Robot Interaction Affective Studies

Cited by: 13
Authors:
Lazzeri, Nicole [1 ]
Mazzei, Daniele [1 ]
De Rossi, Danilo [1 ]
Affiliations:
[1] Univ Pisa, Res Ctr E Piaggio, Pisa, Italy
Source:
JOURNAL OF HUMAN-ROBOT INTERACTION | 2014, Vol. 3, Issue 2
Keywords:
human-robot interaction; affective computing; multimodal approach; physiological signals;
DOI:
10.5898/JHRI.3.2.Lazzeri
Chinese Library Classification (CLC):
TP24 [Robotics]
Subject classification codes:
080202; 1405
Abstract:
Human-Robot Interaction (HRI) studies have recently received increasing attention in fields ranging from academic communities to engineering firms and the media. Many researchers have focused on developing tools to evaluate the performance of robotic systems and on extending the range of robot interaction modalities and contexts. Because people become emotionally engaged when interacting with computers and robots, attention has also turned to the study of affective human-robot interaction. This new field requires integrating approaches from different research backgrounds, such as psychology and engineering, to gain deeper insight into affective human-robot interaction. In this paper, we report the development of a multimodal acquisition platform called HIPOP (Human Interaction Pervasive Observation Platform). HIPOP is a modular data-gathering platform based on various hardware and software units that can be easily combined to create a custom acquisition setup for HRI studies. The platform uses modules for physiological signals, eye gaze, video, and audio acquisition to perform an integrated affective and behavioral analysis, and new hardware devices can be added to the platform. The open-source hardware and software revolution has made many high-quality commercial and open-source products freely available for HRI and HCI research; these devices are currently most often used for data acquisition and robot control, and they can be easily incorporated into HIPOP. Technical tests demonstrated that HIPOP reliably acquires large data sets in terms of both failure management and data synchronization: the platform automatically recovered from errors and faults without affecting the rest of the system, and the misalignment observed in the acquired data was not significant and did not affect the multimodal analysis. HIPOP was also tested in the context of the FACET (FACE Therapy) project, in which a humanoid robot called FACE (Facial Automaton for Conveying Emotions) was used to convey affective stimuli to children with autism. In the FACET project, psychologists without technical skills were able to use HIPOP to collect the data needed for their experiments without dealing with hardware issues, data integration challenges, or synchronization problems. The FACET case study highlighted the core feature of the HIPOP platform: multimodal data integration and fusion. This analytical approach allowed psychologists to study both behavioral and psychophysiological reactions and thus obtain a more complete view of the subjects' state during interaction with the robot. These results indicate that HIPOP could become an innovative tool for affective HRI studies aimed at inferring a more detailed view of a subject's feelings and behavior during interaction with affective and empathic robots.
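The abstract does not describe HIPOP's actual API, but the architecture it outlines (independent acquisition modules for physiological signals, gaze, video, and audio that write timestamped records against a shared clock, with per-module fault recovery so one failing device does not stop the platform) can be illustrated with a minimal sketch. The Python example below is purely hypothetical: the class names, the Sample record, and the dummy gaze/ECG sources are assumptions for illustration and are not HIPOP's implementation.

```python
import threading
import time
from dataclasses import dataclass
from queue import Queue
from typing import Callable, List

@dataclass
class Sample:
    """One timestamped record produced by an acquisition module (hypothetical)."""
    modality: str      # e.g. "ecg", "gaze", "video", "audio"
    timestamp: float   # seconds on the shared platform clock
    payload: object    # raw sample (frame, gaze point, signal value, ...)

class AcquisitionModule(threading.Thread):
    """Wraps one hardware/software source; errors stay local to the module."""
    def __init__(self, modality: str, read_sample: Callable[[], object],
                 out_queue: Queue, clock: Callable[[], float], rate_hz: float):
        super().__init__(daemon=True)
        self.modality = modality
        self.read_sample = read_sample
        self.out_queue = out_queue
        self.clock = clock
        self.period = 1.0 / rate_hz
        self.running = True

    def run(self):
        while self.running:
            try:
                payload = self.read_sample()  # device-specific driver call
                self.out_queue.put(Sample(self.modality, self.clock(), payload))
            except Exception as err:
                # A faulty device does not stop the platform; log and keep trying.
                print(f"[{self.modality}] recoverable error: {err}")
            time.sleep(self.period)

class Platform:
    """Collects samples from all modules onto a single, shared timeline."""
    def __init__(self):
        self.queue: Queue = Queue()
        self.t0 = time.monotonic()
        self.modules: List[AcquisitionModule] = []

    def clock(self) -> float:
        return time.monotonic() - self.t0  # one clock for every modality

    def add_module(self, modality, read_sample, rate_hz):
        m = AcquisitionModule(modality, read_sample, self.queue, self.clock, rate_hz)
        self.modules.append(m)
        return m

    def record(self, duration_s: float) -> List[Sample]:
        for m in self.modules:
            m.start()
        time.sleep(duration_s)
        for m in self.modules:
            m.running = False
        samples = []
        while not self.queue.empty():
            samples.append(self.queue.get())
        # Sorting by the shared timestamp yields a fused, time-aligned stream.
        return sorted(samples, key=lambda s: s.timestamp)

if __name__ == "__main__":
    # Dummy sources standing in for real gaze-tracker and ECG drivers.
    import random
    platform = Platform()
    platform.add_module("gaze", lambda: (random.random(), random.random()), rate_hz=60)
    platform.add_module("ecg", lambda: random.gauss(0.0, 1.0), rate_hz=250)
    data = platform.record(duration_s=1.0)
    print(f"collected {len(data)} time-aligned samples")
```

Under these assumptions, synchronization comes from stamping every modality against one monotonic clock rather than trusting each device's internal timing, and fault isolation comes from catching errors inside each module's own thread, which matches the behavior the abstract reports for failure management and data alignment.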
Pages: 1-24
Number of pages: 24
Related papers
(50 records in total)
  • [21] Affective Facial Expressions Recognition for Human-Robot Interaction
    Faria, Diego R.
    Vieira, Mario
    Faria, Fernanda C. C.
    Premebida, Cristiano
    [J]. 2017 26TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2017, : 805 - 810
  • [22] Continual Learning for Adaptive Affective Human-Robot Interaction
    Maharjan, Rahul Singh
    [J]. 2022 10TH INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION WORKSHOPS AND DEMOS, ACIIW, 2022,
  • [23] MULTIMODAL HUMAN ACTION RECOGNITION IN ASSISTIVE HUMAN-ROBOT INTERACTION
    Rodomagoulakis, I.
    Kardaris, N.
    Pitsikalis, V.
    Mavroudi, E.
    Katsamanis, A.
    Tsiami, A.
    Maragos, P.
    [J]. 2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS, 2016, : 2702 - 2706
  • [24] A unified multimodal control framework for human-robot interaction
    Cherubini, Andrea
    Passama, Robin
    Fraisse, Philippe
    Crosnier, Andre
    [J]. ROBOTICS AND AUTONOMOUS SYSTEMS, 2015, 70 : 106 - 115
  • [25] Multimodal Engagement Prediction in Multiperson Human-Robot Interaction
    Abdelrahman, Ahmed A.
    Strazdas, Dominykas
    Khalifa, Aly
    Hintz, Jan
    Hempel, Thorsten
    Al-Hamadi, Ayoub
    [J]. IEEE ACCESS, 2022, 10 : 61980 - 61991
  • [26] Challenges of Multimodal Interaction in the Era of Human-Robot Coexistence
    Zhang, Zhengyou
    [J]. ICMI'19: PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2019, : 2 - 2
  • [27] DiGeTac Unit for Multimodal Communication in Human-Robot Interaction
    Al, Gorkem Anil
    Martinez-Hernandez, Uriel
    [J]. IEEE SENSORS LETTERS, 2024, 8 (05) : 1 - 4
  • [28] A Multimodal Human-Robot Interaction Manager for Assistive Robots
    Abbasi, Bahareh
    Monaikul, Natawut
    Rysbek, Zhanibek
    Di Eugenio, Barbara
    Zefran, Milos
    [J]. 2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 6756 - 6762
  • [29] Multimodal QOL Estimation During Human-Robot Interaction
    Nakagawa, Satoshi
    Kuniyoshi, Yasuo
    [J]. 2024 IEEE INTERNATIONAL CONFERENCE ON DIGITAL HEALTH, ICDH 2024, 2024, : 23 - 32
  • [30] Multimodal Target Prediction for Rapid Human-Robot Interaction
    Mitra, Mukund
    Patil, Ameya
    Mothish, G. V. S.
    Kumar, Gyanig
    Mukhopadhyay, Abhishek
    Murthy, L. R. D.
    Chakrabarti, Partha Pratim
    Biswas, Pradipta
    [J]. COMPANION PROCEEDINGS OF 2024 29TH ANNUAL CONFERENCE ON INTELLIGENT USER INTERFACES, IUI 2024 COMPANION, 2024, : 18 - 23