Semantic Scene Understanding for Human-Robot Interaction

Cited: 0
Authors
Patel, Maithili [1 ]
Dogan, Fethiye Irmak [2 ]
Zeng, Zhen [3 ]
Baraka, Kim [4 ]
Chernova, Sonia [1 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] KTH Royal Inst Technol, Stockholm, Sweden
[3] JP Morgan AI Res, New York, NY USA
[4] Vrije Univ VU Amsterdam, Amsterdam, Netherlands
Keywords
scene semantics; robot learning; human-centered autonomy;
DOI
10.1145/3568294.3579960
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Service robots will be co-located with human users in unstructured, human-centered environments and will benefit from understanding the users' daily activities, preferences, and needs in order to fully assist them. This workshop explores how abstract semantic knowledge of the user's environment can be used as context for understanding and grounding information about the user's instructions, preferences, habits, and needs. While object semantics have primarily been investigated in robotics for perception and manipulation, recent work has shown the benefits of semantic modeling in a Human-Robot Interaction (HRI) context for understanding and assisting human users. The workshop focuses on semantic information useful for generalizing and interpreting user instructions, modeling user activities, anticipating user needs, and making a robot's internal reasoning more interpretable to the user. It therefore builds on topics from prior workshops such as Learning in HRI, behavior adaptation for assistance, and learning from humans, and aims to facilitate cross-pollination across these domains through a common thread: using abstract semantics of the physical world to support robot autonomy in assistive applications. We envision the workshop touching on research areas such as unobtrusive learning from observation, preference learning, continual learning, transparency of autonomous robot behavior, and user adaptation. The workshop aims to gather researchers working in these areas and foster fruitful discussion toward autonomous assistive robots that can learn and ground scene semantics to enhance HRI.
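To make the idea of grounding a user request against abstract scene semantics concrete, the following is a minimal Python sketch, not taken from the workshop or any of its papers: the scene representation, affordance labels, and preference scores are all illustrative assumptions.

# Minimal sketch (illustrative only): ground a vague request such as
# "bring me something to drink" by matching a requested affordance against
# semantic knowledge of the scene, ranked by learned user preferences.
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    name: str
    location: str                        # semantic location, e.g. "kitchen_counter"
    affordances: set = field(default_factory=set)

def ground_request(requested_affordance, scene, preferences):
    """Return objects that afford the request, most-preferred first."""
    candidates = [o for o in scene if requested_affordance in o.affordances]
    return sorted(candidates, key=lambda o: preferences.get(o.name, 0.0), reverse=True)

scene = [
    SceneObject("mug", "kitchen_counter", {"drink-from", "fill"}),
    SceneObject("water_bottle", "fridge", {"drink-from", "carry"}),
    SceneObject("book", "living_room_shelf", {"read"}),
]
preferences = {"water_bottle": 0.9, "mug": 0.4}   # e.g. learned from observation

# "Bring me something to drink" -> requested affordance "drink-from"
for obj in ground_request("drink-from", scene, preferences):
    print(f"candidate: {obj.name} at {obj.location}")

In this toy setup the robot would propose the water bottle from the fridge before the mug, illustrating how scene semantics (object affordances and locations) combined with user preferences can resolve an under-specified instruction.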
Pages: 941-943
Page count: 3