DRAGON: A Dialogue-Based Robot for Assistive Navigation With Visual Language Grounding

Cited by: 1
Authors
Liu, Shuijing [1 ]
Hasan, Aamir [1 ]
Hong, Kaiwen [1 ]
Wang, Runxuan [1 ]
Chang, Peixin [1 ]
Mizrachi, Zachary [1 ]
Lin, Justin [1 ]
McPherson, D. Livingston [1 ]
Rogers, Wendy A. [2 ]
Driggs-Campbell, Katherine [1 ]
Affiliations
[1] University of Illinois, Department of Electrical and Computer Engineering, Champaign, IL 61820, USA
[2] University of Illinois, Department of Applied Health Sciences, Champaign, IL 61820, USA
Keywords
Human-centered robotics; natural dialog for HRI; AI-enabled robotics
DOI
10.1109/LRA.2024.3362591
Chinese Library Classification (CLC)
TP24 [Robotics]
Discipline Codes
080202; 1405
Abstract
Persons with visual impairments (PwVI) have difficulty understanding and navigating the spaces around them. Current wayfinding technologies either focus solely on navigation or provide limited communication about the environment. Motivated by recent advances in visual-language grounding and semantic navigation, we propose DRAGON, a guiding robot powered by a dialogue system and the ability to associate the environment with natural language. By understanding commands from the user, DRAGON is able to guide the user to desired landmarks on the map, describe the environment, and answer questions from visual observations. Through effective use of dialogue, the robot grounds the user's free-form language to the environment and conveys semantic information to the user through spoken language. We conduct a user study with blindfolded participants in an everyday indoor environment. Our results demonstrate that DRAGON is able to communicate with the user smoothly, provide a good guiding experience, and connect users with their surrounding environment in an intuitive manner.
Pages: 3712-3719
Page count: 8
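
Illustrative note: the abstract describes grounding a user's free-form language to landmarks on the map, but this record contains no implementation details. The sketch below is only a minimal illustration of what such landmark grounding might look like; it assumes an off-the-shelf text encoder (sentence-transformers), a hypothetical landmark list, a made-up confidence threshold, and a ground_command helper that are not part of the paper. DRAGON's actual system relies on visual-language grounding models and a dialogue manager that are not reproduced here.

# Hypothetical sketch: match a free-form command to a known map landmark by
# embedding similarity. Not the paper's method; names and threshold are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

# Landmarks the robot's map is assumed to know about (illustrative only).
LANDMARKS = ["elevator", "water fountain", "vending machines", "restroom", "main exit"]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any text encoder would do

def ground_command(utterance: str, threshold: float = 0.4):
    """Return (landmark, score) for the best match, or None if no match is confident."""
    # Encode the utterance and all landmark names into the same embedding space.
    vecs = model.encode([utterance] + LANDMARKS, normalize_embeddings=True)
    query, landmark_vecs = vecs[0], vecs[1:]
    scores = landmark_vecs @ query          # cosine similarity (unit-norm vectors)
    best = int(np.argmax(scores))
    if scores[best] < threshold:
        return None                          # dialogue system could ask a clarifying question
    return LANDMARKS[best], float(scores[best])

print(ground_command("please take me to the elevator"))
# Expected to resolve to "elevator" with a high score; an ambiguous request
# would fall below the (arbitrary) threshold and trigger a follow-up question.

In a full system, a below-threshold match would be handled by the dialogue component, for example by asking the user to rephrase or offering candidate landmarks, rather than navigating to a weak guess.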