Assistive Navigation Using Deep Reinforcement Learning Guiding Robot With UWB/Voice Beacons and Semantic Feedbacks for Blind and Visually Impaired People

Cited by: 17
Authors
Lu, Chen-Lung [1, 2]
Liu, Zi-Yan [1, 2]
Huang, Jui-Te [1, 2]
Huang, Ching-I [1, 2]
Wang, Bo-Hui [1, 2]
Chen, Yi [1, 2]
Wu, Nien-Hsin [3]
Wang, Hsueh-Cheng [1, 2]
Giarre, Laura [4]
Kuo, Pei-Yi [3]
Affiliations
[1] Natl Chiao Tung Univ, Inst Elect & Control Engn, Dept Elect & Comp Engn, Hsinchu, Taiwan
[2] Natl Yang Ming Chiao Tung Univ, Inst Elect & Control Engn, Dept Elect & Comp Engn, Hsinchu, Taiwan
[3] Natl Tsing Hua Univ, Inst Serv Sci, Coll Technol Management, Hsinchu, Taiwan
[4] Univ Modena & Reggio Emilia, Dept Engn, Modena, Italy
Source
FRONTIERS IN ROBOTICS AND AI | 2021, Vol. 8
Keywords
UWB beacon; navigation; blind and visually impaired; guiding robot; verbal instruction; indoor navigation; deep reinforcement learning;
DOI
10.3389/frobt.2021.654132
CLC Number
TP24 [Robotics]
Subject Classification
080202; 1405
Abstract
Facilitating navigation in pedestrian environments is critical for enabling people who are blind and visually impaired (BVI) to achieve independent mobility. In this study, a deep reinforcement learning (DRL)-based assistive guiding robot with ultrawide-bandwidth (UWB) beacons that can navigate routes with designated waypoints was designed. Typically, a simultaneous localization and mapping (SLAM) framework is used to estimate the robot pose and navigational goal; however, SLAM frameworks are vulnerable in certain dynamic environments. The proposed navigation method is a learning approach based on state-of-the-art DRL that can effectively avoid obstacles. When used with UWB beacons, the proposed strategy is suitable for environments with dynamic pedestrians. We also designed a handle device with an audio interface that enables BVI users to interact with the guiding robot through intuitive feedback. The UWB beacons were equipped with an audio interface to provide environmental information: the on-handle and on-beacon verbal feedback gives BVI users points of interest and turn-by-turn information. BVI users were recruited to conduct navigation tasks in different scenarios, including a route designed in a simulated ward to represent daily activities. In real-world situations, SLAM-based state estimation can be affected by dynamic obstacles, and vision-based trail following may suffer from occlusions by pedestrians or other obstacles. The proposed system successfully navigated through environments with dynamic pedestrians, in which systems based on existing SLAM algorithms failed.
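The abstract's core idea, using fixed UWB beacons instead of SLAM to localize the robot, can be illustrated with a standard range-based localization step. The sketch below is not the authors' implementation (the paper does not publish its localization code); it is a minimal least-squares multilateration example, assuming known 2-D beacon positions and noise-free range measurements, with the function name `estimate_position` chosen for illustration.

```python
import numpy as np

def estimate_position(beacons, ranges):
    """Least-squares 2-D position estimate from UWB beacon ranges.

    Each beacon i gives ||x - b_i||^2 = r_i^2. Subtracting the last
    beacon's equation cancels the quadratic ||x||^2 term, leaving a
    linear system A x = b that lstsq solves for the position x.
    """
    beacons = np.asarray(beacons, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    ref, d_ref = beacons[-1], ranges[-1]
    A = 2.0 * (ref - beacons[:-1])
    b = (ranges[:-1] ** 2 - d_ref ** 2
         + np.sum(ref ** 2) - np.sum(beacons[:-1] ** 2, axis=1))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three beacons at known positions; true robot position at (1, 2).
beacons = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
true_pos = np.array([1.0, 2.0])
ranges = [np.linalg.norm(true_pos - np.array(bc)) for bc in beacons]
print(estimate_position(beacons, ranges))  # ≈ [1. 2.]
```

In a setup like the paper's, an estimate of this kind (fused with odometry and filtered for measurement noise) would supply the waypoint-relative goal that the DRL policy steers toward while avoiding pedestrians.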
Pages: 15
Related Papers
(50 in total)
  • [41] A Soft Actor-Critic Deep Reinforcement-Learning-Based Robot Navigation Method Using LiDAR. Liu, Yanjie; Wang, Chao; Zhao, Changsen; Wu, Heng; Wei, Yanlong. REMOTE SENSING, 2024, 16 (12).
  • [42] Dynamic warning zone and a short-distance goal for autonomous robot navigation using deep reinforcement learning. Montero, Estrella Elvia; Mutahira, Husna; Pico, Nabih; Muhammad, Mannan Saeed. COMPLEX & INTELLIGENT SYSTEMS, 2024, 10 (01): 1149-1166.
  • [43] Robot Navigation among External Autonomous Agents through Deep Reinforcement Learning using Graph Attention Network. Zhang, Tianle; Qiu, Tenghai; Pu, Zhiqiang; Liu, Zhen; Yi, Jianqiang. IFAC PAPERSONLINE, 2020, 53 (02): 9465-9470.
  • [44] Mobile Robot Navigation Based on Deep Reinforcement Learning with 2D-LiDAR Sensor using Stochastic Approach. Beomsoo, Han; Ravankar, Ankit A.; Emaru, Takanori. 2021 IEEE INTERNATIONAL CONFERENCE ON INTELLIGENCE AND SAFETY FOR ROBOTICS (ISR), 2021: 417-422.
  • [45] Deep Trail-Following Robotic Guide Dog in Pedestrian Environments for People who are Blind and Visually Impaired - Learning from Virtual and Real Worlds. Chuang, Tzu-Kuan; Lin, Ni-Ching; Chen, Jih-Shi; Hung, Chen-Hao; Huang, Yi-Wei; Teng, Chunchih; Huang, Haikun; Yu, Lap-Fai; Giarre, Laura; Wang, Hsueh-Cheng. 2018 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2018: 5849-5855.
  • [46] Human and environmental feature-driven neural network for path-constrained robot navigation using deep reinforcement learning. Pico, Nabih; Montero, Estrella; Amirbek, Alisher; Auh, Eugene; Jeon, Jeongmin; Alvarez-Alvarado, Manuel S.; Jamil, Babar; Algabri, Redhwan; Moon, Hyungpil. ENGINEERING SCIENCE AND TECHNOLOGY-AN INTERNATIONAL JOURNAL-JESTECH, 2025, 64.
  • [47] Group-Aware Robot Navigation in Crowds Using Spatio-Temporal Graph Attention Network With Deep Reinforcement Learning. Lu, Xiaojun; Faragasso, Angela; Wang, Yongdong; Yamashita, Atsushi; Asama, Hajime. IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10 (04): 4140-4147.
  • [48] Visual Target-Driven Robot Crowd Navigation with Limited FOV Using Self-Attention Enhanced Deep Reinforcement Learning. Li, Yinbei; Lyu, Qingyang; Yang, Jiaqiang; Salam, Yasir; Wang, Baixiang. SENSORS, 2025, 25 (03).
  • [49] Deep Learning-Based Fake-Banknote Detection for the Visually Impaired People Using Visible-Light Images Captured by Smartphone Cameras. Pham, Tuyen Danh; Park, Chanhum; Nguyen, Dat Tien; Batchuluun, Ganbayar; Park, Kang Ryoung. IEEE ACCESS, 2020, 8: 63144-63161.
  • [50] Image Visual Sensor Used in Health-Care Navigation in Indoor Scenes Using Deep Reinforcement Learning (DRL) and Control Sensor Robot for Patients Data Health Information. Seaman, Walead Kaled; Yavuz, Sirma. JOURNAL OF MEDICAL IMAGING AND HEALTH INFORMATICS, 2021, 11 (01): 104-113.