Visual saliency assistance mechanism based on visually impaired navigation systems

Cited by: 2
Authors
Lu, Fangfang [1 ]
Lian, Yingjie [1 ]
Jin, Bei [2 ]
Gu, Weiyan [2 ]
Affiliations
[1] Shanghai Univ Elect Power, Coll Comp Sci & Technol, Shanghai, Peoples R China
[2] Wenzhou Med Univ, Taizhou Hosp, Dept Oral & Maxillofacial Surg, Taizhou, Zhejiang, Peoples R China
Keywords
Visual impairment; Visual saliency; Assistance mechanism; Prediction; Attention; People; Scene; Model
DOI
10.1016/j.displa.2023.102482
Chinese Library Classification (CLC) number
TP3 [Computing technology; computer technology]
Discipline classification code
0812
Abstract
Visually impaired people face significant challenges in their work, studies, and daily lives. Numerous navigation devices are now available to help them with everyday tasks. These devices typically include modules for target recognition, distance measurement, and text-to-speech output, and they are intended both to help users avoid obstacles and to make them aware of surrounding objects through spoken descriptions. Because every potential obstacle must be avoided, the target recognition algorithms embedded in these devices have to recognize a wide range of targets, yet the text-to-speech module cannot read all of them aloud. We therefore designed a visual saliency assistance mechanism that predicts the regions of a scene to which humans are most likely to attend. Its output is overlaid on the target recognition results, which greatly reduces the number of targets that need to be read aloud. In this way, a navigation device can not only help users avoid obstacles but also convey the targets in the scene that most people would find of interest. The proposed mechanism consists of three components: a spatio-temporal feature extraction (STFE) module, a spatio-temporal feature fusion (STFF) module, and a multi-scale feature fusion (MSFF) module. The STFF module fuses long-term spatio-temporal features and improves temporal memory across frames, while the MSFF module integrates information at different scales to improve the accuracy of saliency prediction. The proposed visual saliency model can therefore support the efficient operation of visually impaired navigation systems. Its AUC-Judd (area under the ROC curve, Judd variant; AUC-J) scores were 93.9%, 93.8%, and 91.5% on three widely used saliency datasets, Hollywood2, UCF Sports, and DHF1K, respectively, showing that the proposed model outperforms current state-of-the-art models.
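The abstract describes overlaying the saliency-model output on the object-detection results so that only the most salient detections are passed to the text-to-speech module. The sketch below illustrates that post-processing step; the function name, the 0-1 saliency-map convention, and the mean-saliency threshold and top-k values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def filter_detections_by_saliency(detections, saliency_map, top_k=3, min_score=0.2):
    """Keep only the detections that fall on the most salient image regions.

    detections   : list of dicts with 'label' and 'box' = (x1, y1, x2, y2) in pixels
    saliency_map : 2-D float array in [0, 1], same resolution as the frame
    top_k        : at most this many targets are forwarded to text-to-speech
    min_score    : detections whose mean saliency falls below this value are dropped

    The ranking rule (mean saliency inside the box) and both thresholds are
    assumptions made for this sketch, not values reported in the paper.
    """
    scored = []
    for det in detections:
        x1, y1, x2, y2 = det["box"]
        region = saliency_map[y1:y2, x1:x2]
        score = float(region.mean()) if region.size else 0.0
        if score >= min_score:
            scored.append((score, det))
    # Announce only the few most salient targets instead of every detected object.
    scored.sort(key=lambda item: item[0], reverse=True)
    return [det for _, det in scored[:top_k]]

# Hypothetical usage with a single video frame:
saliency = np.random.rand(480, 640)            # stand-in for the model's saliency map
dets = [{"label": "bus", "box": (100, 120, 300, 360)},
        {"label": "trash can", "box": (10, 400, 60, 470)}]
for det in filter_detections_by_saliency(dets, saliency):
    print(f"announce: {det['label']}")          # handed to the text-to-speech module
```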
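Results are reported with the AUC-Judd metric. For clarity, the following is a brief re-implementation in the form commonly used by video-saliency benchmarks (fixated pixels as positives, thresholds swept over the saliency values at fixation locations); it is a generic sketch of the metric, not the authors' evaluation code.

```python
import numpy as np

def auc_judd(saliency_map, fixation_map):
    """AUC-Judd sketch: fixated pixels are positives, all other pixels negatives.

    saliency_map : 2-D float array of predicted saliency
    fixation_map : 2-D array, nonzero at human fixation locations
    """
    s = saliency_map.astype(float).ravel()
    f = fixation_map.ravel() > 0
    # Normalize the saliency map to [0, 1] so thresholds are comparable.
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)
    pos = np.sort(s[f])[::-1]          # saliency at fixated pixels, descending
    n_fix, n_pix = pos.size, s.size
    tpr, fpr = [0.0], [0.0]
    for i, thresh in enumerate(pos, start=1):
        above = np.count_nonzero(s >= thresh)
        tpr.append(i / n_fix)                        # fixations above threshold
        fpr.append((above - i) / (n_pix - n_fix))    # non-fixated pixels above threshold
    tpr.append(1.0)
    fpr.append(1.0)
    return float(np.trapz(tpr, fpr))                 # area under the ROC curve
```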
Pages: 10