RFRN: A recurrent feature refinement network for accurate and efficient scene text detection

Times Cited: 15
Authors
Deng, Guanyu [1 ]
Ming, Yue [1 ]
Xue, Jing-Hao [2 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Elect Engn, Beijing Key Lab Work Safety & Intelligent Monitor, Beijing 100876, Peoples R China
[2] UCL, Dept Stat Sci, London WC1E 6BT, England
Funding
Beijing Natural Science Foundation;
Keywords
Scene text detection; Recurrent segmentation; Feature pyramid network; Feature refinement;
DOI
10.1016/j.neucom.2020.10.099
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Scene text detection plays a vital role in scene text understanding, but detecting arbitrary-shaped text remains a significant challenge. To extract discriminative features, most recent state-of-the-art methods adopt heavy networks, resulting in parameter redundancy and inefficient inference. For accurate and efficient scene text detection, in this paper we propose a novel recurrent feature refinement network (RFRN). RFRN, as a recurrent segmentation framework, contains a recurrent path augmentation that refines the previous feature maps as inner states, which not only improves segmentation quality but also encourages parameter reuse and keeps the computational cost low. During testing, RFRN discards redundant prediction procedures and thus strikes a good balance between inference speed and accuracy. We conduct experiments on four challenging scene text benchmarks, CTW1500, Total-Text, ICDAR2015 and ICDAR2017-MLT, which include curved and multi-oriented texts against complex backgrounds. The results show that the proposed RFRN achieves competitive detection accuracy while maintaining computational efficiency. (c) 2020 Elsevier B.V. All rights reserved.
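To make the recurrent-refinement idea concrete, below is a minimal PyTorch sketch of refining a feature map over several steps with one shared convolutional block, so parameters are reused instead of stacked. This is an illustrative assumption based only on the abstract; the module name RecurrentRefiner, the step count, and the block design are hypothetical, not the authors' actual RFRN architecture.

```python
# Minimal sketch of recurrent feature refinement (hypothetical, not the
# authors' RFRN): one shared conv block is applied repeatedly, with the
# previous output fed back as an inner state.
import torch
import torch.nn as nn

class RecurrentRefiner(nn.Module):
    """Refines a feature map over num_steps passes of a single shared
    block, so the parameter count stays constant as steps increase."""
    def __init__(self, channels: int = 256, num_steps: int = 3):
        super().__init__()
        self.num_steps = num_steps
        # Shared block: takes the raw features concatenated with the
        # current state (2 * channels in) and emits a refined map.
        self.refine = nn.Sequential(
            nn.Conv2d(channels * 2, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        state = feat  # the previous feature map acts as the inner state
        for _ in range(self.num_steps):
            # Condition each step on both the raw features and the state,
            # then update the state with the refined result.
            state = self.refine(torch.cat([feat, state], dim=1))
        return state

if __name__ == "__main__":
    x = torch.randn(1, 256, 64, 64)     # e.g. one FPN level
    print(RecurrentRefiner()(x).shape)  # torch.Size([1, 256, 64, 64])
```

Because the same block runs at every step, extra refinement passes add computation but no parameters, which matches the abstract's claim of parameter reuse at low cost.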
Pages: 465 - 481
Number of Pages: 17
Related Papers
50 records in total (first 10 shown)
  • [1] Cao, Dongping; Dang, Jiachen; Zhong, Yong. Towards Accurate Scene Text Detection with Bidirectional Feature Pyramid Network. SYMMETRY-BASEL, 2021, 13(03).
  • [2] Lian, Zhe; Yin, Yanjun; Hu, Wei; Xu, Qiaozhi; Zhi, Min; Lu, Jingfang; Qi, Xuanhao. Refinement Correction Network for Scene Text Detection. ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT VIII, ICIC 2024, 2024, 14869: 93-105.
  • [3] Wang, Yuxin; Xie, Hongtao; Zha, Zhengjun; Tian, Youliang; Fu, Zilong; Zhang, Yongdong. R-Net: A Relationship Network for Efficient and Accurate Scene Text Detection. IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 23: 1316-1329.
  • [4] Cai, Chenqin; Lv, Pin; Su, Bing. Feature Fusion Network for Scene Text Detection. 2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2018: 2755-2759.
  • [5] Su, Bolan; Lu, Shijian. Accurate Scene Text Recognition Based on Recurrent Neural Network. COMPUTER VISION - ACCV 2014, PT I, 2015, 9003: 35-48.
  • [6] Mohanty, Sabyasachi; Dutta, Tanima; Gupta, Hari Prabhat. Recurrent Global Convolutional Network for Scene Text Detection. 2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2018: 2750-2754.
  • [7] Su, Yuchen; Chen, Zhineng; Shao, Zhiwen; Du, Yuning; Ji, Zhilong; Bai, Jinfeng; Zhou, Yong; Jiang, Yu-Gang. LRANet: Towards Accurate and Efficient Scene Text Detection with Low-Rank Approximation Network. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 5, 2024: 4979-4987.
  • [8] Liu, Fagui; Chen, Cheng; Gu, Dian; Zheng, Jingzhong. FTPN: Scene Text Detection With Feature Pyramid Based Text Proposal Network. IEEE ACCESS, 2019, 7: 44219-44228.
  • [9] Cui, Chenwei; Lu, Liangfu; Tan, Zhiyuan; Hussain, Amir. Conceptual Text Region Network: Cognition-Inspired Accurate Scene Text Detection. NEUROCOMPUTING, 2021, 464: 252-264.
  • [10] Xie, Pengyuan; Xiao, Jing; Cao, Yang; Zhu, Jia; Khan, Asad. RefineText: Refining Multi-Oriented Scene Text Detection with a Feature Refinement Module. 2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2019: 1756-1761.