Pixel-level intelligent recognition of concrete cracks based on DRACNN

Cited by: 5
Authors
Cui, Xiaoning [1 ]
Wang, Qicai [1 ,2 ]
Dai, Jinpeng [1 ,2 ]
Li, Sheng [1 ]
Xie, Chao [1 ]
Wang, Jianqiang [1 ]
Affiliations
[1] Lanzhou Jiaotong Univ, Sch Civil Engn, Lanzhou 730070, Peoples R China
[2] Natl & Prov Joint Engn Lab Rd & Bridge Disaster P, Lanzhou 730070, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Artificial intelligence; Machine learning; Surfaces; Concrete crack identification;
DOI
10.1016/j.matlet.2021.130867
CLC Classification
T [Industrial Technology]
Subject Classification
08
Abstract
Surface damage identification based on computer vision has become a research hotspot in the field of materials surfaces. Cracks are among the most common forms of material damage, so intelligent crack recognition is of great significance for identifying and estimating the evolution of material damage. To improve the accuracy of intelligent crack recognition, a deep residual attention convolutional neural network (DRACNN) is proposed for semantic segmentation of concrete cracks. DRACNN is based on U-Net, adding recursive residual convolution blocks and an attention mechanism for more accurate pixel-level crack recognition. Compared with other mainstream semantic segmentation algorithms, the proposed DRACNN achieves better classification performance on concrete cracks, with an IoU of 73.95%, accuracy of 97.82%, precision of 78.48%, and recall of 67.95%.
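The four metrics reported in the abstract are standard pixel-level scores derived from the confusion counts between a predicted binary crack mask and the ground-truth mask. As a hedged illustration (not the authors' code), a minimal pure-Python sketch of how IoU, accuracy, precision, and recall are computed for the crack (positive) class:

```python
def pixel_metrics(pred, gt):
    """Pixel-level IoU, accuracy, precision, and recall for binary
    crack masks given as nested lists (1 = crack, 0 = background)."""
    tp = fp = fn = tn = 0
    for pred_row, gt_row in zip(pred, gt):
        for p, g in zip(pred_row, gt_row):
            if p and g:
                tp += 1          # crack pixel correctly detected
            elif p and not g:
                fp += 1          # background flagged as crack
            elif not p and g:
                fn += 1          # crack pixel missed
            else:
                tn += 1          # background correctly rejected
    iou = tp / (tp + fp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return iou, accuracy, precision, recall


# Tiny 3x3 example mask pair (hypothetical data, for illustration only)
pred = [[1, 1, 0], [0, 1, 0], [0, 0, 0]]
gt = [[1, 0, 0], [0, 1, 1], [0, 0, 0]]
iou, acc, prec, rec = pixel_metrics(pred, gt)
# tp=2, fp=1, fn=1, tn=5 -> IoU=0.5, accuracy=7/9, precision=2/3, recall=2/3
```

Note that with heavily imbalanced crack images, pixel accuracy is dominated by the background class (hence the 97.82% accuracy alongside a 73.95% IoU), which is why IoU, precision, and recall are the more informative scores for segmentation.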
Pages: 4
Related Papers
Total: 50
  • [21] Pixel-Level and Feature-Level Domain Adaptation for Heterogeneous SAR Target Recognition
    Chen, Zhuo
    Zhao, Lingjun
    He, Qishan
    Kuang, Gangyao
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2022, 19
  • [22] Pixel-Level Face Image Quality Assessment for Explainable Face Recognition
    Terhoerst, Philipp
    Huber, Marco
    Damer, Naser
    Kirchbuchner, Florian
    Raja, Kiran
    Kuijper, Arjan
    IEEE TRANSACTIONS ON BIOMETRICS, BEHAVIOR, AND IDENTITY SCIENCE, 2023, 5 (02): : 288 - 297
  • [23] Pixel-level fusion: a PDE-based approach
    Pop, Sorin
    Terebes, Romulus
    Borda, Monica
    Lavialle, Olivier
    ISSCS 2007: INTERNATIONAL SYMPOSIUM ON SIGNALS, CIRCUITS AND SYSTEMS, VOLS 1 AND 2, 2007, : 549 - +
  • [24] Pixel-Level Degradation for Text Image Super-Resolution and Recognition
    Qian, Xiaohong
    Xie, Lifeng
    Ye, Ning
    Le, Renlong
    Yang, Shengying
    ELECTRONICS, 2023, 12 (21)
  • [25] Pixel-Level Intelligent Segmentation and Measurement Method for Pavement Multiple Damages Based on Mobile Deep Learning
    Dong, Jiaxiu
    Li, Zhaonan
    Wang, Zibin
    Wang, Niannian
    Guo, Wentong
    Ma, Duo
    Hu, Haobang
    Zhong, Shan
    IEEE ACCESS, 2021, 9 : 143860 - 143876
  • [26] Pixel-level multicategory detection of visible seismic damage of reinforced concrete components
    Miao, Zenghui
    Ji, Xiaodong
    Okazaki, Taichiro
    Takahashi, Noriyuki
    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, 2021, 36 (05) : 620 - 637
  • [27] Concrete crack pixel-level segmentation: a comparison of scene illumination angle of incidence
    Dow, Hamish
    Perry, Marcus
    McAlorum, Jack
    Pennada, Sanjeetha
    e-Journal of Nondestructive Testing, 2024, 29 (07):
  • [28] Algorithm for pixel-level concrete pavement crack segmentation based on an improved U-Net model
    Zhang, Zixuan
    He, Yike
    Hu, Di
    Jin, Qiang
    Zhou, Manxu
    Liu, Zongwei
    Chen, Hongli
    Wang, He
    Xiang, Xinchen
    SCIENTIFIC REPORTS, 2025, 15 (01):
  • [29] Reflective Field for Pixel-Level Tasks
    Zhang, Liang
    Kong, Xiangwen
    Shen, Peiyi
    Zhu, Guangming
    Song, Juan
    Shah, Syed Afaq Ali
    Bennamoun, Mohammed
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 529 - 534
  • [30] Explaining Face Recognition Through SHAP-Based Pixel-Level Face Image Quality Assessment
    Biagi, Clara
    Rethfeld, Louis
    Kuijper, Arjan
    Terhoerst, Philipp
    2023 IEEE INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS, IJCB, 2023,