A Novel SGD-U-Network-Based Pixel-Level Road Crack Segmentation and Classification

Cited by: 2
Authors
Sekar, Aravindkumar [1 ]
Perumal, Varalakshmi [1 ]
Affiliations
[1] Anna Univ, Dept Comp Technol, MIT Campus, Chennai 600044, Tamil Nadu, India
Source
COMPUTER JOURNAL | 2023, Vol. 66, Issue 07
Keywords
road crack detection; road crack segmentation; deep learning; Stack Generative adversarial network Discriminator-U-Network (SGD-U-Network); ALGORITHM;
DOI
10.1093/comjnl/bxac029
CLC number
TP3 [Computing technology, computer technology];
Discipline code
0812 ;
Abstract
Automatic road crack detection plays a major role in developing intelligent transportation systems. The traditional approach of in-situ inspection is expensive and labor-intensive. To solve this problem, a novel approach for automatic road crack segmentation was developed using the Stack Generative adversarial network Discriminator-U-Network (SGD-U-Network). We collected 19 300 crack and non-crack images (the MIT-CHN-ORR dataset) from the Outer Ring Road of Chennai, Tamil Nadu, India. The MIT-CHN-ORR dataset was first pre-processed using traditional image processing techniques to generate ground-truth images. A two-stage stack Generative Adversarial Network (GAN) model (stage I and stage II) was introduced to generate high-resolution non-crack images. The features extracted from the stage-II Stack GAN Discriminator (SGD2) were then concatenated with every level of the expansion path of the SGD-U-Network to segment the crack regions of the input crack images. In addition, a multi-feature-based classifier was developed using the features extracted from SGD2 and the bottleneck layer of the SGD-U-Network. The proposed model was evaluated on the MIT-CHN-ORR dataset, and its performance was also analyzed on other existing benchmark datasets. The experimental analysis showed that the proposed method outperformed other state-of-the-art approaches.
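The fusion step described in the abstract, concatenating discriminator (SGD2) feature maps with every level of the U-Net expansion path, can be illustrated at the shape level. The sketch below is a minimal, assumption-laden illustration of channel-wise feature concatenation in NCHW layout; the layer sizes and the helper name `concat_features` are illustrative and are not taken from the authors' actual architecture.

```python
import numpy as np

def concat_features(decoder_feat, disc_feat):
    """Channel-wise concatenation (NCHW layout) of a U-Net decoder feature
    map with a discriminator feature map of matching spatial size."""
    assert decoder_feat.shape[2:] == disc_feat.shape[2:], "spatial dims must match"
    return np.concatenate([decoder_feat, disc_feat], axis=1)

# One hypothetical decoder level: a 64-channel 32x32 map from the expansion
# path, fused with a 32-channel 32x32 map taken from the GAN discriminator.
dec = np.zeros((1, 64, 32, 32))
disc = np.zeros((1, 32, 32, 32))
fused = concat_features(dec, disc)
print(fused.shape)  # (1, 96, 32, 32)
```

In practice the discriminator map would first be resized (e.g. by interpolation or transposed convolution) so its spatial dimensions match the decoder level before concatenation.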
Pages: 1595-1608
Page count: 14
Related Articles
50 records in total
  • [41] ARF-Crack: rotation invariant deep fully convolutional network for pixel-level crack detection
    Fu-Chen Chen
    Mohammad R. Jahanshahi
    Machine Vision and Applications, 2020, 31
  • [42] Pixel-Level Fatigue Crack Segmentation in Large-Scale Images of Steel Structures Using an Encoder-Decoder Network
    Dong, Chuanzhi
    Li, Liangding
    Yan, Jin
    Zhang, Zhiming
    Pan, Hong
    Catbas, Fikret Necati
    SENSORS, 2021, 21 (12)
  • [43] Automatic Pixel-Level Pavement Crack Recognition Using a Deep Feature Aggregation Segmentation Network with a scSE Attention Mechanism Module
    Qiao, Wenting
    Liu, Qiangwei
    Wu, Xiaoguang
    Ma, Biao
    Li, Gang
    SENSORS, 2021, 21 (09)
  • [44] Automatic Pixel-Level Crack Detection on Dam Surface Using Deep Convolutional Network
    Feng, Chuncheng
    Zhang, Hua
    Wang, Haoran
    Wang, Shuang
    Li, Yonglong
    SENSORS, 2020, 20 (07)
  • [45] MiniCrack: A simple but efficient convolutional neural network for pixel-level narrow crack detection
    Lan, Zhi-Xiong
    Dong, Xue-Mei
    COMPUTERS IN INDUSTRY, 2022, 141
  • [46] Semi-Supervised Pixel-Level Scene Text Segmentation by Mutually Guided Network
    Wang, Chuan
    Zhao, Shan
    Zhu, Li
    Luo, Kunming
    Guo, Yanwen
    Wang, Jue
    Liu, Shuaicheng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 8212 - 8221
  • [47] Fisher vector representation based on pixel-level objectness for image classification
    Tuo, Hongya
    Binary Information Press, 1600 (10)
  • [48] DeepUNet: A Deep Fully Convolutional Network for Pixel-Level Sea-Land Segmentation
    Li, Ruirui
    Liu, Wenjie
    Yang, Lei
    Sun, Shihao
    Hu, Wei
    Zhang, Fan
    Li, Wei
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2018, 11 (11) : 3954 - 3962
  • [49] PIXEL-LEVEL TEXTURE SEGMENTATION BASED AV1 VIDEO COMPRESSION
    Chen, Di
    Chen, Qingshuang
    Zhu, Fengqing
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 1622 - 1626
  • [50] Clothing Extraction using Region-based Segmentation and Pixel-level Refinement
    Liu, Zhao-Rui
    Wu, Xiao
    Zhao, Bo
    Peng, Qiang
    2014 IEEE INTERNATIONAL SYMPOSIUM ON MULTIMEDIA (ISM), 2014, : 303 - 310