Research on Rock Crack Classification Based on Acoustic Emission Waveform Feature Extraction Technology

Cited: 0
Authors
Ding, Ziwei [1 ]
Li, Xiaofei [1 ]
Tang, Qingbao [1 ]
Jia, Jindui [1 ]
Gao, Chengdeng [1 ]
Wang, Shaoyi [1 ]
Institutions
[1] Xian Univ Sci & Technol, Coll Energy Engn, Xian 710054, Peoples R China
Funding
National Natural Science Foundation of China
Keywords: (none listed)
DOI: not available
Chinese Library Classification: P3 [Geophysics]; P59 [Geochemistry]
Subject classification codes: 0708; 070902
Abstract
Rock fracture mode has practical significance for the prediction and prevention of engineering disasters, and inverting the fracture mode from the waveform signal not only reduces the experimental error of mechanical strength measurement but also simplifies the type and quantity of source data needed for disaster prediction. To relate crack mode to mechanical strength, the acoustic emission (AE) waveform signal is studied. Six coarse-grained sandstone samples were tested by uniaxial compression, AE monitoring, and scanning electron microscopy. The results show that the number of microhole cracks in rock is positively correlated with tensile-shear cracks and negatively correlated with mechanical strength. A quadratic regression curve of the proportion of shear cracks against mechanical strength fits the data more realistically: when the shear-crack ratio is less than 0.31, the number of shear cracks is positively correlated with mechanical strength, and vice versa. The waveform mutation coefficient k is defined to describe the overall change of the signal, and it is found that an increase in signal mutation has a positive effect on the mechanical strength of rock. The fitting function of cracks and signal mutation near the rock's peak stress divides a two-dimensional plane into six risk zones. Beyond these results, the determination of the number of tensile-shear cracks and their relationship with mechanical strength provides innovative methods and ideas for crack-pattern discrimination and rock burst risk assessment in roadway surrounding rock.
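The quadratic relationship between shear-crack proportion and mechanical strength, with a turning point near a ratio of 0.31, can be sketched with an ordinary least-squares polynomial fit. The data points below are purely hypothetical placeholders (the record does not reproduce the paper's measurements); only the idea of locating the parabola's vertex comes from the abstract:

```python
import numpy as np

# Hypothetical (shear-crack ratio, strength) pairs -- NOT the paper's data;
# shaped so the fitted parabola peaks near the 0.31 ratio cited in the abstract.
ratio = np.array([0.10, 0.18, 0.25, 0.31, 0.38, 0.45, 0.52])
strength = np.array([48.0, 56.0, 61.0, 63.0, 61.5, 57.0, 50.0])  # MPa, illustrative

# Quadratic least-squares fit: strength ~ a*ratio^2 + b*ratio + c
a, b, c = np.polyfit(ratio, strength, deg=2)

# Vertex of the parabola: the ratio at which fitted strength is maximal,
# i.e. where the correlation switches from positive to negative.
turning_point = -b / (2.0 * a)
print(f"fitted turning point: {turning_point:.2f}")
```

With real measurements substituted for the placeholder arrays, the sign of `a` confirms concavity and the vertex gives the empirical threshold ratio.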
Pages: 13
Related Papers (50 records)
  • [31] Application of reassigned wavelet scalogram in feature extraction based on acoustic emission signal
    Liao, Chuanjun
    Li, Xuejun
    Liu, Deshun
    Jixie Gongcheng Xuebao/Journal of Mechanical Engineering, 2009, 45 (02): 273 - 279
  • [32] Audio Feature Extraction and Classification Technology Based on Convolutional Neural Network
    Liu, Zhenfang
    JOURNAL OF ELECTRICAL SYSTEMS, 2024, 20 (09) : 1425 - 1431
  • [33] Application of STFT in feature extraction of acoustic emission signal
    Liao, Chuanjun
    Li, Xuejun
    Liu, Deshun
    Yi Qi Yi Biao Xue Bao/Chinese Journal of Scientific Instrument, 2008, 29 (09): 1862 - 1867
  • [34] A Novel Research on Feature Extraction of Acoustic Targets based on Manifold Learning
    Liu, Hui
    Wang, Wei
    Yang, Jun-an
    Zhen, Liu
    2015 INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND APPLICATIONS (CSA), 2015, : 227 - 231
  • [35] Feature extraction of composite damage on acoustic emission signals
    Qin, H.W., 2013, Universitas Ahmad Dahlan (11):
  • [36] An Acoustic-Based Feature Extraction Method for the Classification of Moving Vehicles in the Wild
    Zhao, Qin
    Guo, Feng
    Zu, Xingshui
    Li, Baoqing
    Yuan, Xiaobing
    IEEE ACCESS, 2019, 7: 73666 - 73674
  • [37] The acoustic emission detection and localisation technology of the pipeline crack
    Wang, Xin-hua
    Jiao, Yu-lin
    Yang, Jie
    Niu, Yong-chao
    INTERNATIONAL JOURNAL OF SENSOR NETWORKS, 2016, 20 (02) : 111 - 118
  • [38] Research Trend on Acoustic Emission Technology
    Yoon, Dong-Jin
    Han, Byeong-Hee
    JOURNAL OF THE KOREAN SOCIETY FOR NONDESTRUCTIVE TESTING, 2011, 31 (05) : 567 - 571
  • [39] Feature Extraction of Binaural Recordings for Acoustic Scene Classification
    Zielinski, Slawomir K.
    Lee, Hyunkook
    PROCEEDINGS OF THE 2018 FEDERATED CONFERENCE ON COMPUTER SCIENCE AND INFORMATION SYSTEMS (FEDCSIS), 2018, : 585 - 588
  • [40] Constrained Learned Feature Extraction for Acoustic Scene Classification
    Zhang, Teng
    Wu, Ji
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2019, 27 (08) : 1216 - 1228