Intelligent segmentation of wildfire region and interpretation of fire front in visible light images from the viewpoint of an unmanned aerial vehicle (UAV)

Times Cited: 0
Authors
Li, Jianwei [1]
Wan, Jiali [1]
Sun, Long [2]
Hu, Tongxin [2]
Li, Xingdong [3]
Zheng, Huiru [4]
Affiliations
[1] Fuzhou Univ, Coll Phys & Informat Engn, Fuzhou 350116, Peoples R China
[2] Northeast Forestry Univ, Coll Forestry, Key Lab Sustainable Forest Ecosyst Management, Harbin 150040, Peoples R China
[3] Northeast Forestry Univ, Coll Mech & Elect Engn, 26 Hexing Rd, Harbin 150040, Peoples R China
[4] Ulster Univ, Sch Comp, Belfast BT15 1ED, Northern Ireland
Funding
China Postdoctoral Science Foundation;
Keywords
Attention mechanism; Convolutional neural network; Deep learning; Wildfire segmentation; Fire front interpretation; Unmanned aerial vehicle; ALGORITHM; SPREAD; YOLO;
DOI
10.1016/j.isprsjprs.2024.12.025
CLC Number
P9 [Physical Geography];
Subject Classification Code
0705 ; 070501 ;
Abstract
Accelerating global warming and intensifying climate anomalies have increased the frequency of wildfires. However, most existing wildfire research focuses primarily on fire identification and prediction, with limited attention paid to the intelligent interpretation of detailed information such as the fire front within a fire region. To address this gap, advance the analysis of fire fronts in UAV-captured visible-light images, and support future calculation of fire behavior parameters, a new method is proposed for the intelligent segmentation of wildfire regions and the interpretation of fire fronts. The method comprises three key steps: deep learning-based fire segmentation, boundary tracking of wildfire regions, and fire front interpretation. Specifically, the YOLOv7-tiny model is enhanced with a Convolutional Block Attention Module (CBAM), which combines channel and spatial attention mechanisms to sharpen the model's focus on wildfire regions and improve segmentation precision. Experimental results show that the proposed method improved detection and segmentation precision by 3.8% and 3.6%, respectively, compared with existing approaches, and achieved an average segmentation frame rate of 64.72 Hz, well above the 30 Hz threshold required for real-time fire segmentation. The method's effectiveness in boundary tracking and fire front interpretation was further validated using real fire image data from an outdoor grassland fire fusion experiment. Additional tests on data from southern New South Wales, Australia, confirmed the robustness of the method in accurately interpreting the fire front. The findings have potential applications in dynamic data-driven forest fire spread modeling and fire digital twinning. The code and dataset are publicly available at https://github.com/makemoneyokk/fire-segmentation-interpretation.git.
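The abstract describes enhancing YOLOv7-tiny with a Convolutional Block Attention Module (CBAM) that combines channel and spatial attention. Below is a minimal, self-contained sketch of such a CBAM block, assuming PyTorch; the reduction ratio, kernel size, class names, and feature-map shape are illustrative assumptions and are not taken from the paper's released code.

```python
# Minimal CBAM sketch (channel attention followed by spatial attention).
# Hyperparameters here are conventional defaults, not the paper's values.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze spatial dimensions and learn a per-channel weighting."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))  # avg-pooled branch
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))   # max-pooled branch
        return torch.sigmoid(avg + mx)


class SpatialAttention(nn.Module):
    """Pool across channels and learn a per-location weighting."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class CBAM(nn.Module):
    """Apply channel attention, then spatial attention, to a feature map."""

    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.ca(x)
        return x * self.sa(x)


if __name__ == "__main__":
    feats = torch.randn(1, 256, 40, 40)   # dummy backbone feature map
    print(CBAM(256)(feats).shape)          # torch.Size([1, 256, 40, 40])
```

In this kind of design the block is typically inserted after a backbone or neck stage so that fire-like regions are re-weighted before the segmentation head; where exactly it is placed inside YOLOv7-tiny is specified in the paper, not in this sketch.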
Pages: 473-489
Page count: 17