Rapid mapping of volcanic eruption building damage: A model based on prior knowledge and few-shot fine-tuning

Cited by: 2
Authors
Wang, Zeyu [1 ,2 ]
Zhang, Feng [1 ,2 ]
Wu, Chuyi [1 ,2 ]
Xia, Junshi [3 ]
Affiliations
[1] Zhejiang Univ, Sch Earth Sci, Hangzhou 310027, Zhejiang, Peoples R China
[2] Zhejiang Prov Key Lab Geog Informat Sci, Hangzhou 310027, Zhejiang, Peoples R China
[3] RIKEN Ctr Adv Intelligence Project, Geoinformat Team, Tokyo 1030027, Japan
Keywords
Building damage; Volcanic eruption; Few-shot transfer learning; Siamese network; ASH; PATAGONIA; IMPACTS; PROXY; SO2
DOI
10.1016/j.jag.2023.103622
CLC number
TP7 [Remote sensing technology]
Discipline codes
081102; 0816; 081602; 083002; 1404
Abstract
Large-scale volcanic eruptions inflict severe damage on facilities and cause significant environmental pollution. Building damage caused by lava flows and volcanic ash coverage can indicate both the extent of infrastructure devastation and the affected population within a region. Machine learning methods for automated building damage identification from remote sensing imagery typically rely on a large number of training samples. However, labeled data scarcity is a common issue in the disaster domain, particularly for volcanic eruptions. To address this, we propose a two-stage workflow for rapid building damage mapping, which combines a building localization model trained on prior knowledge with a damage classification model fine-tuned on a few volcanic eruption-related samples. The classification model uses a CNN-based Siamese network for bi-temporal image feature extraction and comparison, with the backbone initialized with pre-trained ImageNet weights. We conducted building damage classification for single-disaster and cross-disaster domain scenarios in the eruptions of Mount Semeru, Tonga, and St. Vincent, using the visual damage level of each building as ground truth. The results demonstrate that our model identifies building damage efficiently and accurately across different volcanic eruption scenarios, achieving F1-scores above 93% on the 2-way 20-shot tasks. Furthermore, although building samples from different volcanic eruption regions pose cross-domain challenges, our model can adapt to different feature domains when supplemented with a few samples from another volcanic eruption. Additionally, in the case of the Mount Semeru eruption, we gain insights into the potential of building damage statistics for post-eruption environmental assessments.
To further enhance model robustness on mixed-domain samples and multi-level damage classification tasks, issues such as sample bias toward certain disaster sources should be addressed.
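The core of the classification stage described above is a weight-sharing Siamese comparison of pre- and post-event imagery. The following is a minimal, illustrative sketch of that idea only; the linear "encoder", random weights, and two-class head are hypothetical stand-ins for the paper's ImageNet-pretrained CNN backbone and are not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_encoder(img, w_enc):
    # Toy stand-in for the CNN backbone: one linear projection of the
    # flattened patch, with a tanh nonlinearity. Both epochs use the
    # SAME weights -- the defining property of a Siamese network.
    return np.tanh(img.reshape(-1) @ w_enc)

def siamese_logits(pre, post, w_enc, w_cls):
    # Encode both epochs with shared weights, then classify the
    # absolute feature difference into [intact, damaged] scores.
    f_pre = shared_encoder(pre, w_enc)
    f_post = shared_encoder(post, w_enc)
    diff = np.abs(f_pre - f_post)
    return diff @ w_cls

# Toy bi-temporal 8x8 "patches" of one building footprint.
pre = rng.random((8, 8))
post_same = pre.copy()              # building unchanged
post_damaged = rng.random((8, 8))   # appearance changed after eruption

w_enc = rng.normal(size=(64, 16)) * 0.1  # illustrative encoder weights
w_cls = rng.normal(size=(16, 2)) * 0.1   # illustrative 2-class head

# An identical pre/post pair gives a zero feature difference,
# so the logits are exactly zero; a changed pair does not.
print(siamese_logits(pre, post_same, w_enc, w_cls))
print(siamese_logits(pre, post_damaged, w_enc, w_cls))
```

In the paper's few-shot setting, only weights like `w_cls` (and optionally the backbone) would be fine-tuned on the handful of labeled eruption samples, which is what makes the shared-encoder design sample-efficient.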
Pages: 14