Rapid mapping of volcanic eruption building damage: A model based on prior knowledge and few-shot fine-tuning

Cited by: 2
Authors
Wang, Zeyu [1 ,2 ]
Zhang, Feng [1 ,2 ]
Wu, Chuyi [1 ,2 ]
Xia, Junshi [3 ]
Affiliations
[1] Zhejiang Univ, Sch Earth Sci, Hangzhou 310027, Zhejiang, Peoples R China
[2] Zhejiang Prov Key Lab Geog Informat Sci, Hangzhou 310027, Zhejiang, Peoples R China
[3] RIKEN Ctr Adv Intelligence Project, Geoinformat Team, Tokyo 1030027, Japan
Keywords
Building damage; Volcanic eruption; Few-shot transfer learning; Siamese network; ASH; PATAGONIA; IMPACTS; PROXY; SO2;
DOI
10.1016/j.jag.2023.103622
Chinese Library Classification (CLC)
TP7 [Remote sensing technology];
Discipline classification codes
081102 ; 0816 ; 081602 ; 083002 ; 1404 ;
Abstract
Large-scale volcanic eruptions often inflict severe damage on facilities and cause significant environmental pollution. Building damage caused by lava flows and volcanic ash coverage reflects the extent of infrastructure devastation and of the affected population within a region. Machine learning methods for automated building-damage identification from remote sensing imagery typically rely on large numbers of training samples. However, labeled data scarcity is a common issue in the disaster domain, particularly for volcanic eruptions. To address this, we propose a two-stage workflow for rapid building-damage mapping, which combines a building localization model trained on prior knowledge with a damage classification model fine-tuned on a few volcanic eruption-related samples. The classification model uses a CNN-based Siamese network for bi-temporal image feature extraction and comparison, with the backbone initialized with pre-trained ImageNet weights. We conducted building damage classification for single-disaster and cross-disaster domain scenarios in the eruptions of Mount Semeru, Tonga, and St. Vincent; the visual damage level of each building served as ground truth. The results demonstrate that our model identifies building damage efficiently and accurately across different volcanic eruption scenarios, with F1-scores above 93% on the 2-way 20-shot tasks. Furthermore, although building samples from different volcanic eruption regions pose cross-domain challenges, our model can adapt to a new feature domain when supplemented with a few samples from another eruption. Additionally, in the Mount Semeru case, we gain insights into the potential of building damage statistics for post-eruption environmental assessment.
To further enhance model robustness on mixed-domain samples and multi-level damage classification tasks, issues such as sample bias toward certain disaster sources should be addressed.
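The bi-temporal Siamese comparison described in the abstract can be sketched in a deliberately simplified form. This is a toy illustration, not the authors' implementation: the real model uses a CNN backbone with ImageNet pre-trained weights, whereas the shared weights, pixel values, threshold, and function names below are hypothetical stand-ins chosen only to show the shared-encoder/feature-distance idea.

```python
def extract_features(image, weights):
    # Shared-weight "encoder": both Siamese branches apply the SAME
    # parameters to the pre- and post-event image patches.
    return [w * px for w, px in zip(weights, image)]

def classify_damage(pre_image, post_image, weights, threshold=1.0):
    f_pre = extract_features(pre_image, weights)
    f_post = extract_features(post_image, weights)
    # Compare the two branch outputs; a large feature distance
    # signals change between acquisitions, i.e. likely damage.
    distance = sum(abs(a - b) for a, b in zip(f_pre, f_post))
    return "damaged" if distance > threshold else "intact"

weights = [0.5, 1.0, 0.25]   # shared encoder parameters (illustrative)
pre = [0.9, 0.8, 0.7]        # pre-eruption building patch (toy pixels)
post = [0.1, 0.2, 0.1]       # post-eruption patch (e.g. ash-covered)
print(classify_damage(pre, post, weights))  # → "damaged"
```

In the paper's setting, few-shot fine-tuning would adjust the shared encoder and the decision boundary using a handful of labeled eruption samples, rather than the fixed threshold used here.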
Pages: 14
Related Papers
50 records in total
  • [41] ReFine: Re-randomization before Fine-tuning for Cross-domain Few-shot Learning
    Oh, Jaehoon
    Kim, Sungnyun
    Ho, Namgyu
    Kim, Jin-Hwa
    Song, Hwanjun
    Yun, Se-Young
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 4359 - 4363
  • [42] Anchoring Fine-tuning of Sentence Transformer with Semantic Label Information for Efficient Truly Few-shot Classification
    Pauli, Amalie Brogaard
    Derczynski, Leon
    Assent, Ira
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 11254 - 11264
  • [43] Towards Foundation Models and Few-Shot Parameter-Efficient Fine-Tuning for Volumetric Organ Segmentation
    Silva-Rodriguez, Julio
    Dolz, Jose
    Ben Ayed, Ismail
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2023 WORKSHOPS, 2023, 14393 : 213 - 224
  • [44] Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference
    Hu, Shell Xu
    Li, Da
    Stuhmer, Jan
    Kim, Minyoung
    Hospedales, Timothy M.
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 9058 - 9067
  • [45] Cold-Start Data Selection for Better Few-shot Language Model Fine-tuning: A Prompt-based Uncertainty Propagation Approach
    Yu, Yue
    Zhang, Rongzhi
    Xu, Ran
    Zhang, Jieyu
    Shen, Jiaming
    Zhang, Chao
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 2499 - 2521
  • [46] An Empirical Evaluation of the Zero-Shot, Few-Shot, and Traditional Fine-Tuning Based Pretrained Language Models for Sentiment Analysis in Software Engineering
    Shafikuzzaman, Md
    Islam, Md Rakibul
    Rolli, Alex C.
    Akhter, Sharmin
    Seliya, Naeem
    IEEE ACCESS, 2024, 12 : 109714 - 109734
  • [47] LM-BFF-MS: Improving Few-Shot Fine-tuning of Language Models based on Multiple Soft Demonstration Memory
    Park, Eunhwan
    Jeon, Donghyeon
    Kim, Seonhoon
    Kang, Inho
    Na, Seung-Hoon
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022): (SHORT PAPERS), VOL 2, 2022, : 310 - 317
  • [48] LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning
    Abaskohi, Amirhossein
    Rothe, Sascha
    Yaghoobzadeh, Yadollah
    61ST CONFERENCE OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 2, 2023,
  • [49] KNOWLEDGE-BASED FINE-GRAINED CLASSIFICATION FOR FEW-SHOT LEARNING
    Zhao, Jiabao
    Lin, Xin
    Zhou, Jie
    Yang, Jing
    He, Liang
    Yang, Zhaohui
    2020 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2020,
  • [50] Prior knowledge-based DMV model for few-shot and multi-category wood recognition
    Niu, Jiashun
    Zhuang, Pengyan
    Wang, Bingzhen
    You, Guanglin
    Sun, Jianping
    He, Tuo
    WOOD SCIENCE AND TECHNOLOGY, 2024, 58 (04) : 1517 - 1533