RAFNet: Interdomain Representation Alignment and Fine-Tuning for Image Series Classification

Cited: 1
|
Authors
Gong, Maoguo [1 ]
Qiao, Wenyuan [1 ]
Li, Hao [1 ]
Qin, A. K. [2 ]
Gao, Tianqi [1 ]
Luo, Tianshi [1 ]
Xing, Lining [1 ]
Affiliations
[1] Xidian Univ, Sch Elect Engn, Key Lab Collaborat Intelligence Syst, Minist Educ, Xian 710071, Peoples R China
[2] Swinburne Univ Technol, Dept Comp Technol, Hawthorn, Vic 3122, Australia
Funding
National Natural Science Foundation of China;
Keywords
Domain adaptation (DA); fine-tuning; image series classification; remote sensing; CHANGE VECTOR ANALYSIS; LAND-COVER MAPS; TIME-SERIES; DOMAIN ADAPTATION;
DOI
10.1109/TGRS.2023.3302430
CLC Number
P3 [Geophysics]; P59 [Geochemistry];
Discipline Code
0708; 070902;
Abstract
Classification of remote sensing image series, which differ in quality and detail, is important for land-cover analysis, yet it is expensive and time-consuming because of manual annotation. Fortunately, domain adaptation (DA) offers an effective solution to this problem. However, traditional DA methods often lose information while aligning the two distributions, which degrades the resulting classification. To alleviate this issue, an interdomain representation alignment and fine-tuning-based network (RAFNet) is proposed for image series classification. Interdomain representation alignment, realized by a variational autoencoder (VAE) trained on both source and target data, reduces the discrepancy between the marginal distributions of the two domains while preserving more data properties. RAFNet, which fuses the multiscale aligned representations, then performs classification in the target domain after supervised training in the source domain. Specifically, the multiscale aligned representations of RAFNet are obtained by duplicating the frozen encoder of the VAE. An information-based loss function is then designed to fine-tune RAFNet, in which both the unchanged information and the changed information implied in change maps are fully exploited to learn more discriminative features and to improve generalization to the target domain. Finally, experiments on three datasets validate the effectiveness of RAFNet, which achieves considerable segmentation accuracy even though no annotated information is available for the target data.
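The pipeline described in the abstract — a frozen VAE encoder duplicated to extract multiscale representations, which are fused before feeding a trainable classification head — can be illustrated with a minimal NumPy toy. All shapes, weights, and the concatenation-based fusion below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the frozen VAE encoder: two fixed projection stages.
# After the alignment phase the encoder weights are frozen, so they are
# plain constants here rather than trainable parameters.
W1 = rng.standard_normal((8, 16))   # stage 1: 8 input bands -> 16 features
W2 = rng.standard_normal((16, 32))  # stage 2: 16 -> 32 features

def frozen_encoder(x):
    """Return representations at two scales from the frozen encoder."""
    h1 = np.tanh(x @ W1)            # scale-1 features, shape (n, 16)
    h2 = np.tanh(h1 @ W2)           # scale-2 features, shape (n, 32)
    return h1, h2

def fuse(h1, h2):
    """Fuse multiscale aligned representations (here: concatenation)."""
    return np.concatenate([h1, h2], axis=1)   # shape (n, 48)

# A source-domain batch passes through a duplicate of the frozen encoder;
# only the classification head on top of `features` would be fine-tuned.
x_src = rng.standard_normal((4, 8))           # 4 pixels, 8 spectral bands
features = fuse(*frozen_encoder(x_src))
print(features.shape)                          # (4, 48)
```

Because the encoder is frozen, gradients during fine-tuning would flow only into the classification head, which is what lets the aligned representations be reused unchanged in the target domain.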
Pages: 16
Related Papers
(50 records)
  • [41] Incorporating Scenario Knowledge into A Unified Fine-tuning Architecture for Event Representation
    Zheng, Jianming
    Cai, Fei
    Chen, Honghui
    PROCEEDINGS OF THE 43RD INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '20), 2020, : 249 - 258
  • [42] COMPRESSING DEEP CNNS USING BASIS REPRESENTATION AND SPECTRAL FINE-TUNING
    Tayyab, Muhammad
    Khan, Fahad Ahmad
    Mahalanobis, Abhijit
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3537 - 3541
  • [43] Fine-tuning pre-trained neural networks for medical image classification in small clinical datasets
    Newton Spolaôr
    Huei Diana Lee
    Ana Isabel Mendes
    Conceição Veloso Nogueira
    Antonio Rafael Sabino Parmezan
    Weber Shoity Resende Takaki
    Claudio Saddy Rodrigues Coy
    Feng Chung Wu
    Rui Fonseca-Pinto
    Multimedia Tools and Applications, 2024, 83 (9) : 27305 - 27329
  • [44] An Effective One-Shot Neural Architecture Search Method with Supernet Fine-Tuning for Image Classification
    Yuan, Gonglin
    Xue, Bing
    Zhang, Mengjie
    PROCEEDINGS OF THE 2023 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE, GECCO 2023, 2023, : 615 - 623
  • [46] Time series classification with their image representation
    Homenda, Wladyslaw
    Jastrzebska, Agnieszka
    Pedrycz, Witold
    Wrzesien, Mariusz
    NEUROCOMPUTING, 2024, 573
  • [47] Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment
    Wang, Jiongxiao
    Li, Jiazhao
    Li, Yiquan
    Qi, Xiangyu
    Hu, Junjie
    Li, Yixuan
    McDaniel, Patrick
    Chen, Muhao
    Li, Bo
    Xiao, Chaowei
    arXiv,
  • [48] Using Optimal Transport as Alignment Objective for fine-tuning Multilingual Contextualized Embeddings
    Alqahtani, Sawsan
    Lalwani, Garima
    Zhang, Yi
    Romeo, Salvatore
    Mansour, Saab
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 3904 - 3919
  • [49] An Adaptive Approach for Anomaly Detector Selection and Fine-Tuning in Time Series
    Ye, Hui
    Ma, Xiaopeng
    Pan, Qingfeng
    Fang, Huaqiang
    Xiang, Hang
    Shao, Tongzhen
    1ST INTERNATIONAL WORKSHOP ON DEEP LEARNING PRACTICE FOR HIGH-DIMENSIONAL SPARSE DATA WITH KDD (DLP-KDD 2019), 2019,
  • [50] FINE-TUNING TRANSFER LEARNING MODEL IN WOVEN FABRIC PATTERN CLASSIFICATION
    Noprisson H.
    Ermatita E.
    Abdiansah A.
    Ayumi V.
    Purba M.
    Setiawan H.
    International Journal of Innovative Computing, Information and Control, 2022, 18 (06): : 1885 - 1894