Explore Innovative Depth Vision Models with Domain Adaptation

Cited: 0
Authors
Xu, Wenchao [1 ]
Wang, Yangxu [2 ]
Affiliations
[1] Nanfang Coll Guangzhou, Sch Elect & Comp Engn, Guangzhou 510970, Conghua, Peoples R China
[2] Software Engn Inst Guangzhou, Dept Network Technol, Guangzhou 510990, Conghua, Peoples R China
Keywords
Deep learning; neural network; domain adaptation; lightweight; regularization techniques
DOI
10.14569/IJACSA.2024.0150151
CLC Number
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
In recent years, deep learning has garnered widespread attention for graph-structured data. Nevertheless, because labeled graph data is costly to collect, domain adaptation becomes particularly crucial in supervised graph learning tasks. The performance of existing methods may degrade when training and testing data differ, especially in challenging scenarios such as remote sensing image analysis. This study explores an approach to achieving high-quality domain adaptation without an explicit adaptation step. The proposed Efficient Lightweight Aggregation Network (ELANet) addresses domain adaptation challenges in graph-structured data through an efficient lightweight architecture and regularization techniques. In experiments on real datasets, ELANet demonstrated robust domain adaptability and generality, performing exceptionally well in cross-domain settings of remote sensing images. The research further indicates that regularization techniques play a crucial role in mitigating the model's sensitivity to domain differences, especially when combined with a module that adjusts feature weights in response to redefined features. Moreover, under the same training and validation set configurations, the model achieves better training outcomes with appropriate data transformation strategies. The contributions of this research extend beyond the agricultural domain, showing promising results in various object detection scenarios and advancing domain adaptation research.
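The record does not include implementation details, so purely as an illustrative sketch of the kinds of components the abstract names (a lightweight architecture, regularization, and a module that adjusts feature weights), the following PyTorch example shows a squeeze-and-excitation-style channel-reweighting block inside a depthwise-separable convolution block. The class names FeatureReweight and LightweightBlock and every hyperparameter are hypothetical assumptions for illustration, not the authors' actual ELANet code.

```python
# A minimal sketch, assuming a squeeze-and-excitation-style design; the real
# ELANet module is not published in this record, so all names, layer choices,
# and sizes below are hypothetical.
import torch
import torch.nn as nn

class FeatureReweight(nn.Module):
    """Hypothetical module that adjusts per-channel feature weights."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                    # weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                         # reweight the feature maps

class LightweightBlock(nn.Module):
    """Depthwise-separable conv block with dropout as a regularizer."""
    def __init__(self, in_ch: int, out_ch: int, p_drop: float = 0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch),  # depthwise
            nn.Conv2d(in_ch, out_ch, 1),                          # pointwise
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Dropout2d(p_drop),            # regularization
        )
        self.reweight = FeatureReweight(out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.reweight(self.body(x))

# Weight decay is another common regularizer of the kind the abstract credits:
model = nn.Sequential(LightweightBlock(3, 32), LightweightBlock(32, 64))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)
y = model(torch.randn(2, 3, 64, 64))         # sanity check: (2, 64, 64, 64)
```

Here Dropout2d and the weight_decay term stand in for the "regularization techniques" the abstract says reduce sensitivity to domain differences; the actual ELANet design may differ substantially.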
Pages: 533-539
Page count: 7