Core-View Contrastive Learning Network for Building Lightweight Cross-Domain Consultation System

Cited by: 0
Authors
Zheng, Jiabin [1 ]
Xu, Fangyi [2 ]
Chen, Wei [1 ]
Fang, Zihao [3 ]
Yao, Jiahui [4 ]
Affiliations
[1] Peking Univ, Sch Comp Sci, Beijing 100871, Peoples R China
[2] Commun Univ China, Inst Commun Studies, Beijing 100024, Peoples R China
[3] Hong Kong Univ Sci & Technol, Sch Sci, Hong Kong, Peoples R China
[4] Peking Univ, Inst Social Sci Survey, Beijing 100871, Peoples R China
Source
IEEE ACCESS | 2024, Vol. 12
Funding
National Social Science Fund of China
Keywords
Task analysis; Semantics; Self-supervised learning; Adaptation models; Computational modeling; Transforms; Transfer learning; Cross-domain consultation system; cross-domain text matching; multi-view learning; contrastive learning; sentence semantic representation;
DOI
10.1109/ACCESS.2024.3395330
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Cross-domain consultation systems have become essential in many critical applications, such as online citizen complaint systems. Addressing complaints with distinct orality characteristics often requires retrieving and integrating knowledge from diverse professional domains, which constitutes a typical cross-domain problem. The prevailing approach of applying generative large language models to this problem, however, suffers from large model scale, hallucination, and limited interpretability. To address these challenges, we propose the Core-View Contrastive Learning (CVCL) network. By combining contrastive learning with an integrated core-adaptive augmentation module, the CVCL network achieves accurate cross-domain information matching. Our objective is to build a lightweight, precise, and interpretable cross-domain consultation system that overcomes the limitations large language models face on such tasks. Empirical validation on real-world datasets demonstrates the effectiveness of the proposed method: it matches the accuracy of large language models on text-matching tasks and surpasses the best baseline model by over 24 percentage points in F1-score on classification tasks. In addition, our lightweight model retains 96% of the full model's performance while using only 6% of its parameters.
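For illustration, the following is a minimal, hypothetical sketch of the in-batch contrastive (InfoNCE-style) objective that contrastive sentence-matching methods of this kind generally build on. It is not the paper's CVCL implementation or its core-adaptive augmentation module; the encoder dimension, batch size, and temperature value are illustrative assumptions.

# Hypothetical sketch: an in-batch contrastive (InfoNCE-style) loss for matching
# complaint sentences to professional-domain documents. All names and values here
# (embedding dimension, batch size, temperature) are illustrative assumptions,
# not the CVCL method itself.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb: torch.Tensor,
                              doc_emb: torch.Tensor,
                              temperature: float = 0.05) -> torch.Tensor:
    """query_emb, doc_emb: (batch, dim) sentence embeddings; the i-th query's
    positive is the i-th document, and all other documents in the batch act as negatives."""
    q = F.normalize(query_emb, dim=-1)   # unit-normalize so dot products are cosine similarities
    d = F.normalize(doc_emb, dim=-1)
    logits = q @ d.t() / temperature     # (batch, batch) similarity matrix
    targets = torch.arange(q.size(0), device=q.device)  # positive pairs lie on the diagonal
    return F.cross_entropy(logits, targets)

if __name__ == "__main__":
    # Random embeddings stand in for outputs of any sentence encoder.
    queries = torch.randn(8, 768)   # e.g., citizen-complaint sentences
    docs = torch.randn(8, 768)      # e.g., candidate professional-domain documents
    print(in_batch_contrastive_loss(queries, docs).item())

In practice, both sets of embeddings would come from a shared sentence encoder, and augmented views of the same sentence could be added as extra positives, which is broadly the point at which an augmentation module such as the one described in the abstract would intervene.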
Pages: 65615-65629
Page count: 15