Multi-Source Domain Adaptation for Visual Sentiment Classification

Citations: 0
Authors
Lin, Chuang [1 ]
Zhao, Sicheng [2 ]
Meng, Lei [1 ]
Chua, Tat-Seng [1 ]
Affiliations
[1] Natl Univ Singapore, NExT, Singapore, Singapore
[2] Univ Calif Berkeley, Berkeley, CA 94720 USA
Funding
National Research Foundation, Singapore
DOI
Not available
CLC Number
TP18 (Artificial Intelligence Theory)
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Existing domain adaptation methods for visual sentiment classification are typically investigated under the single-source scenario, where knowledge learned from a source domain with sufficient labeled data is transferred to a target domain with loosely labeled or unlabeled data. In practice, however, data from a single source domain are usually limited in volume and can hardly cover the characteristics of the target domain. In this paper, we propose a novel multi-source domain adaptation (MDA) method, termed Multi-source Sentiment Generative Adversarial Network (MSGAN), for visual sentiment classification. To handle data from multiple source domains, it learns to find a unified sentiment latent space where data from both the source and target domains share a similar distribution. This is achieved via cycle-consistent adversarial learning in an end-to-end manner. Extensive experiments conducted on four benchmark datasets demonstrate that MSGAN significantly outperforms state-of-the-art MDA approaches for visual sentiment classification.
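The abstract combines three ingredients: a shared encoder that maps images from every source domain and the target domain into one sentiment latent space, an adversarial term that aligns the domain distributions in that space, and a cycle-consistency term. The following is a schematic numpy sketch of how such a combined objective can be composed; all function names, shapes, and loss weights are illustrative assumptions, not the authors' MSGAN implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Shared encoder: project inputs into the sentiment latent space."""
    return np.tanh(x @ W)

def softmax(a):
    # numerically stable row-wise softmax
    e = np.exp(a - a.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def adversarial_loss(z, V, n_domains):
    """Encoder-side adversarial term: push the discriminator's domain
    posterior toward the uniform distribution (maximal confusion)."""
    p = softmax(z @ V)                       # discriminator's domain prediction
    uniform = np.full_like(p, 1.0 / n_domains)
    return float(np.mean(np.sum((p - uniform) ** 2, axis=1)))

def cycle_loss(x, W, U):
    """Cycle-consistency term: reconstruct the input from its latent code."""
    x_rec = encode(x, W) @ U                 # toy linear decoder
    return float(np.mean((x - x_rec) ** 2))

def total_loss(cls_loss, adv, cyc, lam_adv=1.0, lam_cyc=10.0):
    """Weighted sum of the three objectives; the weights are assumptions."""
    return cls_loss + lam_adv * adv + lam_cyc * cyc
```

In an actual adversarial setup the discriminator parameters (here `V`) would be trained to classify domains correctly while the encoder is trained against it; a zero discriminator already predicts uniformly, so its adversarial term vanishes.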
Pages: 2661-2668 (8 pages)
Related Papers (50 total)
  • [1] Multi-source Domain Adaptation for Sentiment Classification with Granger Causal Inference
    Yang, Min
    Shen, Ying
    Chen, Xiaojun
    Li, Chengming
    [J]. PROCEEDINGS OF THE 43RD INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '20), 2020, : 1913 - 1916
  • [2] Multi-source domain adaptation with joint learning for cross-domain sentiment classification
    Zhao, Chuanjun
    Wang, Suge
    Li, Deyu
    [J]. KNOWLEDGE-BASED SYSTEMS, 2020, 191
  • [3] Contrastive transformer based domain adaptation for multi-source cross-domain sentiment classification
    Fu, Yanping
    Liu, Yun
    [J]. KNOWLEDGE-BASED SYSTEMS, 2022, 245
  • [4] Multi-source domain adaptation for image classification
    Karimpour, Morvarid
    Noori Saray, Shiva
    Tahmoresnezhad, Jafar
    Pourmahmood Aghababa, Mohammad
    [J]. MACHINE VISION AND APPLICATIONS, 2020, 31 (06)
  • [5] Multi-source domain adaptation for image classification
    Morvarid Karimpour
    Shiva Noori Saray
    Jafar Tahmoresnezhad
    Mohammad Pourmahmood Aghababa
    [J]. Machine Vision and Applications, 2020, 31
  • [6] Multi-source based approach for Visual Domain Adaptation
    Tiwari, Mrinalini
    Sanodiya, Rakesh Kumar
    Mathew, Jimson
    Saha, Sriparna
    [J]. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [7] Iterative Refinement for Multi-Source Visual Domain Adaptation
    Wu, Hanrui
    Yan, Yuguang
    Lin, Guosheng
    Yang, Min
    Ng, Michael K.
    Wu, Qingyao
    [J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2022, 34 (06) : 2810 - 2823
  • [8] Universal multi-Source domain adaptation for image classification
    Yin, Yueming
    Yang, Zhen
    Hu, Haifeng
    Wu, Xiaofu
    [J]. PATTERN RECOGNITION, 2022, 121
  • [9] Adversarial Training Based Multi-Source Unsupervised Domain Adaptation for Sentiment Analysis
    Dai, Yong
    Liu, Jian
    Ren, Xiancong
    Xu, Zenglin
    [J]. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 7618 - 7625
  • [10] Multi-Source Distilling Domain Adaptation
    Zhao, Sicheng
    Wang, Guangzhi
    Zhang, Shanghang
    Gu, Yang
    Li, Yaxian
    Song, Zhichao
    Xu, Pengfei
    Hu, Runbo
    Chai, Hua
    Keutzer, Kurt
    [J]. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 12975 - 12983