Modality-specific and -shared Contrastive Learning for Sentiment Analysis
Cited by: 0
Authors:
Liu, Dahuang [1]; You, Jiuxiang [1]; Xie, Guobo [1]; Lee, Lap-Kei [2]; Wang, Fu Lee [2]; Yang, Zhenguo [1]
Affiliations:
[1] Guangdong Univ Technol, Sch Comp Sci & Technol, Guangzhou, Peoples R China
[2] Hong Kong Metropolitan Univ, Sch Sci & Technol, Hong Kong, Peoples R China
Abstract:
In this paper, we propose a two-stage network with modality-specific and -shared contrastive learning (MMCL) for multimodal sentiment analysis. MMCL comprises a category-aware modality-specific contrastive (CMC) module and a self-decoupled modality-shared contrastive (SMC) module. In the first stage, the CMC module guides the encoders to extract modality-specific representations by constructing positive and negative pairs according to sample categories. In the second stage, the SMC module guides the encoders to extract modality-shared representations by constructing positive and negative pairs based on modalities and decoupling the self-contrast of all modalities. In both modules, we leverage self-modulation factors to focus more on hard positive pairs by assigning loss weights to positive pairs depending on their distance. In particular, we introduce a dynamic routing algorithm to cluster the inputs of the contrastive modules during training, where a gradient stopping strategy is utilized to isolate the backpropagation processes of the CMC and SMC modules. Extensive experiments on the CMU-MOSI and CMU-MOSEI datasets show that MMCL achieves state-of-the-art performance.
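The category-aware contrastive idea behind the CMC module can be sketched as a supervised contrastive loss in which positive pairs share a sentiment category and each positive pair is reweighted by a self-modulation factor so that harder (more distant) positives contribute more. The sketch below is an illustrative simplification, not the authors' implementation: the function name, the `1 - cosine_similarity` form of the modulation factor, and the single-view batch setting are all assumptions.

```python
import numpy as np

def modulated_supcon_loss(embeddings, labels, temperature=0.5):
    """Category-aware contrastive loss with a self-modulation factor.

    Samples with the same label form positive pairs; each pair's loss
    term is weighted by (1 - cosine similarity), so hard positives
    (same category, far apart) are emphasized. This is a hypothetical
    simplification of the CMC module, not the paper's exact loss.
    """
    # L2-normalize embeddings so dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature  # temperature-scaled similarity matrix

    n = len(labels)
    total, count = 0.0, 0
    for i in range(n):
        # denominator sums over all samples except the anchor itself
        others = np.arange(n) != i
        log_denom = np.log(np.exp(sim[i, others]).sum())
        for j in range(n):
            if j != i and labels[j] == labels[i]:
                # self-modulation: distant positives get a larger weight
                w = 1.0 - float(z[i] @ z[j])
                # standard InfoNCE term for the positive pair (i, j)
                total += w * (log_denom - sim[i, j])
                count += 1
    return total / max(count, 1)
```

A batch whose embeddings cluster by category yields a smaller loss than one where same-category samples are scattered, which is the signal that drives the encoders toward modality-specific, category-discriminative representations.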