Exploring Multi-lingual, Multi-task, and Adversarial Learning for Low-resource Sentiment Analysis

Cited by: 6
Authors
Mamta [1 ]
Ekbal, Asif [1 ]
Bhattacharyya, Pushpak [1 ]
Institution
[1] Indian Institute of Technology Patna, Patna, Bihar, India
Keywords
Sentiment analysis; low-resource language; multi-task; multi-lingual; adversarial training; PRODUCTS; LEXICON; SET;
DOI
10.1145/3514498
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Deep learning has become the most prominent approach to solving various Natural Language Processing (NLP) tasks, including sentiment analysis. However, these techniques require a considerably large annotated corpus, which is not easy to obtain for most languages, especially in low-resource settings. In this article, we propose a deep multi-task multi-lingual adversarial framework that addresses the resource scarcity of sentiment analysis by leveraging useful and relevant knowledge from a high-resource language. To transfer knowledge between languages, both are mapped to a shared semantic space using cross-lingual word embeddings. We evaluate the proposed architecture on a low-resource language, Hindi, using English as the high-resource language. Experiments show that our model achieves an accuracy of 60.09% on the movie review dataset and 72.14% on the product review dataset. The effectiveness of the proposed approach is demonstrated by significant performance gains over state-of-the-art systems and translation-based baselines.
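The cross-lingual mapping described in the abstract can be illustrated with a common technique from the cross-lingual embedding literature: orthogonal Procrustes alignment, which learns a rotation that maps source-language word vectors onto their target-language counterparts given a small bilingual seed lexicon. This is a minimal sketch of the general idea, not necessarily the exact mapping procedure used in the paper; the toy data below is synthetic.

```python
import numpy as np

def procrustes_align(X, Y):
    """Learn an orthogonal map W minimizing ||X @ W - Y||_F.

    X: (n, d) source-language embeddings for n seed-lexicon word pairs.
    Y: (n, d) target-language embeddings for the same word pairs.
    Returns the (d, d) orthogonal Procrustes solution W = U @ Vt,
    where U, Vt come from the SVD of X.T @ Y.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy demo: recover a known rotation from paired "embeddings".
rng = np.random.default_rng(0)
d = 4
X = rng.standard_normal((50, d))          # synthetic source vectors
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # ground-truth rotation
Y = X @ Q                                 # target vectors = rotated source
W = procrustes_align(X, Y)
print(np.allclose(W, Q, atol=1e-6))       # learned map recovers the rotation
```

Once such a map is learned, source-language inputs can be projected into the target space (`X @ W`) so that a single model sees both languages in one shared semantic space, which is the precondition for the multi-task adversarial transfer the abstract describes.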
Pages: 19