Generalized Funnelling: Ensemble Learning and Heterogeneous Document Embeddings for Cross-Lingual Text Classification

Cited by: 1
Authors
Moreo, Alejandro [1 ]
Pedrotti, Andrea [1 ]
Sebastiani, Fabrizio [1 ]
Affiliation
[1] CNR, Ist Sci & Tecnol Informaz, Via Giuseppe Moruzzi 1, I-56124 Pisa, Italy
Funding
European Union Horizon 2020;
Keywords
Transfer learning; heterogeneous transfer learning; cross-lingual text classification; ensemble learning; word embeddings; representation
DOI
10.1145/3544104
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Funnelling (FUN) is a recently proposed method for cross-lingual text classification (CLTC) based on a two-tier learning ensemble for heterogeneous transfer learning (HTL). In this ensemble method, 1st-tier classifiers, each working on a different, language-dependent feature space, return a vector of calibrated posterior probabilities (with one dimension for each class) for each document, and the final classification decision is taken by a meta-classifier that uses this vector as its input. The meta-classifier can thus exploit class-class correlations, and this (among other things) gives FUN an edge over CLTC systems in which these correlations cannot be brought to bear. In this article, we describe Generalized FUNnelling (GFUN), a generalization of FUN consisting of an HTL architecture in which 1st-tier components can be arbitrary view-generating functions, i.e., language-dependent functions that each produce a language-independent representation ("view") of the (monolingual) document. We describe an instance of GFUN in which the meta-classifier receives as input a vector of calibrated posterior probabilities (as in FUN) aggregated with other embedded representations that embody other types of correlations, such as word-class correlations (as encoded by Word-Class Embeddings), word-word correlations (as encoded by Multilingual Unsupervised or Supervised Embeddings), and word-context correlations (as encoded by multilingual BERT). Through experiments on two large, standard datasets for multilingual multilabel text classification, we show that this instance of GFUN substantially improves over FUN and over state-of-the-art baselines. Our code implementing GFUN is publicly available.
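To make the two-tier architecture described in the abstract concrete, below is a minimal sketch of a Funnelling-style ensemble in Python with scikit-learn. This is not the authors' GFUN implementation (their code is publicly available): the class name FunnellingSketch is hypothetical, and, for brevity, the meta-classifier here is trained on in-sample posteriors, whereas the published method generates the meta-classifier's training posteriors via k-fold cross-validation.

import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC


class FunnellingSketch:
    """Hypothetical, simplified two-tier Funnelling-style ensemble."""

    def __init__(self):
        self.vectorizers = {}  # one language-dependent feature space per language
        self.first_tier = {}   # one calibrated classifier per language
        # The meta-classifier operates on the shared, language-independent
        # space of posterior probabilities, where class-class correlations
        # across all languages become visible.
        self.meta = OneVsRestClassifier(LinearSVC())

    def fit(self, docs_by_lang, labels_by_lang):
        """docs_by_lang: {lang: list of str};
        labels_by_lang: {lang: binary array of shape (n_docs, n_classes)}."""
        views, labels = [], []
        for lang, docs in docs_by_lang.items():
            self.vectorizers[lang] = TfidfVectorizer(sublinear_tf=True)
            X = self.vectorizers[lang].fit_transform(docs)
            # 1st tier: per-language one-vs-rest SVMs whose decision scores
            # are mapped to calibrated posterior probabilities.
            clf = OneVsRestClassifier(CalibratedClassifierCV(LinearSVC(), cv=5))
            clf.fit(X, labels_by_lang[lang])
            self.first_tier[lang] = clf
            # The vector of posteriors (one dimension per class) is the
            # language-independent "view" of each document. NOTE: reusing
            # in-sample posteriors is a simplification of the published method.
            views.append(clf.predict_proba(X))
            labels.append(labels_by_lang[lang])
        # 2nd tier: a single meta-classifier trained on the views of
        # documents from all languages at once.
        self.meta.fit(np.vstack(views), np.vstack(labels))
        return self

    def predict(self, docs, lang):
        X = self.vectorizers[lang].transform(docs)
        return self.meta.predict(self.first_tier[lang].predict_proba(X))

GFUN generalizes the first tier to arbitrary view-generating functions, so the posterior vector computed above would be aggregated with additional views (e.g., document representations derived from Word-Class Embeddings, MUSE, or multilingual BERT) before being fed to the meta-classifier.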
Pages: 37