Article

Generalized Funnelling: Ensemble Learning and Heterogeneous Document Embeddings for Cross-Lingual Text Classification

Journal

ACM Transactions on Information Systems
Volume 41, Issue 2, Pages -

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3544104

Keywords

Transfer learning; heterogeneous transfer learning; cross-lingual text classification; ensemble learning; word embeddings


Funnelling is a method for cross-lingual text classification based on a two-tier learning ensemble. It uses a meta-classifier to exploit class-class correlations, an advantage over systems in which these correlations cannot be brought to bear.
Funnelling (FUN) is a recently proposed method for cross-lingual text classification (CLTC) based on a two-tier learning ensemble for heterogeneous transfer learning (HTL). In this ensemble method, 1st-tier classifiers, each working on a different and language-dependent feature space, return a vector of calibrated posterior probabilities (with one dimension for each class) for each document, and the final classification decision is taken by a meta-classifier that uses this vector as its input. The meta-classifier can thus exploit class-class correlations, and this (among other things) gives FUN an edge over CLTC systems in which these correlations cannot be brought to bear. In this article, we describe Generalized FUNnelling (GFUN), a generalization of FUN consisting of an HTL architecture in which 1st-tier components can be arbitrary view-generating FUNctions, i.e., language-dependent FUNctions that each produce a language-independent representation (view) of the (monolingual) document. We describe an instance of GFUN in which the meta-classifier receives as input a vector of calibrated posterior probabilities (as in FUN) combined with other embedded representations that embody other types of correlations, such as word-class correlations (as encoded by Word-Class Embeddings), word-word correlations (as encoded by Multilingual Unsupervised or Supervised Embeddings), and word-context correlations (as encoded by multilingual BERT). We show that this instance of GFUN substantially improves over FUN and over state-of-the-art baselines by reporting experimental results obtained on two large, standard datasets for multilingual multilabel text classification. Our code that implements GFUN is publicly available.
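The two-tier idea described above can be sketched with scikit-learn. This is a minimal, hedged illustration under assumed names (`train_funnelling`, `predict`), not the authors' implementation: it uses a single-label setting and TF-IDF features for simplicity, whereas GFUN itself is multilabel and aggregates several additional views.

```python
# Minimal sketch of the funnelling two-tier ensemble (illustrative only):
# each language gets its own calibrated 1st-tier classifier over a
# language-dependent feature space; the meta-classifier is trained on the
# calibrated posterior-probability vectors, which are language-independent.
# Assumes all languages share the same set of classes, as in the paper.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import CalibratedClassifierCV

def train_funnelling(docs_by_lang, labels_by_lang):
    first_tier, meta_X, meta_y = {}, [], []
    for lang, docs in docs_by_lang.items():
        vec = TfidfVectorizer()                # language-dependent feature space
        X = vec.fit_transform(docs)
        clf = CalibratedClassifierCV(LogisticRegression(max_iter=1000), cv=2)
        clf.fit(X, labels_by_lang[lang])
        first_tier[lang] = (vec, clf)
        meta_X.append(clf.predict_proba(X))    # language-independent view
        meta_y.extend(labels_by_lang[lang])
    meta = LogisticRegression(max_iter=1000)
    meta.fit(np.vstack(meta_X), meta_y)        # can exploit class-class correlations
    return first_tier, meta

def predict(first_tier, meta, lang, docs):
    vec, clf = first_tier[lang]
    return meta.predict(clf.predict_proba(vec.transform(docs)))
```

In this sketch the meta-classifier sees only posterior probabilities; the GFUN instance described in the abstract would concatenate further views (Word-Class Embeddings, MUSE, multilingual BERT) to this vector before meta-classification.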


