Article

Prototype Theory Meets Word Embedding: A Novel Approach for Text Categorization via Granular Computing

Journal

COGNITIVE COMPUTATION
Volume 15, Issue 3, Pages 976-997

Publisher

SPRINGER
DOI: 10.1007/s12559-023-10132-9

Keywords

Conceptual spaces; Granular computing; Text classification; Word Embedding; Conceptual embedding; Long Short Term Memory

Abstract

This paper presents a novel framework for solving text categorization tasks using the Conceptual Space Theory, Granular Computing approach, and Machine Learning. The authors propose a concept-based representation of text and compare the performance of neural embedding techniques and LSA in knowledge discovery applications.
The problem of how the brain represents and interprets information coming from the senses has plagued scientists for decades. The same problem, viewed from a different perspective, holds for automated Pattern Recognition systems. Specifically, solving various NLP tasks requires an ever better and richer semantic representation of text as a set of features, and researchers continuously provide a plethora of techniques for embedding text in algebraic spaces. These spaces are well suited to be conceived as conceptual spaces in light of Gärdenfors's Conceptual Space theory, which, within the Cognitive Science paradigm, seeks a geometrization of thought bridging the gap between a lower associative level and a higher symbolic level, in which information is organized and processed and where inductive reasoning is appropriate. Granular Computing offers a toolbox for granulating text so that it can be represented by entities more abstract than words, yielding a good hierarchical representation of the text embedded in an algebraic space and driving Machine Learning applications, specifically text mining tasks. In this paper, Conceptual Space Theory, the Granular Computing approach, and Machine Learning are bound into a novel common framework for solving text categorization tasks, using both standard classifiers suited to fixed-length feature vectors and a Recurrent Neural Network (RNN) - an LSTM - able to deal with sequences. Instead of working with word vectors, the algorithms process more abstract entities (concepts): in a first approach, patterns are obtained by constructing a symbolic histogram from a suitable set of information granules, representing a document as a distribution of concepts. For the RNN case, as a further novelty, a text is represented as a random walk over prototypes within the conceptual space synthesized through a suitable text embedding procedure.
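The symbolic-histogram idea sketched above can be illustrated with a minimal example. This is a hypothetical sketch, not the paper's actual procedure: here word embeddings are granulated into concept prototypes with a plain k-means, and a document becomes a normalized histogram of nearest-prototype counts; the function names and the choice of k-means are illustrative assumptions.

```python
import numpy as np

def kmeans(vectors, k, iters=20, seed=0):
    """Plain k-means granulation (illustrative; the paper's actual
    information-granule synthesis may differ)."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from randomly chosen word vectors.
    centroids = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(vectors[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned vectors.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = vectors[labels == j].mean(axis=0)
    return centroids

def symbolic_histogram(doc_vectors, prototypes):
    """Represent a document as a distribution over concept prototypes:
    count how many of its word vectors fall nearest to each prototype."""
    dists = np.linalg.norm(doc_vectors[:, None] - prototypes[None], axis=2)
    labels = dists.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(prototypes)).astype(float)
    return hist / hist.sum()  # normalize into a concept distribution
```

A document's histogram is then a fixed-length vector that any standard classifier can consume, regardless of the document's length.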
A comparison of performance and a critical discussion are offered for both a neural embedding technique and the well-known LSA, showing how the conceptual level also leads to Knowledge Discovery applications.
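The sequence-based representation for the LSTM, a walk over prototypes in the conceptual space, can likewise be sketched. This is an assumption-laden illustration (function names hypothetical): each word vector is replaced by its nearest concept prototype, so the network sees a sequence of concept coordinates rather than raw word vectors.

```python
import numpy as np

def prototype_walk(doc_vectors, prototypes):
    """Map each word vector in a document to the coordinates of its
    nearest concept prototype, yielding a sequence over concepts
    (a walk across the conceptual space) suitable as LSTM input."""
    dists = np.linalg.norm(doc_vectors[:, None] - prototypes[None], axis=2)
    idx = dists.argmin(axis=1)          # index of nearest prototype per word
    return idx, prototypes[idx]         # concept indices and their coordinates
```

Because distinct words mapping to the same concept collapse onto the same prototype, the sequence is shorter in vocabulary terms and more abstract, which is the point of processing concepts instead of words.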
