Article

Prototype Theory Meets Word Embedding: A Novel Approach for Text Categorization via Granular Computing

Journal

COGNITIVE COMPUTATION
Volume 15, Issue 3, Pages 976-997

Publisher

SPRINGER
DOI: 10.1007/s12559-023-10132-9

Keywords

Conceptual spaces; Granular computing; Text classification; Word Embedding; Conceptual embedding; Long Short Term Memory


This paper presents a novel framework for solving text categorization tasks by combining Conceptual Space Theory, the Granular Computing approach, and Machine Learning. The authors propose a concept-based representation of text and compare the performance of neural embedding techniques and LSA in knowledge discovery applications.

How the brain represents and interprets the information coming from the senses has puzzled scientists for decades. The same problems arise, from a different perspective, in automated Pattern Recognition systems. Specifically, solving the various NLP tasks requires an ever better and richer semantic representation of text as a set of features, and researchers continuously propose a plethora of techniques for embedding text in algebraic spaces. These spaces are well suited to be conceived as conceptual spaces in light of Gärdenfors's Conceptual Space theory, which, within the Cognitive Science paradigm, seeks a geometrization of thought bridging the gap between an associative lower level and a symbolic higher level, in which information is organized and processed and where inductive reasoning is appropriate.

Granular Computing offers a toolbox for granulating text so that it can be represented by entities more abstract than words, yielding a good hierarchical representation of the text embedded in an algebraic space and driving Machine Learning applications, specifically text mining tasks.

In this paper, Conceptual Space Theory, the Granular Computing approach, and Machine Learning are bound into a novel common framework for solving text categorization tasks with both standard classifiers suited to fixed-size vectors and a Recurrent Neural Network (RNN), an LSTM, able to deal with sequences. Instead of working with word vectors, the algorithms process more abstract entities (concepts): in a first approach, patterns are obtained by constructing a symbolic histogram from a suitable set of information granules, representing a document as a distribution of concepts. For the RNN case, as a further novelty, a text is represented as a random walk over prototypes within the conceptual space synthesized by a suitable text-embedding procedure.

A comparison of the performance and a critical discussion are offered for both a neural embedding technique and the well-known LSA, showing how the conceptual level also leads to Knowledge Discovery applications.
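The symbolic-histogram idea described in the abstract can be illustrated with a minimal sketch: each word embedding in a document is assigned to its nearest concept prototype, and the document becomes the normalized count vector over prototypes. This is only an assumed, simplified reading of the representation (the function name, the toy embeddings, and the given centroids are hypothetical; in the paper the granules come from the conceptual-space synthesis itself):

```python
import numpy as np

def symbolic_histogram(doc_vectors, prototypes):
    """Represent a document as a distribution over concept prototypes.

    doc_vectors: (n_words, d) word embeddings of one document.
    prototypes:  (k, d) concept centroids (information granules).
    Returns a length-k histogram normalized to sum to 1.
    """
    # Squared Euclidean distance from every word vector to every prototype
    dists = ((doc_vectors[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    nearest = dists.argmin(axis=1)  # index of the closest concept per word
    hist = np.bincount(nearest, minlength=len(prototypes)).astype(float)
    return hist / hist.sum()        # document as a distribution of concepts

# Toy example: 2-D "embeddings" and two hypothetical concept prototypes
protos = np.array([[0.0, 0.0], [10.0, 10.0]])
doc = np.array([[0.1, -0.2], [9.8, 10.1], [0.3, 0.2], [0.0, 0.1]])
print(symbolic_histogram(doc, protos))  # → [0.75 0.25]
```

Such fixed-size concept distributions can then feed the standard classifiers mentioned above, while the sequence of `nearest` indices itself is the kind of prototype walk an LSTM could consume.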

