Proceedings Paper

Topic Discovery via Latent Space Clustering of Pretrained Language Model Representations

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3485447.3512034

Keywords

Topic Discovery; Pretrained Language Models; Clustering

Funding

  1. US DARPA KAIROS Program [FA8750-19-2-1004]
  2. INCAS Program [HR001121C0165]
  3. National Science Foundation [IIS-19-56151, IIS-17-41317, IIS 17-04532]
  4. Molecule Maker Lab Institute: An AI Research Institutes program - NSF [2019897]
  5. Google PhD Fellowship
  6. US DARPA SocialSim Program [W911NF-17-C-0099]

Abstract

This paper proposes a topic discovery method built on pretrained language models (PLMs): PLM representations are fed into a joint latent space learning and clustering framework. The model effectively exploits the representation power of PLMs for topic discovery and generates more coherent and diverse topics than strong topic-model baselines.
Topic models have long been the predominant tools for automatic topic discovery from text corpora. Despite their effectiveness, topic models suffer from several limitations, including the inability to model word-order information in documents, the difficulty of incorporating external linguistic knowledge, and the lack of inference methods that are both accurate and efficient for approximating the intractable posterior. Recently, pretrained language models (PLMs) have brought striking performance improvements to a wide variety of tasks thanks to their superior representations of text. Interestingly, no standard approach has emerged for deploying PLMs as better alternatives to topic models for topic discovery. In this paper, we begin by analyzing the challenges of using PLM representations for topic discovery, and then propose a joint latent space learning and clustering framework built upon PLM embeddings. In the latent space, topic-word and document-topic distributions are jointly modeled so that the discovered topics can be interpreted through coherent and distinctive terms while also serving as meaningful summaries of the documents. Our model effectively leverages the strong representation power and rich linguistic features of PLMs for topic discovery, and is conceptually simpler than topic models. On two benchmark datasets from different domains, our model generates significantly more coherent and diverse topics than strong topic-model baselines and yields better topic-wise document representations, under both automatic and human evaluations.
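
To make the core idea concrete, the sketch below clusters PLM embeddings of documents and labels each cluster with its frequent terms. This is a minimal illustrative baseline, not the paper's model, which learns the latent space and the cluster structure jointly; the encoder choice ("all-MiniLM-L6-v2"), the toy corpus, and the hyperparameters here are assumptions chosen purely for demonstration.

```python
# Minimal sketch: topic discovery as clustering of PLM embeddings.
# NOTE: an illustrative baseline only, not the paper's joint latent
# space learning and clustering framework; model name, corpus, and
# hyperparameters are assumptions for demonstration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer
from sentence_transformers import SentenceTransformer

docs = [
    "the team won the championship after a dramatic overtime game",
    "the striker scored twice and the coach praised the defense",
    "fans packed the stadium to watch the playoff match",
    "the central bank raised interest rates to curb inflation",
    "stock markets fell as investors weighed the earnings reports",
    "the company announced quarterly revenue above analyst forecasts",
]

# 1. Embed documents with a pretrained language model encoder.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
doc_emb = encoder.encode(docs)                     # shape: (n_docs, dim)

# 2. Cluster in the embedding space; each cluster plays the role of a topic.
n_topics = 2
km = KMeans(n_clusters=n_topics, n_init=10, random_state=0).fit(doc_emb)

# 3. Interpret each cluster by its most frequent terms, a crude stand-in
#    for the paper's jointly learned topic-word distributions.
vec = CountVectorizer(stop_words="english")
tf = vec.fit_transform(docs).toarray()
vocab = np.array(vec.get_feature_names_out())
for k in range(n_topics):
    counts = tf[km.labels_ == k].sum(axis=0)
    top_terms = vocab[np.argsort(counts)[::-1][:5]]
    print(f"Topic {k}: {', '.join(top_terms)}")
```

In contrast to this two-stage pipeline, the paper's framework learns the latent space together with the topic structure, so topic-word and document-topic distributions constrain each other during training rather than being recovered post hoc as above.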
