Proceedings Paper

Rethinking and Refining the Distinct Metric

Publisher

Association for Computational Linguistics (ACL)

Funding

  1. National Science Foundation for Distinguished Young Scholars [62125604]
  2. NSFC projects [61936010, 61876096]
  3. Guoqiang Institute of Tsinghua University [2019GQG1, 2020GQG0005]

Abstract

The Distinct-n score (Li et al., 2016) is a widely used automatic metric for evaluating diversity in language generation tasks. However, we observed that the original approach for calculating distinct scores has evident biases that tend to assign higher penalties to longer sequences. We refine the calculation of distinct scores by scaling the number of distinct tokens based on their expectations. We provide both empirical and theoretical evidence to show that our method effectively removes the biases existing in the original distinct score. Our experiments show that our proposed metric, Expectation-Adjusted Distinct (EAD), correlates better with human judgment in evaluating response diversity. To foster future research, we provide an example implementation at https://github.com/lsy641/Expectation-Adjusted-Distinct.
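In concrete terms: when C n-grams are drawn uniformly from a vocabulary of size V, the expected number of distinct n-grams is V(1 - ((V - 1)/V)^C), and EAD divides the observed distinct count by this expectation rather than by the raw count C as the original Distinct-n does. The following is a minimal Python sketch of that calculation, not the authors' implementation (the repository above is the reference version); the function name and the toy demonstration with a 1,000-token vocabulary are illustrative assumptions.

    import random


    def expectation_adjusted_distinct(tokens, n, vocab_size):
        """Expectation-Adjusted Distinct (EAD) for one token sequence.

        Divides the observed number of distinct n-grams by its expectation
        under uniform sampling from a vocabulary of size V, instead of by
        the raw n-gram count C as in the original Distinct-n:

            EAD = N_distinct / (V * (1 - ((V - 1) / V) ** C))
        """
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total = len(ngrams)                    # C: total n-gram count
        if total == 0:
            return 0.0
        n_distinct = len(set(ngrams))          # observed distinct n-grams
        v = float(vocab_size)
        expected = v * (1.0 - ((v - 1.0) / v) ** total)  # E[N_distinct]
        return n_distinct / expected


    # Toy demonstration of the length bias: a maximally diverse generator
    # (uniform sampling over a 1,000-token vocabulary) should score the same
    # regardless of output length. The original distinct-1 collapses as the
    # sequence grows, while EAD-1 stays near 1 for both lengths.
    random.seed(0)
    for length in (100, 10_000):
        seq = [random.randrange(1000) for _ in range(length)]
        distinct_1 = len(set(seq)) / len(seq)  # original Distinct-1
        ead_1 = expectation_adjusted_distinct(seq, n=1, vocab_size=1000)
        print(f"length={length:>6}  distinct-1={distinct_1:.3f}  EAD-1={ead_1:.3f}")

On this toy setup, distinct-1 drops from roughly 0.95 to roughly 0.1 as the sequence lengthens, even though the generator's diversity is unchanged, while EAD-1 stays close to 1.0 in both cases.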
