4.3 Article

When the state of the art is counting words

Journal

Assessing Writing
Volume 21, Issue -, Pages 104-111

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.asw.2014.05.001

Keywords

Automated essay scoring; Common Core standard; Essay length; High-stakes assessment; Race-to-the-top; Human raters

The recent article in this journal, "State-of-the-art automated essay scoring: Competition results and future directions from a United States demonstration" by Shermis, ends with the claims: "Automated essay scoring appears to have developed to the point where it can consistently replicate the resolved scores of human raters in high-stakes assessment. While the average performance of vendors does not always match the performance of human raters, the results of the top two to three vendors was consistently good and occasionally exceeded human rating performance." These claims are not supported by the data in the study; indeed, the study's raw data provide clear and irrefutable evidence that automated essay scoring engines grossly and consistently over-privilege essay length in computing student writing scores. The state of the art referred to in the title of the article is, largely, simply counting words. (C) 2014 Elsevier Ltd. All rights reserved.
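The abstract's central empirical claim is that machine scores track essay length. The kind of check this implies can be sketched as a simple correlation between word count and machine-assigned score; the function name, the toy data, and the Python 3.10+ dependency below are illustrative assumptions, not material from the study.

```python
# Minimal sketch (not the study's analysis): how strongly does an
# automated score track raw essay length?
import statistics

def length_score_correlation(essays, scores):
    """Pearson correlation between word counts and machine-assigned scores."""
    lengths = [len(text.split()) for text in essays]
    # statistics.correlation requires Python 3.10 or later.
    return statistics.correlation(lengths, scores)

# Toy data: longer essays receive higher machine scores,
# so the correlation comes out close to 1.
essays = [
    "short answer",
    "a somewhat longer response " * 5,
    "a very long essay " * 20,
]
scores = [1.0, 3.0, 5.0]
print(length_score_correlation(essays, scores))
```

A correlation near 1 on real scoring data would indicate that word count alone explains most of the variance in machine scores, which is the pattern the article reports in the competition's raw data.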

