Article

The effectiveness of machine score-ability ratings in predicting automated scoring performance

Journal

APPLIED MEASUREMENT IN EDUCATION
Volume 31, Issue 3, Pages 215-232

Publisher

ROUTLEDGE JOURNALS, TAYLOR & FRANCIS LTD
DOI: 10.1080/08957347.2018.1464452


This study sought to provide a framework for evaluating the machine score-ability of items using a new score-ability rating scale, and to determine the extent to which the ratings were predictive of observed automated scoring performance. The study listed and described a set of factors thought to influence machine score-ability; these factors informed the score-ability ratings applied by expert raters. Five Reading items, six Science items, and ten Math items were examined. Experts in automated scoring served as reviewers, providing independent ratings of score-ability before engine calibration. Following the rating, engines were calibrated and their performance was evaluated against common industry criteria. Three criteria were then derived from the engine evaluations: the score-ability value on the rating scale implied by the empirical results, the number of industry evaluation criteria met by the engine, and the approval status of the engine based on the number of criteria met. The results indicated that the score-ability ratings were moderately correlated with Science score-ability, weakly correlated with Math score-ability, and not correlated with Reading score-ability.
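The abstract does not specify the exact evaluation rules, but the pipeline it describes (checking engine results against common industry criteria, counting the criteria met, deciding approval, and correlating the outcomes with the expert ratings by subject) can be sketched. The following Python sketch is illustrative only: the thresholds and field names are assumptions loosely modeled on widely cited agreement criteria (e.g., Williamson, Xi, and Breyer, 2012), the all-criteria approval rule is assumed, and Spearman correlation is chosen here because the ratings are ordinal; none of these choices are confirmed by the paper.

    # Illustrative sketch, not the authors' code. Thresholds, field names,
    # and the approval rule are assumptions; see the note above.
    from dataclasses import dataclass
    from scipy.stats import spearmanr

    @dataclass
    class ItemEval:
        item_id: str
        subject: str             # "Reading", "Science", or "Math"
        expert_rating: int       # pre-calibration score-ability rating (assumed ordinal scale)
        qwk_engine_human: float  # quadratic weighted kappa, engine vs. human scores
        qwk_human_human: float   # quadratic weighted kappa between two human raters
        smd: float               # standardized mean difference, engine minus human

    def criteria_met(e: ItemEval) -> int:
        """Count how many illustrative industry criteria the engine meets."""
        checks = [
            e.qwk_engine_human >= 0.70,                      # adequate engine-human agreement
            e.qwk_human_human - e.qwk_engine_human <= 0.10,  # limited degradation vs. human-human
            abs(e.smd) <= 0.15,                              # limited shift in score distribution
        ]
        return sum(checks)

    def approved(e: ItemEval) -> bool:
        """Assumed approval rule: the engine must meet every criterion."""
        return criteria_met(e) == 3

    def rating_vs_performance(evals: list[ItemEval], subject: str) -> float:
        """Spearman correlation between expert ratings and criteria counts for one subject."""
        subset = [e for e in evals if e.subject == subject]
        ratings = [e.expert_rating for e in subset]
        counts = [criteria_met(e) for e in subset]
        rho, _p = spearmanr(ratings, counts)
        return rho

Under this sketch, the study's headline result would correspond to rating_vs_performance returning a moderate positive rho for "Science", a weak one for "Math", and one near zero for "Reading".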

