Journal
EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT
Volume 76, Issue 2, Pages 280-303
Publisher
SAGE PUBLICATIONS INC
DOI: 10.1177/0013164415590022
Keywords
computer-based assessment; automatic coding; automatic short-answer grading; computer-automated scoring
Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated using data collected in the Programme for International Student Assessment (PISA) 2012 in Germany. Free-text responses to 10 items, with n = 41,990 responses in total, were analyzed. We further examined the effect of different methods, parameter values, and sample sizes on the performance of the implemented system. The system reached fair-to-good, and in some cases excellent, agreement with human codings (.458 <= kappa <= .959). In particular, items that are solved by naming specific semantic concepts were coded accurately. The system performed equally well with n >= 1,661 and somewhat poorer, but still acceptably, down to n = 249. Based on our findings, we discuss potential innovations for assessment that are enabled by automatic coding of short text responses.
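The abstract does not specify the authors' exact pipeline, but a baseline system of the kind described (statistical classification of short responses, evaluated against human codings with Cohen's kappa) can be sketched as follows. All function names and the toy data here are hypothetical illustrations, not the published implementation:

```python
# Hypothetical sketch of a baseline short-answer coding pipeline:
# a bag-of-words, nearest-centroid style classifier plus Cohen's kappa
# to quantify agreement between automatic and human codings.
from collections import Counter

def tokenize(text):
    """Lowercase whitespace tokenization -- a deliberately simple baseline."""
    return text.lower().split()

def train_centroids(responses, codes):
    """Aggregate token counts per code category from human-coded responses."""
    centroids = {}
    for text, code in zip(responses, codes):
        centroids.setdefault(code, Counter()).update(tokenize(text))
    return centroids

def predict(centroids, text):
    """Assign the code whose token profile overlaps most with the response."""
    tokens = Counter(tokenize(text))
    def overlap(code):
        return sum(min(n, centroids[code][t]) for t, n in tokens.items())
    return max(centroids, key=overlap)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two code sequences."""
    n = len(a)
    labels = set(a) | set(b)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Toy data: responses coded as correct (1) or incorrect (0) by humans.
train_x = ["the area doubles", "area is doubled", "it stays the same", "no change"]
train_y = [1, 1, 0, 0]
model = train_centroids(train_x, train_y)

new_responses = ["the area is doubled", "it stays the same"]
auto = [predict(model, r) for r in new_responses]
human = [1, 0]
print(auto, cohens_kappa(auto, human))  # -> [1, 0] 1.0
```

A production system would replace the tokenizer and classifier with stronger open-source NLP components, but the evaluation logic (comparing automatic codes against human codes via kappa) follows the same pattern.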