Article

Validity of Scores for a Developmental Writing Scale Based on Automated Scoring

Journal

EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT
Volume 69, Issue 6, Pages 978-993

Publisher

SAGE PUBLICATIONS INC
DOI: 10.1177/0013164409332217

Keywords

automated scoring; writing ability; developmental scale

A developmental writing scale for timed essay-writing performance was created on the basis of automatically computed indicators of writing fluency, word choice, and conventions of standard written English. In a large-scale data collection effort involving a national sample of more than 12,000 students in grades 4, 6, 8, 10, and 12, students wrote (in 30-minute sessions) up to four essays in two modes of writing on topics selected from a pool of 20 topics. Scale scores were created by combining essay indicators in a standard way to compute essay scores that shared the same scoring standards across essay prompts and student grade levels. A series of ancillary analyses and studies was conducted to examine the validity of the scale scores. Cross-classified random effects modeling of the scores confirmed that the particular prompts on which essays were written had little effect on scores. The reliability of the scores was found to be higher than previous reliability estimates for human essay scores. A human scoring experiment confirmed that the developmental sensitivity of scale scores and human scores was similar. A longitudinal study confirmed the expected gains in scores over a 1-year period.
