Article

A comparison of human and computer marking of short free-text student responses

Journal

Computers & Education
Volume 55, Issue 2, Pages 489-499

Publisher

Pergamon-Elsevier Science Ltd
DOI: 10.1016/j.compedu.2010.02.012

Keywords

Authoring tools and methods

Funding

  1. UK Higher Education Funding Council via Centre for Open Learning of Mathematics, Computing, Science and Technology (COLMSCT)

Abstract

The computer marking of short-answer free-text responses of around a sentence in length has been found to be at least as good as that of six human markers. The marking accuracy of three separate computerised systems has been compared: one system (Intelligent Assessment Technologies FreeText Author) is based on computational linguistics, whilst two (Regular Expressions and OpenMark) are based on the algorithmic manipulation of keywords. In all three cases, the development of high-quality response matching has been achieved by the use of real student responses to developmental versions of the questions, and FreeText Author and OpenMark have been found to produce marking of broadly similar accuracy. Reasons for lack of accuracy in human marking and in each of the computer systems are discussed. © 2010 Elsevier Ltd. All rights reserved.
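
As a purely illustrative sketch of the keyword-matching style that the Regular Expressions and OpenMark approaches exemplify (the question, pattern, and responses below are hypothetical and are not drawn from the study), a single accept rule might look like this in Python:

    import re

    # Hypothetical accept rule for a short free-text question such as
    # "Why does a metal spoon feel colder than a wooden one?"
    # Illustrative only -- not a pattern used in the study.
    ACCEPT = re.compile(
        r"\bmetal\b.*\bconduct\w*\b.*\bheat\b",
        re.IGNORECASE,
    )

    def mark(response: str) -> bool:
        """Award the mark if the response matches the accept pattern."""
        return bool(ACCEPT.search(response))

    print(mark("The metal conducts heat away from your hand"))  # True
    print(mark("Because wood is warmer than metal"))            # False

In practice such rules are refined iteratively against real student responses, as the abstract describes, with further patterns added to accept paraphrases and to reject common misconceptions.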


