Journal
INTERACTIVE LEARNING ENVIRONMENTS
Volume 30, Issue 2, Pages 215-228
Publisher
ROUTLEDGE JOURNALS, TAYLOR & FRANCIS LTD
DOI: 10.1080/10494820.2019.1651743
Keywords
Answer grading; educational assessment; automatic evaluation; student writing assessment; text similarity
Funding
- Science and Engineering Research Board (SERB) [YSS/2015/001948]
Abstract
Assessment plays an important role in education. Recently proposed machine learning-based systems for answer grading demand large amounts of training data, which are not available in many application areas, and creating sufficient training data is costly and time-consuming. As a result, automatic long answer grading remains a challenge. In this paper, we propose a practical system for grading long or descriptive answers in a small-class scenario. The system uses an expert-written reference answer and computes the similarity of a student answer with it. For the similarity computation, it uses several word-level and sentence-level similarity measures, including TF-IDF, Latent Semantic Indexing, Latent Dirichlet Allocation, the TextRank summarizer, and the neural sentence embedding-based InferSent. The student answer might contain certain facts that do not occur in the model answer; the system identifies such sentences, examines their relevance and correctness, and assigns extra marks accordingly. In the final phase, the system applies a clustering-based confidence analysis. The system is tested on an assessment of school-level social science answer books. The experimental results demonstrate that the system evaluates the answer books with high accuracy: the best root mean square error value is 0.59 on a 0-5 scoring scale.
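The core idea of the described approach, scoring a student answer by its similarity to an expert reference answer, can be illustrated with one of the word-level measures the abstract names, TF-IDF with cosine similarity. The sketch below is not the authors' implementation; it assumes scikit-learn's `TfidfVectorizer` and a simple linear scaling of similarity to the paper's 0-5 mark range, and the `grade_answer` function name is hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def grade_answer(reference: str, student: str, max_marks: float = 5.0) -> float:
    """Score a student answer by TF-IDF cosine similarity to a reference answer.

    This is only the word-level TF-IDF component; the paper combines it with
    sentence-level measures (LSI, LDA, TextRank, InferSent) and extra-fact
    analysis, which are not reproduced here.
    """
    # Fit a shared TF-IDF vocabulary over both answers, then compare them.
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform([reference, student])
    similarity = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    # Linearly scale similarity in [0, 1] to the scoring range.
    return round(similarity * max_marks, 2)
```

An identical answer receives full marks under this scheme, while an answer sharing no content words with the reference receives zero; intermediate overlap yields a proportional score.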