Article

Revolutionizing subjective assessments: A three-pronged comprehensive approach with NLP and deep learning

Journal

EXPERT SYSTEMS WITH APPLICATIONS
Volume 239

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.eswa.2023.122470

Keywords

Deep Neural Networks (DNN); Natural Language Processing (NLP); Question answering; Yet Another Keyword Extractor (YAKE); KeyBERT; Simple Contrastive Sentence Embedding Framework (SimCSE); CamemBERT; Sentence Bidirectional Encoder Representations from Transformers (SBERT)

The enhanced answer evaluation system is an automated tool that utilizes Natural Language Processing (NLP) and deep learning techniques to evaluate the accuracy of subjective answers. It leverages various criteria such as keywords, similarity, and named entity recognition to provide precise evaluation scores. The system demonstrates remarkable performance in evaluating long answers and sets a new standard in the field.
The enhanced answer evaluation system is an automated tool that evaluates subjective answers in contexts such as educational assessments, surveys, and feedback forms. The proposed system leverages Natural Language Processing (NLP) and deep learning techniques to analyze subjective answers and produce precise evaluation scores. Students' answers are evaluated against criteria such as keywords, context, relevance, coherence, and similarity. This paper introduces an architecture for a subjective answer evaluator built on three main aspects: keyword detection, a similarity matrix, and the presence of named entities. The system combines the three aspects into a final score, providing a standardized mechanism for scoring a given user answer against a particular model answer without human prejudice. This research aims to move beyond traditional methodologies that rely predominantly on keyword or keyphrase scoring (text-based similarity) to determine an answer's final score without examining its technical intricacies. Semantic similarity (vector-based) instead employs vector representations of the data for score calculation, which requires partitioning the data into multiple vectors for a comprehensive analysis. While text similarity is effective for short answers, its efficacy diminishes as answer length increases. This study therefore emphasizes the critical role of similarity scoring and Named Entity Recognition (NER) scoring in evaluating longer responses, using the stsb-en-main dataset (short answers) and a custom dataset of 190 records. The system excels through a three-pronged approach: keyword scoring, semantic similarity, and NER scoring with models such as Yet Another Keyword Extractor (YAKE), SimCSE, and CamemBERT. These three independent components combine to produce strong results, establishing a new standard in the field. This enhancement led to a Root Mean Square Error (RMSE) of 0.031 (optimized error rate) and accuracy above 71% for the comprehensive system, surpassing existing works, which typically reach accuracies between 40% and 60% for long answers.
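The abstract describes a pipeline that combines keyword scoring, semantic similarity, and NER scoring into one final score. Below is a minimal, hypothetical sketch of that idea, not the authors' implementation: it uses YAKE for keyword extraction, a generic sentence-transformers encoder as a stand-in for the SimCSE/SBERT-style similarity component, spaCy NER in place of the CamemBERT-based NER component, and illustrative weights for the final combination. All function names and weights here are assumptions.

```python
# Hypothetical sketch of a three-pronged subjective-answer scorer.
# Stand-ins: YAKE (keywords), sentence-transformers "all-MiniLM-L6-v2"
# (semantic similarity), spaCy "en_core_web_sm" (NER). Weights are illustrative.
import yake
import spacy
from sentence_transformers import SentenceTransformer, util

kw_extractor = yake.KeywordExtractor(lan="en", n=1, top=10)
encoder = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in sentence encoder
ner = spacy.load("en_core_web_sm")                  # stand-in NER model


def keyword_score(model_answer: str, student_answer: str) -> float:
    """Fraction of model-answer keywords that appear in the student answer."""
    keywords = [kw for kw, _ in kw_extractor.extract_keywords(model_answer)]
    if not keywords:
        return 0.0
    hits = sum(kw.lower() in student_answer.lower() for kw in keywords)
    return hits / len(keywords)


def similarity_score(model_answer: str, student_answer: str) -> float:
    """Cosine similarity between sentence embeddings, clipped to [0, 1]."""
    emb = encoder.encode([model_answer, student_answer], convert_to_tensor=True)
    return max(0.0, float(util.cos_sim(emb[0], emb[1])))


def ner_score(model_answer: str, student_answer: str) -> float:
    """Fraction of named entities in the model answer found in the student answer."""
    expected = {e.text.lower() for e in ner(model_answer).ents}
    if not expected:
        return 1.0  # nothing to match; do not penalize the student
    found = {e.text.lower() for e in ner(student_answer).ents}
    return len(expected & found) / len(expected)


def final_score(model_answer: str, student_answer: str,
                weights=(0.3, 0.5, 0.2)) -> float:
    """Weighted combination of the three components (weights are assumed)."""
    scores = (keyword_score(model_answer, student_answer),
              similarity_score(model_answer, student_answer),
              ner_score(model_answer, student_answer))
    return sum(w * s for w, s in zip(weights, scores))
```

As a usage example, final_score(model_answer, student_answer) would return a value in [0, 1] that could then be compared against human grades with a metric such as RMSE, as the paper reports; the actual weighting and score normalization used by the authors are not specified in the abstract.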
