Article

Human versus Computer Partner in the Paired Oral Discussion Test

Journal

APPLIED LINGUISTICS
Volume 42, Issue 5, Pages 924-944

Publisher

OXFORD UNIV PRESS
DOI: 10.1093/applin/amaa067

Keywords

-

Funding

  1. ETS under a Committee of Examiners and the Test of English as a Foreign Language research grant


Abstract

A challenge of large-scale oral communication assessments is to feasibly assess a broad construct that includes interactional competence. One possible approach to addressing this challenge is to use a spoken dialog system (SDS), with the computer acting as a peer to elicit a ratable speech sample. With this aim, an SDS was built, and four trained human raters assessed the discourse elicited from 40 test takers who completed a paired oral task with both a human and a computer partner. The test takers were evaluated using the analytic operational oral communication rating scales, which covered interactional competence, fluency, pronunciation, and grammar/vocabulary. Repeated-measures ANOVA indicated that fluency, pronunciation, and grammar and vocabulary were scored similarly across the two conditions, while interactional competence was scored substantially higher in the human partner condition. A g-study indicated that the computer partner was more reliable in assessing interactional competence, and rater questionnaire and interview data suggested the computer provided a more standardized assessment. Conversely, raters generally favored the human partner, in part because of its perceived authenticity and naturalness.

