Article

Biases in research evaluation: Inflated assessment, oversight, or error-type weighting?

Journal

JOURNAL OF EXPERIMENTAL SOCIAL PSYCHOLOGY
卷 43, 期 4, 页码 633-640

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jesp.2006.06.001

Keywords

importance of research topic; research evaluation; type II errors; error-weighting; leniency; methodological soundness; publishability


Reviewers of research are more lenient when evaluating studies on important topics [Wilson, T. D., Depaulo, B. M., Mook, D. G., & Klaaren, K. J. (1993). Scientists' evaluations of research: the biasing effects of the importance of the topic. Psychological Science, 4(5), 323-325]. Three experiments (N = 145, 36, and 91 psychologists) investigated different explanations of leniency, including inflation of assessments (applying a heuristic associating importance with quality), oversight (failing to detect flaws), and error-weighting (prioritizing Type II error avoidance). In Experiment 1, psychologists evaluated the publishability and rigor of studies in a 2 (topic importance) x 2 (accuracy motivation) x 2 (research domain) design. Experiment 2 featured an exact replication of Wilson et al. and suggested that report length moderated the effects of importance on perceived rigor, but not on publishability. In Experiment 3, a manipulation of error-weighting replaced the manipulation of domain (Experiment 1). Results favored error-weighting, rather than inflation or oversight. Perceived seriousness of Type II error (in Experiments 1 and 3) and the error-weighting manipulation (in Experiment 3) predicted study evaluations. (C) 2006 Elsevier Inc. All rights reserved.

