Article

Biases in research evaluation: Inflated assessment, oversight, or error-type weighting?

Journal

JOURNAL OF EXPERIMENTAL SOCIAL PSYCHOLOGY
Volume 43, Issue 4, Pages 633-640

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jesp.2006.06.001

Keywords

importance of research topic; research evaluation; type II errors; error-weighting; leniency; methodological soundness; publishability

Reviewers of research are more lenient when evaluating studies on important topics [Wilson, T. D., DePaulo, B. M., Mook, D. G., & Klaaren, K. J. (1993). Scientists' evaluations of research: The biasing effects of the importance of the topic. Psychological Science, 4(5), 323-325]. Three experiments (N = 145, 36, and 91 psychologists) investigated different explanations of this leniency, including inflation of assessments (applying a heuristic that associates importance with quality), oversight (failing to detect flaws), and error-weighting (prioritizing the avoidance of Type II errors). In Experiment 1, psychologists evaluated the publishability and rigor of studies in a 2 (topic importance) × 2 (accuracy motivation) × 2 (research domain) design. Experiment 2 featured an exact replication of Wilson et al. and suggested that report length moderated the effects of importance on perceived rigor, but not on publishability. In Experiment 3, a manipulation of error-weighting replaced the manipulation of domain used in Experiment 1. Results favored error-weighting rather than inflation or oversight: perceived seriousness of Type II error (in Experiments 1 and 3) and the error-weighting manipulation (in Experiment 3) predicted study evaluations. (C) 2006 Elsevier Inc. All rights reserved.
