Article

Evaluating the accuracy and calibration of expert predictions under uncertainty: predicting the outcomes of ecological research

Journal

DIVERSITY AND DISTRIBUTIONS
Volume 18, Issue 8, Pages 782-794

Publisher

WILEY
DOI: 10.1111/j.1472-4642.2012.00884.x

Keywords

Calibration; expert elicitation; expert knowledge; overconfidence; subjective judgment; uncertainty

Abstract

Aim: Expert knowledge routinely informs ecological research and decision-making. Its reliability is often questioned, but it is rarely subjected to empirical testing and validation. We investigate the ability of experts to make quantitative predictions of variables for which the answers are known.

Location: Global.

Methods: Experts in four ecological subfields were asked to predict the outcomes of scientific studies, presented as unpublished (in press) journal articles, based on information in each article's introduction and methods sections. For comparison, estimates were also elicited from students for one case study. For each variable, participants gave a lower bound, an upper bound, a best guess, and their level of confidence that the observed value would lie within their stated interval. Responses were assessed for (1) accuracy: the degree to which predictions corresponded with the observed experimental results; (2) informativeness: the precision of the uncertainty bounds; and (3) calibration: the degree to which the uncertainty bounds contained the truth as often as specified.

Results: Expert responses were overconfident: the 80% confidence intervals they specified captured the truth only 49-65% of the time. In contrast, student 80% intervals captured the truth 76% of the time, displaying close to perfect calibration. Best estimates from experts were on average more accurate than those from students, although the best students outperformed the worst experts. No consistent relationships were observed between performance and years of experience, publication record, or self-assessed expertise.

Main conclusions: Experts possess valuable knowledge but may require training to communicate this knowledge accurately. Expert status is a poor guide to good performance. In the absence of training and information on past performance, simple averages of expert responses provide a robust counter to individual variation in performance.
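For readers unfamiliar with these scoring notions, the short Python sketch below illustrates the calibration check and the simple averaging the abstract describes. It is a minimal illustration with hypothetical numbers and function names, not the authors' code or data.

# A minimal sketch of the calibration check described above. The numbers,
# names, and structure are illustrative assumptions, not the authors' code.

def hit_rate(intervals, truths):
    """Fraction of elicited intervals that contain the observed value."""
    hits = sum(lo <= t <= hi for (lo, hi), t in zip(intervals, truths))
    return hits / len(truths)

def pooled_estimate(best_guesses):
    """Simple unweighted average of expert best guesses, as recommended."""
    return sum(best_guesses) / len(best_guesses)

# Hypothetical 80% intervals one participant gave for four known quantities.
intervals = [(10.0, 20.0), (5.0, 15.0), (12.0, 18.0), (0.0, 4.0)]
truths = [16.0, 16.0, 25.0, 3.0]

print(hit_rate(intervals, truths))         # 0.5: well below 0.8, i.e. overconfident
print(pooled_estimate([14.0, 9.0, 16.0]))  # 13.0: pooled best guess from 3 experts

A hit rate well below the stated confidence level (here 0.5 versus 0.8) is the signature of overconfidence reported in the Results; averaging best guesses across experts is the pooling strategy recommended in the Main conclusions.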
