Article

Heterogeneity of rules in Bayesian reasoning: A toolbox analysis

Journal

COGNITIVE PSYCHOLOGY
Volume 143

Publisher

Academic Press Inc. (Elsevier Science)
DOI: 10.1016/j.cogpsych.2023.101564

Keywords

Bayesian reasoning; Cognitive modeling; Model competition; Process heterogeneity; Simple rules; Interindividual differences


This study tested two competing theoretical views of how people infer the Bayesian posterior probability: single-process theories and toolbox theories. Analyses of data from a large number of participants yielded little support for the single-process theories tested, although simulations showed that one single-process model, the weighing-and-adding model, best fit the aggregate data and achieved the best out-of-sample prediction. Testing five non-Bayesian rules plus Bayes's rule, a toolbox was found to capture 64% of the inferences.
How do people infer the Bayesian posterior probability from stated base rate, hit rate, and false alarm rate? This question is not only of theoretical relevance but also of practical relevance in medical and legal settings. We test two competing theoretical views: single-process theories versus toolbox theories. Single-process theories assume that a single process explains people's inferences and have indeed been observed to fit people's inferences well. Examples are Bayes's rule, the representativeness heuristic, and a weighing-and-adding model. Their assumed process homogeneity implies unimodal response distributions. Toolbox theories, in contrast, assume process heterogeneity, implying multimodal response distributions. After analyzing response distributions in studies with laypeople and professionals, we find little support for the single-process theories tested. Using simulations, we find that a single process, the weighing-and-adding model, nevertheless can best fit the aggregate data and, surprisingly, also achieve the best out-of-sample prediction even though it fails to predict any single respondent's inferences. To identify the potential toolbox of rules, we test how well candidate rules predict a set of over 10,000 inferences (culled from the literature) from 4,188 participants and 106 different Bayesian tasks. A toolbox of five non-Bayesian rules plus Bayes's rule captures 64% of inferences. Finally, we validate the Five-Plus toolbox in three experiments that measure response times, self-reports, and strategy use. The most important conclusion from these analyses is that the fitting of single-process theories to aggregate data risks misidentifying the cognitive process. Antidotes to that risk are careful analyses of process and rule heterogeneity across people.
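The central computation in the abstract, inferring the posterior probability from a stated base rate, hit rate, and false alarm rate, can be sketched as follows. Bayes's rule itself is standard; the two "simple rules" shown alongside it are hypothetical stand-ins for the kind of non-Bayesian shortcuts such toolbox analyses consider, not the paper's actual five rules.

```python
def bayes_posterior(base_rate, hit_rate, false_alarm_rate):
    """Bayes's rule: P(hypothesis | positive cue) from the three stated quantities."""
    true_pos = base_rate * hit_rate
    false_pos = (1 - base_rate) * false_alarm_rate
    return true_pos / (true_pos + false_pos)

# Illustrative simple (non-Bayesian) rules; the paper's five rules are
# specified in the article itself, so these serve only as examples of
# the genre.
def report_hit_rate(base_rate, hit_rate, false_alarm_rate):
    # Responds with the hit rate, ignoring the base rate (base-rate neglect).
    return hit_rate

def joint_occurrence(base_rate, hit_rate, false_alarm_rate):
    # Multiplies base rate and hit rate, ignoring false alarms.
    return base_rate * hit_rate

# Classic mammography task: 1% base rate, 80% hit rate, 9.6% false alarm rate.
print(round(bayes_posterior(0.01, 0.80, 0.096), 3))  # 0.078
```

Note how the normative answer (about 8%) differs sharply from the hit-rate response (80%), which is why multimodal response distributions can reveal which rule a given respondent used.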

