4.3 Article

Variations in Reliability and Validity Do Not Influence Judge, Attorney, and Mock Juror Decisions About Psychological Expert Evidence

Journal

Law and Human Behavior
Volume 43, Issue 6, Pages 542-557

Publisher

American Psychological Association
DOI: 10.1037/lhb0000345

Keywords

cross-examination; decision making; jurors; expert evidence

Funding

  1. National Science Foundation (NSF) [SES-1155251]

Abstract

Objective: We tested whether the reliability and validity of the psychological testing underlying an expert's opinion influenced judgments made by judges, attorneys, and mock jurors. Hypotheses: We predicted that participants would judge the expert's evidence more positively when it had high validity and high reliability. Method: In Experiment 1, judges (N = 111) and attorneys (N = 95) read a summary of case facts and a proffer of expert testimony on the intelligence of a litigant. The psychological testing varied in scientific quality: there was (a) blind administration (i.e., the psychologist had no expectation about the test result) of a highly reliable test, (b) nonblind administration (i.e., the psychologist did have an expectation about the test result) of a highly reliable test, or (c) blind administration of a test with low reliability. In a trial simulation (Experiment 2), we varied the scientific quality of the intelligence test and whether the cross-examination addressed the scientific quality of the test. Results: The variations in scientific quality did not influence judges' admissibility decisions or their ratings of scientific quality, nor did they influence attorneys' decisions about whether to move to exclude the evidence. Attorneys' ratings of scientific quality were sensitive to variations in reliability but not to the testing conditions. Scientifically informed cross-examinations did not help mock jurors (N = 192) evaluate the validity or the reliability of a psychological test. Conclusion: Cross-examination was an ineffective method for educating jurors about problems associated with nonblind testing and low reliability, which highlights the importance of training judges to evaluate the quality of expert evidence.
