Article

Normalizing the Use of Single-Item Measures: Validation of the Single-Item Compendium for Organizational Psychology

Journal

Journal of Business and Psychology
Volume 37, Issue 4, Pages 639-673

Publisher

Springer
DOI: 10.1007/s10869-022-09813-3

Keywords

Single-item measure; Validity; Reliability; Organizational sciences

Abstract

The application of single-item measures has the potential to help applied researchers address conceptual, methodological, and empirical challenges. Based on a large-scale evidence-based approach, we empirically examined the degree to which various constructs in the organizational sciences can be reliably and validly assessed with a single item. In Study 1, across 91 selected constructs, 71.4% of the single-item measures demonstrated strong if not very strong definitional correspondence (as a measure of content validity). In Study 2, based on a heterogeneous sample of working adults, we demonstrate that the majority of the single-item measures examined raised little to no comprehension or usability concerns. Study 3 provides evidence for the reliability of the proposed single-item measures based on test-retest reliabilities across three temporal conditions (1 day, 2 weeks, 1 month). In Study 4, we examined issues of construct and criterion validity using a multi-trait, multi-method approach. Collectively, 75 of the 91 focal measures demonstrated very good or extensive validity, evidencing moderate to high content validity, no usability concerns, moderate to high test-retest reliability, and extensive criterion validity. Finally, in Study 5, we empirically examined the argument that only conceptually narrow constructs can be reliably and validly assessed with single-item measures. Results suggest that there is no relationship between subject matter expert evaluations of construct breadth and the reliability and validity evidence collected across the first four studies. Beyond providing an off-the-shelf compendium of validated single-item measures, we abstract our validation steps, providing a roadmap to replicate and build upon. Limitations and future directions are discussed.
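
To make the reliability evidence concrete: the test-retest analysis described for Study 3 reduces to correlating responses to the same single item across two administrations separated by a fixed interval. The sketch below is a minimal, hypothetical illustration of that calculation in Python with pandas; the example data, column names, and the use of a Pearson correlation as the retest estimate are assumptions for illustration, not code or data from the paper.

```python
# Hypothetical sketch: test-retest reliability of a single-item measure,
# estimated as the correlation between two administrations of the same item.
import pandas as pd

# Invented example data: one row per respondent, one column per wave
# (e.g., the same job-satisfaction item asked two weeks apart).
responses = pd.DataFrame({
    "job_satisfaction_t1": [4, 5, 3, 2, 5, 4, 3, 4],
    "job_satisfaction_t2": [4, 5, 3, 3, 5, 4, 2, 4],
})

# Test-retest reliability: the item correlated with itself over time.
retest_r = responses["job_satisfaction_t1"].corr(responses["job_satisfaction_t2"])
print(f"Test-retest reliability (Pearson r) = {retest_r:.2f}")
```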

