Journal
INTERNATIONAL JOURNAL OF RESEARCH IN MARKETING
Volume 39, Issue 1, Pages 117-133
Publisher
ELSEVIER
DOI: 10.1016/j.ijresmar.2021.05.001
Keywords
Online samples; Sampling; MTurk; Screening; Data quality; Integrity
Summary
This study proposes a framework, organized by sampling goal and methodology, for screening and evaluating the quality of online samples. A comparison across sample sources suggests that screening is needed for every online sample, particularly for MTurk samples.
Abstract
Increasingly, marketing and consumer researchers rely on online data collection services. While actively-managed data collection services directly assist with the sampling process, minimally-managed data collection services, such as Amazon's Mechanical Turk (MTurk), leave researchers solely responsible for recruiting, screening, cleaning, and evaluating responses. The research reported here proposes a 2 × 2 framework based on sampling goal and methodology for screening and evaluating the quality of online samples. By sampling goal, screeners can be categorized as selection, which involves matching the sample with the targeted population, or as accuracy, which involves ensuring that participants are appropriately attentive. By methodology, screeners can be categorized as direct, which screen individual responses, or as statistical, which provide quantitative signals of low quality. Multiple screeners for each of the four categories are compared across three MTurk samples, two actively-managed data collection samples (Qualtrics and Dynata), and a student sample. The results suggest the need for screening in every online sample, particularly for the MTurk samples, which have the fewest supplier-provided filters. Recommendations are provided for researchers and journal reviewers to promote greater transparency with respect to sampling practices. (c) 2021 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
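The 2 × 2 framework described in the abstract can be pictured as a small lookup table keyed by sampling goal (selection vs. accuracy) and methodology (direct vs. statistical). The sketch below is illustrative only; the screener examples in each cell are hypothetical placeholders, not the specific screeners evaluated in the study.

```python
# Minimal sketch of the 2 x 2 screener framework (goal x methodology).
# Cell contents are hypothetical examples, not the paper's screeners.
FRAMEWORK = {
    ("selection", "direct"):      ["demographic qualification question"],
    ("selection", "statistical"): ["IP/geolocation consistency flag"],
    ("accuracy",  "direct"):      ["instructed attention check"],
    ("accuracy",  "statistical"): ["response-time outlier flag"],
}

def screeners_for(goal: str, method: str) -> list:
    """Return example screeners for one cell of the framework."""
    if goal not in ("selection", "accuracy"):
        raise ValueError("goal must be 'selection' or 'accuracy'")
    if method not in ("direct", "statistical"):
        raise ValueError("method must be 'direct' or 'statistical'")
    return FRAMEWORK[(goal, method)]
```

For example, `screeners_for("accuracy", "direct")` returns the attention-check cell, matching the abstract's description of direct screeners that check individual responses for attentiveness.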