4.4 Article

STATISTICAL PARADISES AND PARADOXES IN BIG DATA (I): LAW OF LARGE POPULATIONS, BIG DATA PARADOX, AND THE 2016 US PRESIDENTIAL ELECTION

Journal

ANNALS OF APPLIED STATISTICS
Volume 12, Issue 2, Pages 685-726

Publisher

Institute of Mathematical Statistics
DOI: 10.1214/18-AOAS1161SF

Keywords

Bias-variance tradeoff; data defect correlation; data defect index (d.d.i.); data confidentiality and privacy; data quality-quantity tradeoff; Euler identity; Monte Carlo and Quasi Monte Carlo (MCQMC); non-response bias

Funding

  1. US National Science Foundation
  2. John Templeton Foundation

Abstract

Statisticians are increasingly posed with thought-provoking and even paradoxical questions, challenging our qualifications for entering the statistical paradises created by Big Data. By developing measures for data quality, this article suggests a framework to address such a question: which should I trust more, a 1% survey with a 60% response rate or a self-reported administrative dataset covering 80% of the population? A five-element, Euler-formula-like identity shows that for any dataset of size \(n\), probabilistic or not, the difference between the sample average \(\bar{X}_n\) and the population average \(\bar{X}_N\) is the product of three terms: (1) a data quality measure, \(\rho_{R,X}\), the correlation between \(X_j\) and the response/recording indicator \(R_j\); (2) a data quantity measure, \(\sqrt{(N-n)/n}\), where \(N\) is the population size; and (3) a problem difficulty measure, \(\sigma_X\), the standard deviation of \(X\). This decomposition provides multiple insights: (I) probabilistic sampling ensures high data quality by controlling \(\rho_{R,X}\) at the level of \(N^{-1/2}\); (II) when we lose this control, the impact of \(N\) is no longer canceled by \(\rho_{R,X}\), leading to a Law of Large Populations (LLP), that is, our estimation error, relative to the benchmark rate \(1/\sqrt{n}\), increases with \(\sqrt{N}\); (III) the bigness of such Big Data (for population inferences) should be measured by the relative size \(f = n/N\), not the absolute size \(n\); and (IV) when combining data sources for population inferences, relatively tiny but higher-quality sources should be given far more weight than their sizes suggest. Estimates obtained from the Cooperative Congressional Election Study (CCES) of the 2016 US presidential election suggest \(\rho_{R,X} \approx -0.005\) for self-reporting to vote for Donald Trump. Because of the LLP, this seemingly minuscule data defect correlation implies that the simple sample proportion of the self-reported voting preference for Trump from 1% of US eligible voters, that is, \(n \approx 2{,}300{,}000\), has the same mean squared error as the corresponding sample proportion from a genuine simple random sample of size \(n \approx 400\), a 99.98% reduction in sample size (and hence in our confidence). The CCES data demonstrate the LLP vividly: on average, the larger a state's voter population, the further the actual Trump vote share falls from the usual 95% confidence interval based on the sample proportion. This should remind us that, without taking data quality into account, population inferences with Big Data are subject to a Big Data Paradox: the more the data, the surer we fool ourselves.
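
A minimal worked check of the effective-sample-size claim above, assuming only the decomposition stated in the abstract and treating \(\rho_{R,X}\) as fixed; the simple-random-sample variance approximation \(\sigma_X^2 / n_{\mathrm{eff}}\), and the symbol \(n_{\mathrm{eff}}\) itself, are notation for this sketch rather than the paper's exact treatment. Writing the identity with \(f = n/N\):

\[
\bar{X}_n - \bar{X}_N
= \rho_{R,X} \times \sqrt{\frac{N-n}{n}} \times \sigma_X
= \rho_{R,X}\,\sqrt{\frac{1-f}{f}}\;\sigma_X .
\]

Squaring this error and equating it to \(\sigma_X^2 / n_{\mathrm{eff}}\), the approximate variance of a simple random sample of size \(n_{\mathrm{eff}}\) (ignoring its finite-population correction), gives

\[
n_{\mathrm{eff}} \approx \frac{f}{1-f}\cdot\frac{1}{\rho_{R,X}^{2}}
= \frac{0.01}{0.99}\cdot\frac{1}{(0.005)^{2}} \approx 404,
\]

consistent with the \(n \approx 400\) quoted above and with the stated reduction, since \(1 - 404/2{,}300{,}000 \approx 99.98\%\). Note that \(\sigma_X\) cancels in the comparison, so the equivalence holds regardless of the difficulty of the underlying estimation problem.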

Authors

Xiao-Li Meng
