Article

The Generalization Ability of Online Algorithms for Dependent Data

Journal

IEEE TRANSACTIONS ON INFORMATION THEORY
Volume 59, Issue 1, Pages 573-587

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIT.2012.2212414

Keywords

Dependent observations; generalization bounds; linear prediction; online learning; statistical learning theory

Funding

  1. Microsoft Research Ph.D. Fellowship
  2. Google Ph.D. Fellowship
  3. Department of Defense through a National Defense Science and Engineering Graduate Fellowship

Abstract

We study the generalization performance of online learning algorithms trained on samples coming from a dependent source of data. We show that the generalization error of any stable online algorithm concentrates around its regret (an easily computable statistic of the online performance of the algorithm) when the underlying ergodic process is beta- or phi-mixing. We show high-probability error bounds assuming the loss function is convex, and we also establish sharp convergence rates and deviation bounds for strongly convex losses and several linear prediction problems such as linear and logistic regression, least-squares SVM, and boosting on dependent data. In addition, our results have straightforward applications to stochastic optimization with dependent data, and our analysis requires only martingale convergence arguments; we need not rely on more powerful statistical tools such as empirical process theory.
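The abstract's central claim is an online-to-batch conversion for dependent data: for a stable online learner run on a beta- or phi-mixing stream, the generalization error concentrates around the regret, a quantity computable from the run itself. The following Python sketch illustrates that pipeline under assumptions of our own (it is not the paper's algorithm or analysis): an AR(1) chain stands in for the mixing source, the learner is online gradient descent on squared loss, and the averaged iterate serves as the batch predictor. The helpers `ar1_stream` and `online_gd` are hypothetical names introduced here for illustration.

```python
# Minimal sketch: online gradient descent on a dependent (AR(1)) stream,
# returning the averaged iterate (online-to-batch) and the average online loss.
import numpy as np

rng = np.random.default_rng(0)

def ar1_stream(T, d, rho=0.5):
    """Yield T dependent (x, y) pairs: x_t follows a stationary AR(1) chain
    (a geometrically mixing process), y_t is a noisy linear response."""
    w_true = rng.normal(size=d)
    x = rng.normal(size=d)
    for _ in range(T):
        x = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=d)
        y = x @ w_true + 0.1 * rng.normal()
        yield x, y

def online_gd(T=2000, d=5, eta0=0.5):
    """Online gradient descent on squared loss; returns the averaged
    iterate and the average online loss accumulated along the run."""
    w = np.zeros(d)
    w_sum = np.zeros(d)
    cum_loss = 0.0
    for t, (x, y) in enumerate(ar1_stream(T, d), start=1):
        cum_loss += 0.5 * (x @ w - y) ** 2
        grad = (x @ w - y) * x            # gradient of the squared loss
        w -= (eta0 / np.sqrt(t)) * grad   # decaying step size ~ 1/sqrt(t)
        w_sum += w
    return w_sum / T, cum_loss / T

w_avg, avg_loss = online_gd()
print("average online loss:", avg_loss)
```

The running average `avg_loss` is the kind of statistic the paper's bounds are built from: up to the regret term and mixing-dependent deviations, it controls the expected loss of the averaged iterate `w_avg` even though the samples are not independent.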

