Journal
IEEE TRANSACTIONS ON INFORMATION THEORY
Volume 46, Issue 2, Pages 431-445
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/18.825803
Keywords
Jeffreys' prior; minimax redundancy; minimax regret; universal coding; universal prediction
For problems of data compression, gambling, and prediction of individual sequences $x_1,\ldots,x_n$, the following questions arise. Given a target family of probability mass functions $p(x_1,\ldots,x_n \mid \theta)$, how do we choose a probability mass function $q(x_1,\ldots,x_n)$ so that it approximately minimizes the maximum regret $$\max_{x_1,\ldots,x_n}\Bigl(\log\frac{1}{q(x_1,\ldots,x_n)}-\log\frac{1}{p(x_1,\ldots,x_n\mid\hat\theta)}\Bigr),$$ where $\hat\theta$ is the maximum-likelihood estimate, and so that it achieves the best constant $C$ in the asymptotics of the minimax regret, which is of the form $(d/2)\log(n/2\pi)+C+o(1)$, where $d$ is the parameter dimension? Are there easily implementable strategies $q$ that achieve those asymptotics? And how does the solution to the worst case sequence problem relate to the solution of the corresponding expectation version $$\min_q \max_\theta E_\theta\Bigl(\log\frac{1}{q(x_1,\ldots,x_n)}-\log\frac{1}{p(x_1,\ldots,x_n\mid\theta)}\Bigr)?$$ In the discrete memoryless case, with a given alphabet of size $m$, the Bayes procedure with the Dirichlet$(1/2,\ldots,1/2)$ prior is asymptotically maximin. Simple modifications of it are shown to be asymptotically minimax. The best constant is $C_m=\log\bigl(\Gamma(1/2)^m/\Gamma(m/2)\bigr)$, which agrees with the logarithm of the integral of the square root of the determinant of the Fisher information. Moreover, our asymptotically optimal strategies for the worst case problem are also asymptotically optimal for the expectation version. Analogous conclusions are given for the case of prediction, gambling, and compression when, for each observation, one has access to side information from an alphabet of size $k$. In this setting the minimax regret is shown to be $\frac{k(m-1)}{2}\log\frac{n}{2\pi k}+kC_m+o(1)$.
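The quantities in the abstract are easy to compute directly. The sketch below (my illustration, not code from the paper) evaluates the pointwise regret of the Bayes mixture under the Dirichlet(1/2,...,1/2) prior, using its standard sequential predictive form, together with the asymptotic minimax value $(d/2)\log(n/2\pi)+C_m$; the function names are mine.

```python
import math
from collections import Counter

def dirichlet_half_log_prob(seq, m):
    # Sequential form of the Bayes mixture under the Dirichlet(1/2,...,1/2)
    # (Jeffreys) prior: q(x_{t+1} = j | x^t) = (n_j + 1/2) / (t + m/2).
    counts = [0] * m
    logq = 0.0
    for t, x in enumerate(seq):
        logq += math.log((counts[x] + 0.5) / (t + m / 2))
        counts[x] += 1
    return logq

def ml_log_prob(seq, m):
    # log of the maximized likelihood p(x^n | theta-hat), theta-hat_j = n_j / n.
    n = len(seq)
    return sum(c * math.log(c / n) for c in Counter(seq).values())

def regret(seq, m):
    # Pointwise regret (in nats) of the mixture against the ML plug-in.
    return ml_log_prob(seq, m) - dirichlet_half_log_prob(seq, m)

def asymptotic_minimax_regret(n, m):
    # (d/2) log(n / 2 pi) + C_m, with d = m - 1 and
    # C_m = log(Gamma(1/2)^m / Gamma(m/2)), in nats.
    d = m - 1
    C_m = m * math.lgamma(0.5) - math.lgamma(m / 2)
    return (d / 2) * math.log(n / (2 * math.pi)) + C_m
```

On this example the gap the paper addresses is visible: for $m=2$, $n=100$ the asymptotic minimax value is about 2.53 nats, while the unmodified mixture incurs roughly 2.88 nats of regret on a constant sequence (the boundary of the parameter space), which is why simple modifications of the prior are needed to be asymptotically minimax rather than merely maximin.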